The Defence ELSA Lab

Published on: 29 March 2022

To improve the efficiency, effectiveness and security of the Dutch armed forces, AI technology is needed to deal with new challenges in both peacekeeping and warfare.

We must be able to deal with misleading or false information, cope with adversaries who themselves use artificial intelligence (AI), and process large amounts of data. AI therefore has a crucial role to play. The introduction of new technology in defence offers opportunities, but also creates risks: introducing AI raises ethical, legal and social issues. How can AI-driven systems remain under human control? How can control and human dignity be maintained when machines are given autonomy? How do we stay within all the applicable legal frameworks?

If AI is to be applied responsibly, these and other aspects must constantly be taken into account in the design, implementation and maintenance of AI-based systems. The core question for the Defence ELSA Lab is therefore how the Ministry of Defence can stay strategically competitive and at the forefront of military innovation, while also respecting ethical, legal and social values.

What social challenges in AI are being worked on?

At the moment it is unclear which AI-based systems are acceptable from the ethical, legal and social points of view, where the limits lie, and under which conditions these systems would become acceptable. This can lead to overuse of AI (for example, deploying too many systems in too many situations without considering the possible consequences) and/or underuse (for example, not using AI at all due to insufficient knowledge or fear of the consequences). In the defence domain, both overuse and underuse of AI technology can have unforeseen consequences for protecting the freedom and safety of society.

What types of solutions are offered to the end user?

The Defence ELSA Lab is developing a future-proof, independent advisory ecosystem for the responsible use of AI in the defence domain. This will make clear under which circumstances an AI-based application is or is not acceptable. The lab will develop a context-dependent method that takes ethical, legal and social aspects into account in the analysis, design and evaluation of military AI-based applications. To raise awareness of the ELSA concept and put it into practice, the Defence ELSA Lab's approach rests on two elements:

  1. Developing educational programmes, information and advice for military personnel, the media and politicians.
  2. Developing methods to ensure that ELSA factors become an integral part of drawing up requirements and product specifications, and of the procurement and use of equipment.

What AI methods or techniques are used in the research?

Existing approaches, such as value-sensitive design, explainable algorithms and human-machine teaming, are being extended and adapted specifically to the defence context using realistic case studies.

Additionally, there will be a study of how society and defence personnel experience the use of military AI, how those perceptions develop over time, and how they differ across situations. The Defence ELSA Lab will follow global technological, military and social developments that influence perceptions of the use of AI systems, so that the lessons learned can be applied in the lab.

Are we collaborating with other sectors?

The partners in the Defence ELSA Lab consortium have extensive experience with military and other AI systems, with a focus on legal (Leiden University, Asser Institute), ethical (Delft University of Technology), social (The Hague University of Applied Sciences) and technical (TNO, NLDA) aspects. The consortium also has relationships with various ministries, including Defence and Foreign Affairs. During the project, industry will be invited to join the advisory council of the Defence ELSA Lab; both SMEs and large companies in the defence industry will be involved.

What is the ultimate success this ELSA Lab can achieve?

The Defence ELSA Lab will become an independent advisory body that makes recommendations on the ELSA aspects of using military systems with AI-based technology. Because ELSA aspects are highly context-dependent and the technology is constantly developing, standardised answers will not suffice; instead, an advisory authority is needed that gives tailored advice.

Awarded the NL AIC Label

The Netherlands AI Coalition has developed the NL AIC Label to underline its vision for the development and application of AI in the Netherlands. The NL AIC Label formally recognises that an activity is in line with the aims and strategic goals of the NL AIC and/or acknowledges the quality of that activity. The NL AIC congratulates the Defence ELSA Lab on receiving the Label.

Awarding ELSA Lab funding

The Netherlands Organisation for Scientific Research (NWO) and the Netherlands AI Coalition launched the NWA call ‘Human-centric AI for an inclusive society: Towards an ecosystem of trust’. After assessment by an independent NWO evaluation committee, five projects were approved at the end of January 2022, including this ELSA Lab. The NL AIC congratulates all the parties involved in obtaining this funding and wishes them every success in the further development of the lab.

More information?

The following parties are taking part in this ELSA Lab: ASSER Institute (Berenice Boutin), Haagse Hogeschool (Elif Kiesow-Cortez), NLDA (Roy Lindelauf), Leiden University (Bart Custers, Eduard Fosch Villaronga), TNO (Jurriaan van Diggelen, Martijn van Emmerik, Marlijn Heijnen), Delft University of Technology (Jeroen van den Hoven, Mark Neerincx, Filippo Santoni de Sio).

For more information about the Defence ELSA Lab, please contact Jurriaan van Diggelen or Martijn van Emmerik. If you would like more information about Human-Centric AI and the ELSA concept, please go to this page.