All the signs are in place for ELSA Labs and human centred AI

Published on: 20 May 2021

Artificial Intelligence (AI) is a powerful technology that can affect people’s lives both positively and negatively. As the Netherlands AI Coalition (NL AIC), we are therefore committed to the responsible use of AI that benefits everyone. And that is certainly not just words: we work together with companies, government, knowledge institutions, social organisations and citizens to develop human centred AI that can help us tackle social problems.

Self-learning algorithms can make the work of many people a lot easier. Practitioners from many different professions, including medical specialists, teachers, lawyers, policy makers and journalists, are already making good use of them. AI is thus playing a major role in our society, and its applications are becoming more and more advanced.

Critical questions

AI is arguably the most influential technology of our time. However, there are still critical questions to be asked. How do we prevent AI systems from drawing conclusions based on data that do not reflect reality, and which may therefore lead to bias? How can we check whether these systems do exactly what we want them to do? And how can we make AI and humans work better together?

Manifesto for human centred AI

“These questions all lead to the same answer: we really need to start focusing on the development of human centred AI now,” concludes Peter-Paul Verbeek, Professor of Philosophy of Man and Technology at the University of Twente and affiliated with the Human Centric AI working group. He has clear ideas about the important role AI can play in society, as demonstrated last year by his contribution to the ‘Manifest for Human Artificial Intelligence’ (Dutch), in which we, as NL AIC, call for sensible and responsible AI applications.

From paper to practice

“Currently, a lot of attention is being paid to the ethical side of AI, and the European Commission recently announced strict regulations for AI applications. That is very good, and it is very much needed, especially for algorithms that can have a big impact on people’s lives,” Peter-Paul believes. “However, the focus is now very much on what we do not want, while it is also important to indicate what we do want. AI can help us with complex social issues and bring us much good. There are already quite a few good intentions on paper. It is now time to really get to work on using human centred AI in a responsible way.”

An equal partnership

“Exactly,” agrees Emile Aarts, Professor of Computer Science at Tilburg University, specialising in Data Science and Artificial Intelligence. Since the end of 2019, he has been leading the NL AIC’s Human Centric AI working group. “When developing an AI solution, you have to take the perspective of all parties involved and make sure that they all benefit from the solution. It is important to cooperate not only with companies, government, knowledge institutions and civil society organisations, but also with citizens. And it really has to be an equal partnership, with a focus on societal challenges.”

Developing AI solutions in ELSA Labs

Such a collaborative approach, in which citizens actively participate, may sound logical, but in current AI practice it rarely happens. Usually, citizens are only involved at the very end of a development process, when there is often no room left for major adjustments. This has to change. For the NL AIC, this was the reason to start working with the concept of ELSA Labs at the end of 2019. ELSA stands for Ethical, Legal and Societal Aspects.

Their purpose? To ensure that companies, government, knowledge institutions, social organisations and citizens can jointly develop responsible applications of AI: solutions to both social and business problems, with a focus on fairness, justice, safety and above all trustworthiness.

Transparent process

“When we presented our plans for ELSA Labs at the end of 2019, many people immediately thought it was a good idea,” Emile recalls. “It is partly thanks to their enthusiasm and commitment that a start was made on putting the concept into practice. Following initial kick-start funding from the Ministry of Economic Affairs and Climate Policy, structural resources will be made available from the National Growth Fund for these specific cooperation projects later this year. We are, of course, very pleased about this. As far as I know, the Netherlands is one of the frontrunners with this approach, and on such a large scale too.”

“The distribution of the available funds for the ELSA Labs will soon take place in a transparent manner, in close cooperation with NWO,” Emile emphasises. “It is an open registration process, about which more information will follow later this summer. In addition to participants in the NL AIC and parties who have already spontaneously contacted us with the ambition of starting an ELSA Lab, other partnerships can also apply. However, they must meet seven criteria.”

Exactly which seven criteria those are can be read here (Dutch), but Emile attempts a brief summary anyway. “First of all, they must be public-private partnerships, including knowledge institutions, social organisations and citizens. The consortia must focus on socially relevant goals and develop appropriate solutions for them. These should be scalable solutions based on data and AI. And, of course, the AI must be human centred. That human centred approach also applies to the way in which such a consortium works and communicates.”

International ambitions

“It’s nice to see that with the ELSA Labs, the Netherlands is very consciously choosing a different route than America and China,” says Peter-Paul. “So not economically driven AI, nor AI where the emphasis is on state control, but a people-oriented approach. We see that as the third way, which is also very relevant economically. And we think that not only the Netherlands, but all of Europe can play an important role in that development.”

Given the international ambitions, it is convenient that both Emile and Peter-Paul were recently admitted as AI experts to the international expert community of the Global Partnership on Artificial Intelligence (GPAI). In addition, Peter-Paul is chairman of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST). “The Netherlands’ approach is not only about taking responsibility, but also about how we can use AI to pursue our ideals. And I notice that our perspective is well received internationally and that many experts from other countries find it an inspiring story.”

Call for cooperation

If, after reading the seven criteria, you are interested in starting an ELSA Lab, please contact the coordinator of the Human Centric AI working group. We as NL AIC are also happy to act as matchmakers: it often starts with bringing the right parties together.