The vital importance of human-centric AI: a renewed call

Published on: 5 September 2023

Artificial Intelligence (AI) is undoubtedly the most influential system technology of our time. AI has a significant impact on how people interact with digital systems and services. How should our society position itself in the face of this digital transformation? How do we strike a balance between freedom of action and the functional value of AI? How do we safeguard public values, our fundamental rights, and democratic freedoms? And how do we ensure that everyone can benefit from the positive effects of AI and avoid negative consequences?

Impact of AI on people and society

AI offers numerous opportunities and can make a positive contribution to society and the economy. Consider, for example, the accelerated development of medicines for rare diseases, the facilitation of sustainability efforts, and the support of professionals, freeing up more time for hands-on care in healthcare and for teaching in the classroom. However, AI also brings challenges.

It is of utmost importance that AI applications are developed without bias, to prevent people from being excluded or judged unfairly on a large scale. In addition, can people still understand how AI systems work and take responsibility for decisions and actions based on AI? The degree to which citizens are involved in the implementation of AI in our society determines how successfully we can coexist with this technology.

Responsible use of AI

In the “Manifest human centric AI” (only available in Dutch), a renewed call for meaningful and responsible applications, the Netherlands AI Coalition advocates a full commitment to the development of human-centric AI with a learning approach.

“It is important to ensure that people working with AI technology understand the concerns and limitations. This means not only establishing strong ethical and legal frameworks but also supporting them with instructions, courses, and training,” says Irvette Tempelman, Chair of the Human Centric AI working group. “As Europe, we are lagging behind the US and China in AI development. It is therefore crucial to realise that we have limited room to slam on the brakes, as is sometimes suggested. We can, however, fully commit to responsible, human-centric AI.”

Development and application of AI

How can we design and use AI systems to reap their benefits responsibly? How can we determine if these systems use data responsibly? And how can we integrate and use them in society effectively? The importance of these questions is widely recognised. The significant positive and negative impact of AI on people and society brings a special responsibility for both the designers and developers of AI systems and their users.

ELSA concept

The ELSA concept provides a solid foundation for the development and application of human-centric AI solutions. ELSA stands for Ethical, Legal, and Societal Aspects. By developing human-centric AI within clear ethical and legal frameworks, supported by helpful regulation in a European context, and by actively involving stakeholders, we can keep the socio-economic effects of AI manageable and build trust in how it operates.

This pragmatic approach supports not only ethical discussions about AI but also the ethical shaping of AI in practice. It is not just about the algorithms, but also about how, where, and by whom they are applied.

Interested?

The Netherlands AI Coalition is involved in various initiatives for the responsible and meaningful development of human-centric AI. If you would like to learn more about the coalition’s vision or the ELSA concept, please visit the Human Centric AI working group’s page or contact Náhani Oosterwijk for more information.
