The Scientific Council for Government Policy regards artificial intelligence (AI) as a new systems technology: a technological advance that will change society significantly. This is entirely in line with the vision and goals of the Netherlands AI Coalition (NL AIC), which has focused on the social impact of AI from the very beginning, specifically within the Human-Centric AI building block. Three main thrusts have been defined there, in line with the ELSA concept (Ethical, Legal and Societal Aspects). The previously published manifesto ‘Human-Centric Artificial Intelligence: A call for sensible and responsible applications’ (in Dutch) made the following statement about ethics:
“All too often, ethics gets associated with some kind of external assessment, like a medical ethics committee assessing whether or not a particular study should be carried out. The ethical considerations of artificial intelligence, though, have to connect directly with the technology. It is about keeping the interplay between AI, data, people and society on the right track, looking for sensible and responsible applications through a participatory process. This is done by identifying the impact that real-world AI systems and their underlying data can have on people and society, by listing the values that are affected and by letting those values drive the algorithmic design, the embedding in society and the utilisation of those systems.
This means that the development of human-centric and socially responsible AI is not limited to merely formulating general principles: the ethical side starts out from real-world practice. What technology is used in what environment? Which actors is that relevant for? What are their values? And how can you set up AI ethically within our society, or improve how ethical that setup already is? This engenders ethics that have been internalised instead of being imposed from outside. Moreover, it gives ethics a role that is not only negative but also positive: in addition to formulating restrictions on what we do not want, it focuses on defining the conditions and potential actions for achieving what we do want.” It is ethics as a designing force.
Platform for supporting ethical aspects
The line of thought explained above has led to the establishment of a platform for ethics and AI, with the mission of shaping and using AI applications in Dutch society in an ethically responsible way. The platform brings together numerous parties: companies, governmental authorities, centres of expertise and civil society organisations. Learning from each other is essential to accelerating the development of human-centric AI. The platform is called PACE, standing for Participative And Constructive Ethics.
If this topic intrigues you, visit the page about the PACE platform or read the position paper ‘Ethics and AI in practice: Working together and learning together’ (in Dutch; an English version will follow). You might also enjoy reading the interview with Daniël Tijink and Sophie Kuijt about starting up the platform.