This presents us with the major challenge of ensuring that AI is applied within our society in the right ways. The Human-Centric AI working group has therefore made ethics one of its main thrusts. This article is based on an interview with Daniël Tijink (MT member and ethics project leader at ECP) and Sophie Kuijt (CTO at IBM Benelux), who are responsible for the ethics focal area within the core team.
“We have to break away from the idea that there’s tech on one side and humans on the other,” says Daniël. “When the focus is on technological development, we tend to start thinking pretty quickly in doomsday scenarios. That’s partly because of all the films in which robots and other advanced technologies turn against mankind, of course, but in reality there’s always a bit of back-and-forth between people and technology.”
Technology is changing us
“Humans affect technological development and then the technology in turn influences people,” he continues. “You saw that in the rise and further development of mobile phones, for instance. We obviously made them, but they have also shaped us. The way we communicate, find the right routes, go on dates, read the news… those are all things that the advent of mobile telephony has changed radically. And AI is a technology that will lead to even more dramatic changes.”
“Given the continuous interplay between humans and technology, it’s tricky to draw up a set of general ethical rules that apply everywhere at all times. If an application is to be ethical, you in fact have to look not only at its design but also at the context that the tech is going to be applied in and the way the users handle it. That’s why we’re also making the case for a working method where all the parties involved take part in sessions that let them jointly address the ethical aspects of a specific AI project. So that’s not just the AI developers and their clients but also, for instance, the professionals who will have to work with it, the policymakers, and the users and the public at large who have to deal with that technology. These have to be constructive sessions where those present don’t just give their opinion of an application of AI, but where they go looking together for a way of guiding that technology – based on the participants’ various relevant value systems – to ensure that using such an AI solution is genuinely ethically responsible.”
Professor Peter-Paul Verbeek was the one who introduced the term ‘guiding ethics’ and the associated way of thinking. Together with this philosopher of technology, ECP – and numerous other parties involved – developed the ‘approach to guiding ethics’, which is being used not only for AI applications but also in other new technological domains. Each time, the work has centred on specific projects, with a heavy emphasis on participation.
A real-world example
Earlier this year, the NL AIC organised a session on guiding ethics within the Health and Care application domain. On the initiative of the Dutch Patients’ Federation, the care professionals, technical staff and patients who were present discussed U-Prevent, an interactive calculation model that helps specialists determine which medicines for cardiovascular conditions work best for which patients. The discussion centred on the desired value sets. What effects might U-Prevent have on the quality of care, the autonomy of the patient and the quality of life? Simply talking together soon led to interesting insights for the participants and a better understanding of each other.
“We’re already taking the next step from within the Human-Centric AI working group,” says Daniël, “as we’re setting up a platform with the mission of shaping and using AI applications in Dutch society in an ethically responsible way.” The platform links numerous different parties together, such as companies, governmental authorities, centres of expertise and social organisations. That process will also get an additional shot in the arm when the first ELSA Labs start up officially. The platform is going to play an important supporting role in ethics there too. The idea is that it will be a learning community that keeps developing further thanks to wide-ranging cooperation and real-world cases.
“The platform’s focus will include practical wisdom, a term borrowed from Aristotle. A great deal of ethical knowledge is present ‘at the sharp end’: the point where it is used in real-world applications by the various parties and individuals who are involved. We have noticed that the guiding ethics approach is very effective at drawing out the requisite knowledge for an application and then also coming up with actions that can be taken. In the new platform, we will be analysing those results. Which values often play a role in AI? Where are the tensions? What positive and negative effects are we seeing? And above all, what suggested solutions are coming to the fore and which of them are genuinely suitable for further elaboration? New questions then arise, almost unbidden. Who will be doing that further elaboration, for instance? And how will they make sure, step by step, that an AI application can actually be deployed ethically? This will all help create an understanding of the backing for an AI application, the commitment to it and its social impact.”
Integrating ethics into organisations and networks
Integrating ethics in a natural way into organisations and networks will be a second key activity of the new platform. Not through the traditional approach of a committee passing judgement but through the broad involvement of many parties throughout the process, determining the desired direction together. Currently, there are many parties already working seriously at this and sharing their knowledge – not only public-sector parties such as municipalities and provinces but also organisations such as the Dutch railways, the Volksbank and IBM.
In the run-up to starting the platform, Sophie Kuijt has already talked to numerous parties about integrating ethics into organisations and networks. She brings valuable practical experience to the table from IBM Netherlands and Benelux. “At IBM, we’re going further than just developing AI technology,” explains Sophie. “We offer customers consultancy in that field, for instance, and can also handle the implementation of various AI applications for them. Throughout all this, guiding the ethical questions is now at the heart of how we work with AI. Not just during the development phase either – it’s throughout the application’s whole lifecycle. We approach this from three angles: design, technology and governance. The core values of an organisation are translated into principles relating to data ownership, explainability, transparency, fairness and robustness. We then use those three perspectives – design, technology and governance – to embed the principles in the AI application in practice. And when an AI application is live, you can monitor it and make adjustments as necessary. Those experiences then let you look at it again in a new way. That acquired knowledge about the ethics is then shared within the Human-Centric AI working group, with participants from the NL AIC and from the platform.”
Platform for supporting ethical aspects
On top of the focus on practical wisdom and ethics in organisations and networks, the working group wants to use the platform to highlight at least two other activities: the development of various powerful supporting methods, and national and international exchange. Sophie and Daniël also see that thematic communities are developing within the platform, getting down to work on these main topics and also helping each other improve. “Which we encourage, of course. We want to become a place where we cooperate with lots of other parties and so make progress, step by step, in ethical AI in the Netherlands. We think that our platform can be a starting point for support for ethics in that area, as the most practical way of raising the subject to the next level in the Netherlands. And who knows? Maybe our approach will be picked up internationally too. That would be nice, because it’s a very important issue that’s coming to the fore globally.”
If you would like to know more about Human-Centric AI or be part of the ethics platform, let Daniël or Sophie know through Edwin Borst.