Only by thinking outside the box will we find solutions to ethical dilemmas concerning AI

Published on: 14 April 2021

An interview with Stefan Leijnen, lecturer in Artificial Intelligence, about the wonderful world of AI. The technology is only in its infancy and already has so much impact: what does the future hold?

Stefan Leijnen

In addition to being a lecturer at the Utrecht University of Applied Sciences, Stefan Leijnen is also an advisor to the Dutch AI Coalition. Today he works with the technology every day, but he started his career in AI as a student of psychology. ‘What surprises you most about AI?’ That question is the starting point for our conversation. “I see AI not only as technology, but also as a kind of dream. The workings of the brain are one of the greatest mysteries that we are aware of in science,” he states. It is a bit like the question: ‘What exists beyond the boundaries of the universe?’ The comparison between AI and the human brain has always interested Leijnen. “The exciting thing is that nowadays we are able not just to study the brain; we can now test our theories in practice by recreating it.”

With each advance in technology, the AI that we are capable of developing also changes. First there were computers, now there is machine learning, and in the future new technologies are bound to be added. Think, for example, of new types of hardware such as quantum computing. Leijnen: “Every time we ask ourselves the question: ‘Is this it then? Is this the real thing?’ But we still have a long way to go; perhaps we’ll never be done.”

Control

When it comes to present-day AI, two trends can be distinguished. On the one hand, there is the programmed model based on logic. Such a system can do intelligent things, but it does so according to preconceived rules and tasks. On the other hand, there is machine learning. There, the system no longer works only with models that we humans have figured out from A to Z; it can learn new rules and tasks on its own. “The upshot of this is that models become more complex than we humans are able to comprehend. As a result, we also tend to have less control over them. However, new techniques that combine these two trends are promising,” he adds.
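For readers who want to see this contrast in code, the sketch below is a minimal, hypothetical illustration and is not taken from the interview: a hand-written spam rule stands for the programmed, logic-based trend, while a tiny scikit-learn classifier stands for machine learning. The spam scenario, the example data and the function name are invented purely for illustration.

```python
# Minimal sketch (illustrative only) of the two trends described above.
from sklearn.linear_model import LogisticRegression

# Trend 1: programmed logic. A human writes the rule from A to Z.
def is_spam_rule_based(message: str) -> bool:
    banned_words = {"lottery", "prize", "winner"}  # rules chosen by a person
    return any(word in message.lower() for word in banned_words)

# Trend 2: machine learning. The system derives its own rule from examples.
# Toy features are word counts; labels are 1 = spam, 0 = not spam.
training_features = [[3, 0], [0, 2], [4, 1], [0, 0]]
training_labels = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(training_features, training_labels)

# The learned decision boundary is not written down by any human,
# which is one reason such models can become harder for us to inspect.
print(is_spam_rule_based("You are the lottery winner!"))  # True
print(model.predict([[2, 0]]))                            # e.g. [1]
```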

Unique place

We can compare this kind of machine learning system to the human brain. We don’t know exactly what goes on inside it, and it works differently for everyone. So what differentiates us from such a system? “It’s very important to ask what makes us autonomous. Aren’t we really just machines?” Leijnen lets a silence fall. “Suppose our autonomy lies in the fact that we can feel, that we have empathy and that we can be creative. These are properties that machines do not have, at least at the moment. If it also turns out to be impossible for AI to learn these traits, then we hold a unique place in the world.”

The future will have to reveal whether AI will ever adopt these human traits. Moreover, there is a difference between learning rules to appear empathetic and being genuinely empathetic. Leijnen believes this is an important aspect in healthcare. Service robots can relieve the pressure on healthcare, and it is clear that people can develop empathy for a robot. But how do you build a bond with a system that has no self and therefore nothing to lose? “An emotional bond can grow out of sharing your vulnerability. A robot doesn’t have that vulnerability at the moment. Robots only learn to come across as empathetic. Suppose someone builds a bond with this kind of robot and then finds out that there’s no feeling behind that mask? That would be pretty awful.”

New solutions

These are important questions to think about, according to Leijnen. In order to manage the development of AI, we need to look beyond existing ideology and create a new vision. “We have managed to deal reasonably well with the major problems of the 20th century, such as preventing wars and creating economic prosperity. The 21st century brings new problems, such as climate change, preventing inequality and the social impact of digitalization. We will not tackle these problems with solutions from the 20th century. We really need to come up with new solutions.”

Leijnen calls for good cooperation between governments, companies, knowledge institutions and citizens in order to come up with new ideas. “We shouldn’t just be led by the market. What we can build technologically is not always what we want to build,” he argues. But looking solely to the government to establish ethical frameworks is not the answer either. “That’s too shortsighted. The government cannot do that on its own,” he continues.

Leijnen: “An ethical problem is often presented as a dilemma. It then seems as if you have to choose between two aspects that don’t go well together. But that’s not how it is. I think we should see it as an invitation to look for a third way, a totally new approach.” Taking the time and attention to work on designs and to map out the long-term consequences of technology could help with this. Designers can build bridges between different fields and genuinely look for out-of-the-box solutions.

Art

Artists also have an important role to play in this process. They can inspire the general public and foster public discourse. One example is the choreographer David Middendorp, who creates dance performances featuring dancers and drones. “For a lot of people, technology feels like something that just happens to you, that you have no control over. Scientists discover new things, engineers develop new technology and then all of a sudden there are new products,” says Leijnen. “That’s the image, but it simply doesn’t work that way. I see scientists and engineers struggling with ethical questions. We need to talk about that with each other, on newspaper opinion pages, as part of cultural performances, in the living room and in the House of Representatives.”

Europe

A vision for all of Europe is extremely important in this regard, Leijnen believes. “We need to move together towards a shared framework from which we can start developing technology. Are we taking part in an arms race with other countries and developing technology so we can stay competitive? Do we want to protect our data and become self-sufficient? What kind of society do we want to become and how can we use technology to make that happen? Europe is looking for ways in which we can hold on to shared ideals without incurring any economic damage,” he says. “That does not mean that we should always be the good guys, or toss our ideals overboard; that dichotomy is naive. We need to put our energy into a more nuanced narrative and invest in technology that makes our society better.”

This article was written by Linda Bak and is part of a series of articles produced in cooperation with Innovation Origins. The purpose of the series is to stimulate the dialogue about AI. If you have any questions, please do not hesitate to contact Edwin Borst.
