“Don’t step on the brakes: commit fully to human centric AI”

Published on: 27 February 2023

Irvette Tempelman has been paying close attention to developments in AI for the past thirty years. She studied law, specialising in information technology law, and then worked for years as a privacy and IT rights lawyer, while also teaching at eLaw (the Centre for Law and Digital Technologies at Leiden University). Since 2020, she has been the policy secretary at VNO-NCW/MKB-NL (the body for employers and SMEs), where she works on the regulations associated with artificial intelligence (AI). That is a good starting position for becoming chair of the Human Centric AI Working Group of the Netherlands AI Coalition – a role she recently took on. An introduction.

How do you see the current state of AI?

“The arrival of ChatGPT has turned into a real hype in just a short time,” says Tempelman. “A lot of teachers are panicking because their pupils are suddenly using AI to write homework assignments. And it makes society at large think about the opportunities and risks of AI. Yet this is a development that has been coming for a long time. It’s not the case that people are only now thinking about how AI technology can be used in a human centric way. That’s been a topic of discussion for as long as I’ve been involved in the field. It’s not as if we’re just now suddenly thinking: oh right, the people!”

What do you think is the key challenge?

“It’s important to make sure that people working with AI technology know which points need consideration and what the limitations are. That means not only having proper ethical and legal frameworks, but also the support for them – instruction, courses and training. When it comes to the development of AI, the last thing you want to do is step on the brakes, as has regularly been suggested recently. Europe is already behind the US and China in terms of AI. That’s why it’s important to commit fully to human centric AI. That means we need to become a leader in all areas that are part of the ELSA concept – in other words, with a clear focus on the ethical, legal and social aspects.”

What role do you see for the ELSA labs in this?

“They offer some great opportunities! Really nice steps have already been taken in that regard under the guidance of Emile Aarts, the former chair of the Human Centric AI Working Group. The community currently has 22 ELSA labs. They focus on social issues and – thanks to the wide range of participants – they manage to bridge the gap between research and practice. What particularly appeals to me is that they use a learning approach in doing so. That provides new insights that help the Netherlands AI Coalition drive the social dialogue about AI, as it is currently doing through the AI Parade. Thanks to the ELSA labs and the underlying concept, we have a solid base to work from. Our challenge now is to extend it further – initially in the Netherlands, but I understand that there’s a lot of interest from abroad in the ELSA labs’ setup and working method. It’s all very promising.”

What is your goal as the new chair?

“If you look at who is currently using AI technology, you can see that there are still major differences between the various sectors. That makes me think it’s hard for some parties to get started. It would be nice if our working group were able not just to help those parties create human centric AI but also to increase awareness. That way we can identify even better where any bottlenecks are, so that governmental bodies or other groups, for example, can find solutions to them. I think it’s very important to be a connector, and being a working group gives us a unique position in that regard. There are plenty of opportunities to achieve even more amazing things. I’m looking forward to it!”