AI: building a realistic perspective for social issues together

Published on: 13 December 2021

Demystification

An important first step in recognising the potential of AI is demystifying it. The term ‘AI’ conjures up grand ideas for many people, but it is ‘just’ a tool that creates added value under certain conditions. “You hear all kinds of stories about AI,” says Barbara Visser from the Public Services working group. “It is either the solution to everything or a massive threat.” The reality is more nuanced.

Jim Stolze, the founder of Aigency, believes fervently in spreading this realistic perspective. He briefly discusses the National AI Course, in which people learn what AI is all about – and what it is not. “What people call AI is actually a variety of things,” Jim explains. “In knowledge-driven AI, people input the rules that are then executed by the computer. There is also machine learning. We build algorithms that go looking for rules within a dataset; that’s more about statistics than programming.”
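
A minimal sketch can make Jim’s distinction concrete. The example below is an illustration for this article, not material from the National AI Course; the risk-score task and the threshold are made up, and scikit-learn’s DecisionTreeClassifier stands in for machine learning in general. In the first approach a person writes the rule; in the second, an algorithm estimates it from data.

```python
from sklearn.tree import DecisionTreeClassifier

# 1. Knowledge-driven AI: a person writes the rule; the computer executes it.
def flag_for_review(risk_score: float) -> bool:
    return risk_score > 0.7  # threshold authored by a domain expert

# 2. Machine learning: an algorithm goes looking for the rule in a dataset.
X = [[0.2], [0.4], [0.6], [0.75], [0.8], [0.95]]  # past risk scores (toy data)
y = [0, 0, 0, 1, 1, 1]                            # past human decisions (1 = review)

model = DecisionTreeClassifier(max_depth=1).fit(X, y)

# The tree has estimated a threshold near 0.7 from the data itself:
# closer to statistics than to hand-written programming.
print(flag_for_review(0.9), model.predict([[0.9]]))  # True [1]
```

Either way, the learned model is ‘just’ statistics: the tree recovers a threshold near 0.7 because that is where the labels in the toy dataset flip.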

Additional intelligence

Paul Iske, Chief Failure Officer of the Institute for Brilliant Failures, speaks of AI as ‘additional intelligence’. In the end, we remain in control, Jim emphasises. “There are always people responsible who wrote the code, and it is people who interpret the results. Ultimately, a flesh-and-blood human being makes the decisions, assisted by an algorithm.”

Demystifying AI shows how powerful it can be and what challenges the public sector faces in using it. TNO’s Devin Diran discusses the dilemma that public institutions face when trying to use AI for the energy transition: new technology such as AI is expected to deliver a fast, fair and inclusive energy transition, but there are still many gaps in our knowledge of AI, and that demands space for experimenting. TNO is learning about this from projects from Rotterdam to Zoetermeer, which raise questions such as how to use AI to translate informal local knowledge into formal knowledge, and how to create trust in AI in decision-making. How do you guarantee that AI helps empower people and stakeholders? How do you test the reliability of AI? An institution that starts using AI needs that space for experimenting in order to answer these questions.
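
One of those questions can be made concrete in familiar statistical terms: a basic test of an AI system’s reliability is to see how well it predicts data it never saw during training. The sketch below is our own illustration with synthetic data and scikit-learn, not a method described by TNO.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data (no real residents involved).
X, y = make_classification(n_samples=500, random_state=0)

# Hold back 20% of the data that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Reliability check: how well does the model predict the unseen part?
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A held-out score is only a first check, which is part of why the space for experimenting matters.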

Multidisciplinary team

Guido Hobeijn, founder of the municipality of Amsterdam’s Computer Vision Team (CVT), also emphasises this. A proof of concept often looks good on paper, but using a solution in practice is a very different matter. That is why the CVT was created: a place where experts from various fields collaborate to implement projects successfully. A project that uses images from public spaces, for example, needs to take account of privacy laws and public backing, while at the same time you want the right governmental authorities to have access to the right data.

Conditions

The key challenges are usually not technical but concern the ethical, legal and social preconditions for embedding and scaling up AI in public processes. The social environment in which a project is set up is often given too little attention. Civic engagement is crucial for a fair and inclusive energy transition, states Devin, but the best way to involve people from the start is still unclear: information evenings only reach certain groups, and holding many one-on-one discussions is very labour-intensive. Data-driven solutions could help here, but because they quickly come to need data about residents, the ethical, legal and social preconditions are again essential. As Barbara later adds, members of the public must somehow be involved from the start in the question of whether, and under what conditions, an AI application can create added value. In the Netherlands and within Europe, we stand for AI in which people always remain central.

The legal and ethical frameworks a project is embedded in are often also not examined sufficiently. Governmental institutions venture all too soon into a legal jungle where laws often contradict each other. That is why working in multidisciplinary teams is important, according to Jim. “AI is too important to leave to technologists. As Guido said, it is perfectly OK to have a legal expert in your team.”

Cold feet

“Before you start implementing a solution in practice, you first have to check what values might come under pressure,” says Bert Kroese, guest speaker and Acting Director General at CBS. “You’ve got to have the ethical discussion from the beginning and see how you can represent those values in the design. We see that AI can become crucial in major challenges, and we want that to happen responsibly.” It may seem awkward, but it is essential to inform all the parties involved (including the public) properly and give them a voice. For many institutions, an ambitious project that does not succeed feels like a risk, whereas others could in fact learn a lot from it. “I see a lot of authorities getting cold feet,” says Jim. “Before you know it, you get punished if things don’t succeed right away, while this is exactly where we can learn the most from each other.”

Ethical AI

People prefer to see projects succeed rather than fail, especially if they were financed from the public purse. In that regard, public backing for AI-based solutions is another big challenge. “Everyone believes that AI is objective,” says Jim, “but nobody wants a robot judge.” Demystification of AI plays an important role in this process as well. “It all starts with transparency,” says Bert. “First, make it clear what you are doing. Express what you’re aiming for and exchange experiences.”

There is still a lot to learn from each other about using AI ethically. What are effective communication strategies? And what if algorithms can make better diagnoses than doctors? Barbara notes that it could even be unethical not to use the potential of AI. After all, handling AI ethically is not only about protecting the public from privacy risks but also about letting the public benefit fully from what AI can do. “We should exploit the opportunities,” emphasises Bert, “but we need to do it responsibly. Discussing things with each other, as during this symposium, can help us find the balance between going too far and not going far enough.”

Successfully failing and learning from each other

To find that balance together, it is important that failed projects are shared too, Paul argues. We are, after all, still discovering exactly what using AI entails, so it would be strange to assume it will go well on the first try. “You will run into failures and they are going to cost you money, time and effort,” says Paul. “Those are your costs of failure. But if you don’t discuss what happened, you also lose out on the potential gains: new knowledge, new experience and a better starting point for future projects.” That is why Paul urges people to recognise the educational potential of these outcomes and to look at the ‘benefits of failure’.

This is also why it is important to realise that business cases are never implemented exactly as envisaged. The value of a project lies not only in its success but also in the effort put into it. A project is often a good idea and everyone has prepared well, yet it still does not go as planned. An important next step is then to exchange experiences – after successful projects and brilliant failures alike.

A realistic perspective

“AI is no longer a dream,” says Bert in his succinct conclusion to the symposium. “To get a realistic perspective, it is important to show that techniques such as machine learning are not so very different from normal statistics. Patterns are estimated based on the data, and those patterns can be used to make predictions. That is very powerful, but it also has serious limitations. The biggest challenge isn’t in the technology but in the preconditions: how do you get the right data, how do you handle ethical issues, and how do you generate public acceptance? There are risks in using AI, but it also has a lot of potential. And we are all still experimenting to see how to use that potential. That is why we have to share our experiences. Not only success stories but also failures – even if they aren’t brilliant ones.”
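
Bert’s comparison is easy to illustrate. The sketch below (our own example with made-up numbers, using scikit-learn) does exactly what he describes: it estimates a pattern from data with ordinary linear regression and then uses that pattern to make a prediction.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data: household size versus yearly energy use (MWh).
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([1.9, 3.1, 3.9, 5.2, 6.1])

# Estimate the pattern from the data...
model = LinearRegression().fit(X, y)

# ...and use it to make a prediction for an unseen case.
print(model.predict(np.array([[6]])))  # roughly 7.2

# Powerful, but with the limitations of any statistical estimate:
# the prediction is only as good as the data the pattern was fitted on.
```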

Public AI Award for brilliant failures

With that in mind, we are proud to announce the award for the most brilliant failure in AI in the public sector, in collaboration with the Institute for Brilliant Failures. Contenders can submit their brilliantly failed projects until 1 March 2022, to share experiences with the rest of the public sector in a safe context. Keep an eye on the Netherlands AI Coalition’s website and social media for more information.

Interested?

The mini-symposium in the Fokker Terminal was not just a live event; it was also streamed online. Both the live audience and the online viewers put questions to the speakers. The whole symposium will soon be available to watch again. You can find more information about the Public Services working group here.
