
“AI within the government? Yes please! But in a responsible and people-oriented way”

Pursuing social goals with the help of Artificial Intelligence (AI)? The Dutch government is getting the hang of it. But how do we ensure that AI solutions are developed in a safe and responsible way, and that parties exchange their knowledge? This is one of the challenges of the Netherlands AI Coalition (NL AIC), and of its working group Public Services in particular.

Artificial Intelligence (AI) is expected to have a great impact on our society. The government has an exemplary role when it comes to the responsible development and application of AI. Therefore, within the Netherlands AI Coalition (NL AIC), a dedicated working group focuses on the opportunities that AI offers to improve public services. The government is represented in the broadest sense of the word by the more than eighty participants in the working group. Local governments, provinces and ministries, as well as various administrative bodies, are involved. Companies and knowledge institutions with an interest in the government take part as well.

“AI is not science fiction. We see more practical examples every day, also within the government,” Barbara begins the joint video interview. “And that is a good thing, because as long as we only talk about possible applications, people tend to see only the possible downsides of AI. Or, on the contrary, they might see it as something paradisiacal. Therefore, together with all the other participants in our working group, we want to put AI in a more realistic perspective. The best way to do this is by gaining AI experience and sharing lessons with each other. So it is a good thing that the government is developing more and more different forms of AI solutions and applying them in practice.”

Usable in many areas

There is not only an enormous variation in the use of AI; its scope is also very broad: from smart use of satellite images to the use of AI for text analysis, and from chatbots to more proactive services, because AI can present all relevant information at exactly the right moment.

The overestimation of AI…

“The development of AI is indeed going fast,” Marieke states. “Two years ago, for example, there were hardly any scaled-up applications within the government. And now there clearly are. I have noticed that the ease of developing and implementing AI is often overestimated. People think that an AI model will come to life on its own. But no: we humans still have to do the necessary preliminary work. And when an AI solution is put into practice, it is important that the organization can also work with it properly. We must continue to actively monitor the system.”

…but also underestimation

“On the other hand, the value of AI is still regularly underestimated,” she continues. “Take, for example, the discussion about safeguarding public values and fundamental rights. Too few people realize that AI can be used very well for exactly that purpose. You can use AI to distinguish fake videos from real ones. Or you can use AI as a tool to make texts or videos more accessible, and thus ensure that more people are helped by that information.”

The biggest challenge at this moment

Applications of AI in the social domain are currently sensitive. And the fact that things sometimes go wrong is by no means always due to the technology. The problem often lies mainly in the organizational setup. For example, working with AI requires different skills from individual civil servants and sometimes quite radical process changes within government organizations. So that is the biggest challenge at the moment: ensuring that AI systems and people really form a team and can work well together. In addition, humans must be able to keep a sharp eye on the goal and apply the brakes if something threatens to go wrong.

Adapting legislation and regulations

“It would also help if there were a clearer socio-legal basis, so that we as government know exactly what AI applications must comply with,” Chantal points out. “Dutch law needs to be adapted for this. There is still a lot of work to do in that area.”

“At the European level, this is also an issue,” Marieke notes. “On 21 April, the European Commission published draft legislation in the field of AI. But all interests still need to be heard and weighed first. So there too, it may take time before these regulations are actually introduced.”

High risk?

“Initially, the entire public sector was classified as ‘high risk’. But we pointed out that the risk level differs greatly per subject. For example, it makes quite a difference whether an AI system targets people or, say, plants. This point was included in The Hague’s response to the European Commission’s AI white paper, and the comment has also been incorporated into the draft regulations.”

Human-centered approach

“Meanwhile, it is of course extremely important to take extra care with AI projects where there is a risk that people could be disadvantaged,” Chantal emphasizes. “We now know how essential it is for the entire government to actively involve employees and citizens in the development of AI systems, in addition to technology parties and scientists. You really have to do it together.”

AI for social goals

“Focus on people. That is also an important priority of the NL AIC, reflected in terms such as ‘responsible AI’ and ‘a human-centered approach’. A lot of research and experimentation is already taking place in these areas, for example within the ELSA labs, where ELSA stands for Ethical, Legal and Societal Aspects. These special labs are an initiative of the NL AIC, in which companies, government, knowledge institutes and the inhabitants of the Netherlands work together on AI solutions to realize social goals. As a working group, we naturally follow this development closely and want to stimulate initiatives from the public sector.”

Joint focus on data quality

“This also applies to initiatives that focus on data sharing,” says Barbara. “Data is, of course, the raw material for any AI application. So to be able to work with AI, you first have to make good agreements on how data is collected and on the quality of that data. That is another thing we can contribute to, together with all the other participants in the working group.”

Dispelling unfounded concerns

“Simply sharing experiences in the field of AI already helps to make the subject more accessible,” she concludes. “It also helps to dispel concerns. I know that quite a few of my colleagues still fear that jobs will be lost through the use of AI. But based on what we have already seen, I am convinced that while some jobs may disappear, other jobs will take their place. It is a search for balance and for new forms of cooperation between humans and machines.”

Computer says no

Chantal also sees some cold feet among governmental organizations. “Sometimes there is a fear that AI will really take things over. What we have noticed is that organizations experiment with small things but stop when it becomes more serious. Because how do you get different departments to work together properly? Which external parties should you involve? What about issues such as compliance and privacy? Is it worth the investment? Organizations can really go through a transition during an AI project. It is therefore understandable that they first carefully dip their toes into the ‘AI water’. But you learn so much more, and faster, if you can also go into depth on a subject.”

What is the added value?

Meanwhile, more AI solutions are emerging, becoming more sophisticated and being deployed on a larger scale. The NL AIC encourages that development. However, Barbara emphasizes that it is important to keep looking critically at the extent to which AI solves a problem. “At DUO, for example, we did an experiment two years ago with a chatbot that answered questions by phone. It was well built and it worked fine in theory. But in the end, it turned out to be neither better nor faster than an employee. The project team then decided not to use the chatbot. I thought that was cool. And good. Because you should only implement an AI solution if it really adds value.”

This is the first part of a series. In future articles, we will take a closer look at how the government puts its ambitions into practice. In other words, how we want to tackle specific social issues in the Netherlands with the help of AI. And what is needed to do this in a responsible and people-oriented way. To be continued. 

Working Group Public Services

As the working group Public Services, we are building a broader community to promote the use of AI within the government. Together with other parties, we want to discover how AI can be responsibly applied and scaled up in public services. As a working group, we develop instruments for the good application of AI. We also share good examples and make use of the opportunities that arise within the NL AIC to actually develop and apply AI.

From left to right
  • Marieke van Putten, innovation manager at the Ministry of the Interior and Kingdom Relations and chair of the Public Services working group 
  • Chantal van der Wijst, strategic advisor Compliance & Technological Innovation at the Tax Authority 
  • Barbara Visser, Strategy & Innovation Advisor at DUO (Education Executive Agency) 

Interested in this topic? Become a participant of the NL AIC and benefit from the knowledge and network around Public Services and other relevant AI themes.

