The massive impact of artificial intelligence (AI) as a system technology on people and society brings with it a special responsibility. That is why the Human-Centric AI working group is leading the search for ways of learning and discovering, together with the people affected, what the best and most desirable AI solutions are.

This also applies to the legal aspects of developing and deploying AI. Some questions have straightforward answers; others don't. And when is the right time to ask those legal questions? When an AI application is commissioned, when a prototype is delivered, or when the application goes into use? Unfamiliarity with AI, or incorrect perceptions of it, can lead organisations – and their corporate lawyers in particular – to see numerous risks and to shy away from the opportunities that AI offers. Answers often exist already, but they have been written by lawyers for lawyers, and they are scattered across the websites of governmental bodies, industry organisations and centres of expertise.

As a result, acceptance of AI by companies and governmental authorities is still low – too low. To accelerate the development and application of human-centric AI, the working group is identifying new insights and bringing the right parties into contact with one another.

European regulations for applying AI

Because AI can affect people’s lives dramatically, Brussels has announced regulations for applications of this technology. Those who develop AI applications will soon be obliged to exercise extra care. Not only developers but also users must determine for themselves which risk category an AI application falls into and whether it complies with the applicable legislation and regulations. The Human-Centric AI working group fully supports Europe’s decision to draw a line in the sand with dedicated regulations, and its explicit choice to adopt ethical principles in which human dignity is central. This will undoubtedly help to achieve reliable, human-centric AI.

The Legal AIR platform

How does the working group assist organisations that are working on practical applications of AI? How do you know which laws and regulations have to be observed? The Human-Centric AI working group plays an active role here through the Legal AIR initiative, a knowledge platform that offers practical information to everyone in an easily accessible way, where template documents can be downloaded and where questions about AI in practice can be put to legal experts.

If a question cannot easily be answered from the information on the knowledge platform, you can be put in contact with an expert who can help – for instance with a question about data science, cybersecurity, data ethics or a legal issue.

If you are willing to be called in as an expert on such complex issues, please let us know through the Legal AIR knowledge platform.

More information

If you’re interested, go to the Legal AIR knowledge platform for more information, or read the interview with Bart Schermer and Jos van der Wijst about the proposed European AI regulations.
