
What is the current state of the AI Act?

In April 2021, the European Commission published a proposal for AI regulations: the first proposal in the world for comprehensive, horizontal regulation of Artificial Intelligence (AI). The proposal caused quite a stir over the very definition of AI alone, and the definition of high-risk AI applications – and the requirements those applications must meet – triggered much debate as well.

On 14 November 2022, the Zuid-Holland AI Hub and the Human Centric AI working group organised a meeting in Mondai, House of AI in Delft, led by Jos van der Wijst, to update you on the status of these AI regulations. Various speakers gave their views of the developments.

Responses from the member states

In the European legislative process, after the Commission publishes a proposal it is up to the member states to agree on a compromise text that – if all goes well – will be ratified by the European Parliament. The member states have now almost reached agreement on that text; the most recent publicly available version dates from 3 November 2022. During the meeting, it was stated that the Telecom Council will meet on Tuesday 6 December 2022, at which point the final compromise text is expected to be ratified. It is then up to the European Parliament to complete its response to the proposal. That process is expected to be finished in the first half of 2023.

Where are we now?

At the meeting, Prof. Anne Meuwese discussed the proposal and the compromise text (version of 3 November 2022). Anne noted that relevant changes had been made on various points. For example, the definition of AI has been amended: fewer applications will now fall under it. No fundamental changes were made, though. Subjects such as the risk-based approach, the self-assessments and the AI regulatory sandbox remain in the proposal, although details – sometimes important ones – have changed.

Should AI practitioners start working with the proposal already?

Dasha Simons (IBM) talked about how she helps organisations adopt trustworthy AI (human-centric AI). She also stated that organisations that start working on this in good time could later enjoy a ‘frontrunner advantage’. In most cases, adopting trustworthy AI requires a multidisciplinary approach, and the different disciplines involved (data science, legal, ethics) sometimes speak ‘different languages’.

How is the supervisory body preparing for the AI regulations?

Huub Janssen (Agentschap Telecom – Radiocommunications Agency Netherlands) explained how the AI supervisory bodies (more than twenty in the Netherlands) are preparing for the AI regulations. As multiple supervisory bodies may act as the competent authorities, agreements have been made about how these bodies will work together. Huub paid particular attention to the AI regulatory sandbox. In 2023, the Telecom Agency – together with the Dutch Data Protection Authority and the Authority for Consumers and Markets – will organise a pilot of an AI regulatory sandbox. Huub wondered whether, given the amended compromise text, the sandbox tool still adds value.

Interested in more information?

This meeting was initiated by the Human Centric AI working group of the NL AIC. The subject was also covered extensively during the ECP Annual Festival. We will be following the legal aspects of AI closely in 2023. Anyone who wants to keep an eye on this (lawyers and non-lawyers alike) is welcome to join the team working on the legal aspects of human-centric AI. Take a look here for more information or contact Jos van der Wijst.

