Consider the views of many and take time: lessons from the EU AI Act with Kai Zenner

1 August 2024

As the EU AI Act comes into force today, 1 August, James Boyd-Wallis spoke to Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament, about its implementation and what lessons he has for UK policymakers considering AI legislation.

I would urge UK policymakers to take their time with AI legislation.
— Kai Zenner, Head of Office and Digital Policy Adviser

Where are we now with the EU AI Act?

The EU AI Act has been published in the Official Journal of the European Union, meaning it is now law and enters into force on 1 August. The Act then has several transition periods, meaning parts of the law will apply at different times. For example, after six months, on 2 February next year, the provisions on banned AI systems will take effect, meaning the development and deployment of such AI systems must stop. The next big date is 2 August 2025, when the rules on foundation models will kick in. Finally, the remainder of the Act will become applicable on 2 August 2026. It will be interesting to monitor enforcement after August next year and in 2026 to see whether the European Commission focuses on the big tech platforms or also enforces the rules against smaller, homegrown AI firms.

How are the EU and its member states preparing?

The European Commission and, to a certain extent, the EU member states are busy with two things. First, they are trying to build up an AI governance system. For example, the AI Office in the European Commission needs to be built, as do the AI Board, the scientific panel and the other bodies that will take care of governance and enforcement at the European level. Within the member states, each will need to develop a regulatory sandbox and designate or appoint national competent authorities to oversee the AI Act. All of that takes time, and everyone lacks AI talent. So, there will be a battle to find the brightest people available to fill those positions.

Second, the Commission and the member states will also need to create a lot of secondary legislation, meaning guidelines, templates, delegated acts and implementing acts that add specifics to the EU AI Act. This secondary legislation is necessary because the AI Act as a law is broad, even vague, in many of its chapters and articles. The hope is that these additional documents will specify, for example, how to conduct a risk assessment, among many other things.

Is the EU’s approach to regulating AI sensible?

The European Parliament wanted to establish horizontal but non-binding AI principles that would apply to all AI systems (like the US Blueprint for an AI Bill of Rights) and then complement them with sectoral legislation. The Commission decided differently. Instead, it created one horizontal AI Act, which goes into detail and applies to every sector and use case. This approach creates problems because not all AI is the same. For instance, AI used in a hospital carries different risks from AI driving a facial recognition system for CCTV, where my privacy may be violated.

The UK and US approaches might prove better, and the EU may run into problems. However, much depends on the merits of our secondary legislation and whether those documents differentiate how the law applies across different use cases and sectors. If the harmonised technical standards cannot do that, innovation may be stifled or hampered, especially among smaller companies that cannot afford the cost of compliance. In that case, European companies will be at a competitive disadvantage compared with UK and US start-ups and SMEs. This disadvantage is a significant risk of our prescriptive AI Act.

What lessons do you have for UK policymakers considering how to regulate AI?

One of the strengths of the UK is its cooperative approach. For instance, in data protection, the ICO is one of the few data protection authorities talking to everyone, whether civil society, industry, academics or other stakeholders. I also see this with the Competition and Markets Authority and its approach to foundation models, where it has taken a careful, very evidence-based strategy.

This contrasts with the EU’s approach to foundation models, where several decisions were taken because of the French start-up Mistral and the German start-up Aleph Alpha. So, my first lesson would be for UK policymakers to keep talking to, and considering input from, many stakeholders to avoid a similar situation. No single company or organisation should have an outsized impact on legislation. Policymakers must also remain agile and check whether proposals benefit everyone rather than just one business or sector.

Next, I would urge policymakers to take their time with AI regulation. While the European Union started the process in 2014, the trilogue phase, where the Commission, Council and Parliament debate the outstanding issues, took only three to four months. That was way too fast. Many issues were never discussed, which is one of the reasons why the quality of the final legal text is not high. There are too many vague points and contradictions.

James Boyd-Wallis is co-founder of the Appraise Network.