How the EU and UK diverge in regulating AI

By Matthias Artzt, Senior Legal Counsel, Deutsche Bank AG

Matthias Artzt examines potential conflicts when developing or deploying AI-powered systems in both jurisdictions

Securing the benefits of Artificial Intelligence (AI) while mitigating its potential risks has become an urgent priority for policymakers and lawmakers worldwide.

Various jurisdictions are therefore implementing piecemeal laws to address the issue, but when it comes to developing a comprehensive approach to AI regulation, the EU and the UK are leading the way. However, their differing models may become a source of future complexity for producers of AI applications and for companies deploying them in both jurisdictions.

The EU’s approach

The EU was first to take comprehensive steps in regulating AI. Chances are that the EU approach will follow a model similar to the one successfully applied in other areas, such as the EU GDPR: first regulating within the EU, then exporting the standard to other jurisdictions (Artzt/Dung, Artificial Intelligence and Data Protection: How to reconcile both areas from the European law perspective, Vietnamese Journal of Legal Sciences, Volume 7, Number 2, 2022, pp 39-58).

In April 2021, the European Commission introduced a draft regulation known as the EU AI Act. The proposed regulation has since been amended by the Council of Ministers in December 2022 and the European Parliament in June 2023. The EU institutions are now in “trilogue” negotiations to agree a compromise text, which will be followed by a two- (or possibly three-) year transition period. This means the EU AI Act will probably be fully applicable from early 2026.

The EU AI Act takes a top-down, structured, risk-based approach. It establishes a single set of rules governing the provision and use of AI systems across the entire economy, and it specifies the AI uses that fall into three regulated buckets. The three categories are uses of AI that create:

  • Unacceptable risk: these AI systems are prohibited. Examples: social scoring; systems that exploit children or other vulnerable groups; live biometric facial recognition (with narrow law enforcement exceptions).
  • High-risk: these AI systems must meet specific requirements, including a prior ‘conformity assessment,’ registration in a public EU database, and compliance with ethical AI principles (relating to data, governance, record keeping, transparency, human oversight, robustness, accuracy and security). Examples of systems subject to these rules include systems posing a high risk to health and safety (eg medical devices, machinery, toys) or to fundamental rights (eg AI systems used for crime analytics, entitlement to state benefits, creditworthiness, recruitment and promotion), as well as generative AI systems (eg ChatGPT).
  • Low or minimal risk: these are subject to less extensive requirements than high-risk AI. For example, users have to be informed that they are interacting with an AI system (unless this is obvious), with a system that can detect their emotions, or with a system generating images of people that appear authentic but aren’t, ie “deep fakes.”

AI systems not falling within these buckets are not regulated by the EU AI Act. 

EU Member States must designate a national supervisory authority under the EU AI Act, which is expected to be an existing regulator rather than a new entity. Meanwhile, the EU will establish the EU AI Board, a body tasked with overseeing and ensuring the uniform implementation of the EU AI Act across all 27 Member States. Another element of the EU AI regulatory framework is the AI Liability Directive, which aims to ensure that compensation is available to victims of losses caused by breaches of the EU AI Act.

The UK’s approach

The UK Government has sought to make a virtue of opting for a more light-touch and agile framework. The Department for Science, Innovation and Technology published a White Paper in March 2023 titled “AI Regulation: A Pro-Innovation Approach.” The paper acknowledges the potential benefits of AI, such as improving healthcare, enhancing transport systems and boosting economic productivity, while also recognising the potential risks and challenges.

In contrast with the EU, however, the government currently doesn’t intend to legislate, to avoid stifling innovation and hindering its framework from responding to technological advances. It prefers instead to “take an adaptable approach to regulating AI.”

At the heart of this is an invitation to existing regulators to develop AI rules and resources over the coming months, tailored to their sector and reflecting the following five cross-sector principles of responsible AI:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

There is much overlap here with the principles that should be applied to high-risk AI systems under the EU AI Act. Similarly, both jurisdictions refer to the importance of companies adopting AI technology and risk management standards developed by organisations like the International Organization for Standardization (ISO).

The White Paper does not propose a new AI regulator, nor a lead regulator. Instead, bodies such as the Financial Conduct Authority, the Information Commissioner’s Office, the Competition & Markets Authority, Ofcom, the Health & Safety Executive and the Equality and Human Rights Commission will use their industry knowledge to develop appropriate AI rules. Yet there is a risk that this regulatory patchwork might lead to inconsistent application of the principles, and of sanctions for breach, between sectors. It will certainly leave gaps where AI is used in unregulated parts of the economy.

However, the government’s preference is to ensure AI regulation is context-specific, based on the outcomes that AI is likely to generate in a particular case, and it doesn’t want to assign one-size-fits-all rules or risk levels across all sectors. The White Paper says that this framework is designed to achieve the following three objectives:

  • Drive growth and prosperity by making responsible innovation in AI easier and reducing regulatory uncertainty, which is intended to encourage investment in AI and support its adoption throughout the economy (ultimately creating more jobs).
  • Increase public trust in AI by addressing risks via the effective implementation of the regulatory AI framework.
  • Strengthen the UK’s position as a global leader in AI by establishing a strong position that allows the UK to shape international governance and regulation and promote interoperability of AI standards, while minimising cross-border risks and protecting democratic values.

The government also intends its AI regulatory framework to be “iterative,” in other words, to develop in response to feedback from stakeholders and to new risks identified as technology and business use advance.

Important considerations

As is often the case with other policy goals, the UK leans towards flexibility in its regulatory approach whereas the EU leans towards certainty. Each has its advantages and disadvantages.

The diverging approaches may well present challenges for companies currently operating in both markets, as well as for those looking to expand into the other market. They will have to keep a close eye on legal and regulatory developments in both jurisdictions over the coming months and years to ensure they remain compliant with each regime where applicable.

However, one can’t help but wonder whether the post-Brexit experience of UK organisations with EU customers in the data protection context might offer a glimpse of the dynamic we may see with AI. Even if the UK’s rules become more flexible than those of the EU GDPR, it could be impractical to run two different compliance and risk frameworks. Organisations might therefore lean towards adopting the EU framework as the basis for their platform, even if the UK regime is less demanding.

Certainly, UK companies would be well advised to consider that the EU AI Act has extra-territorial reach. It will apply to AI system providers and their authorised representatives, importers, distributors and users, as well as to product manufacturers that place an AI system on the EU market or use an AI system’s output in the EU.

They should take note, too, of the potential fines associated with the extra-territorial reach of the EU AI Act. These fines could amount to 6 per cent of global annual turnover or 30 million euros, whichever is greater: for a company with a global annual turnover of 1 billion euros, for example, the turnover-based figure of 60 million euros would apply. The European Parliament is pushing for these figures to be increased to 7 per cent and 40 million euros respectively. The degree of regulatory risk may be a relevant factor when deciding whether it makes sense to prioritise the requirements of one regime over the other.

Dr. Matthias Artzt is a senior legal counsel at Deutsche Bank AG Frankfurt and a data protection practitioner. He is the editor of the Handbook of Blockchain Law (published 2020) and of the International Journal of Blockchain Law.