
AI regulation: contrasting EU and UK approaches

Feature

By James Castro-Edwards, Counsel, Arnold & Porter

James Castro-Edwards looks in detail at the differences between the approaches to the regulation of artificial intelligence taken by the EU and the UK

The EU AI Act was recently approved by the European Parliament and, subject to final formalities, will imminently come into force, with most of its provisions taking effect within the following two years and some sooner. The AI Act will regulate developers of artificial intelligence (AI) systems by categorising AI systems according to the level of risk they present. It imposes obligations upon those that develop and deploy AI systems in order to protect individuals’ rights and freedoms.

By contrast, the UK government has indicated that it may in future enact legislation to regulate AI systems, but for the time being will rely on existing law and regulators. Notwithstanding the government’s position, the UK AI Regulation Bill (a private members’ bill) recently passed its second reading in the House of Lords, and could become law if it garners sufficient support.

Organisations involved in the development and deployment of AI systems must keep abreast of developments in this area in order to prepare for when the new legislation takes effect, a challenge exacerbated by the differing approaches taken by the EU and the UK.

The EU AI Act

The AI Act was originally proposed by the European Commission in April 2021. Political agreement was reached between the Council of the European Union, the European Parliament and the European Commission in December 2023, and the AI Act was approved by the European Parliament on 13 March 2024.

The AI Act ‘aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.’ It applies to providers that place on the market or put into service general purpose AI models in the European Union. The Act classifies AI according to its risk, and the majority of the obligations it imposes fall upon developers of ‘high-risk’ AI systems.

Changes the AI Act introduces

The changes the AI Act will introduce are summarised below.

Prohibited AI systems: Title II of the AI Act bans a number of AI systems that threaten individuals’ rights. The Act prohibits AI systems that deploy subliminal techniques intended to distort individuals’ behaviour, or that exploit the vulnerabilities of any group due to their age or physical or mental disability, in order to cause harm. The prohibited AI systems are set out in Title II, Article 5 of the Act, and include biometric categorisation systems based on sensitive characteristics; untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; emotion recognition in the workplace and schools; social scoring; predictive policing (where based solely on profiling a person or assessing their characteristics); and any AI that manipulates human behaviour or exploits individuals’ vulnerabilities.

Law enforcement exemptions: The AI Act prohibits the use of ‘real-time’ remote biometric identification (RBI) systems by law enforcement authorities, except in an exhaustive list of narrowly defined situations. Biometric identification may only be deployed in real time where strict safeguards are met, for instance, where its use is limited in scope and time and specific prior judicial authorisation has been obtained, such as in a targeted search for a missing person or to prevent a terrorist attack.

High-risk AI systems: The AI Act classifies a number of AI systems, listed in Title III, Article 6, as ‘high-risk’, and imposes additional obligations upon providers of such systems. The Act defines high-risk AI systems by reference to Annexes II and III. Under Annex II, high-risk AI systems are those used as a safety component of, or which themselves constitute, a product covered by the EU laws listed in that Annex, where the product is required to undergo a third-party conformity assessment under one of those laws. Annex III provides a list of high-risk systems, which includes biometric systems (including RBI systems), AI systems for biometric categorisation (according to sensitive or protected characteristics) and AI systems used for emotion recognition. AI systems used in critical infrastructure, as well as those used in education and vocational training, employment and worker management, are deemed to be high-risk, as are AI systems intended for use by public authorities to evaluate individuals’ eligibility for essential services and benefits. AI systems for use in law enforcement, migration, asylum and border control management, and the administration of justice and democratic processes are also treated as high-risk AI systems for the purposes of Annex III.

The AI Act requires that providers of high-risk AI systems must establish a risk-management system throughout the lifecycle of the AI system. They must carry out data governance to ensure that training datasets are relevant, sufficiently representative and, as far as possible, complete and free of errors. Providers must draw up technical documentation to demonstrate compliance and provide authorities with information necessary to assess compliance, design systems for appropriate record keeping to identify and monitor risks, and provide instructions to ensure compliant deployment by downstream users. High-risk AI systems must be designed to achieve appropriate accuracy, robustness and cybersecurity, and allow deployers to implement appropriate human oversight. Providers must also establish a quality management system to ensure compliance.

General purpose AI (GPAI) models: The Act defines a general purpose AI model as ‘an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities.’ Providers of GPAI models are subject to less stringent obligations than providers of high-risk systems, but must still draw up technical documentation (including for downstream providers), establish a policy to respect the Copyright Directive, and publish a summary of the content used for training.

The AI Act deems GPAI models trained using more than a certain amount of computing power (currently set at 10^25 floating point operations) to present systemic risks. Providers of GPAI models with systemic risks are required to notify the Commission within two weeks if their model meets this criterion, and must implement additional measures.

Transparency requirements (Title IV): Article 52 requires providers of AI systems that are intended to interact directly with the public to inform individuals that they are interacting with an AI system, unless this is obvious from the context, although AI systems that are authorised by law to detect, prevent, investigate and prosecute criminal offences fall outside the requirement. Providers of AI systems that generate audio, image, video or text content (including deepfakes) must ensure that such content is marked in a machine-readable format as having been artificially generated or manipulated.

Governance
A new AI Office will be established within the European Commission to monitor the effective implementation of the Act and compliance by GPAI model providers. Downstream providers will be able to lodge complaints regarding upstream providers with the AI Office, which may evaluate GPAI models to assess compliance and investigate risks. Violations of the prohibitions carry penalties of up to 7% of global annual turnover or €35 million, while the maximum penalty for most other violations is 3% of global annual turnover or €15 million. The maximum penalty for supplying incorrect information is 1.5% of global annual turnover or €7.5 million, though fines for SMEs and startups are subject to lower caps.

Status and next steps

Following its approval by the European Parliament, the AI Act is now subject to a final lawyer-linguist check, and must be formally endorsed by the Council of the European Union. The AI Act will then enter into force twenty days after its publication in the Official Journal. Its provisions on prohibited practices will take effect six months after the Act’s entry into force; codes of practice provisions after nine months; general purpose AI rules (including AI governance) after twelve months; and obligations for high-risk systems after thirty-six months. The remaining provisions will take effect two years after the AI Act enters into force.

The UK AI regulatory regime

In contrast to the EU’s regulatory approach towards AI, the UK government has stated that, while new legislation to regulate AI may be necessary in the future, in the short term it will rely on the domain-specific expertise of existing sectoral regulators, with support from a central function currently being developed by the Department for Science, Innovation and Technology (DSIT).

The UK government set out its approach to AI regulation in its March 2023 white paper ‘A pro-innovation approach to AI regulation’. This includes a framework underpinned by five principles to ‘guide and inform the responsible development and use of AI in all sectors of the economy’. The principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The government favours a voluntary rather than a prescriptive approach, leaving individual regulators to decide how best to address AI. In July 2020, the Digital Regulation Cooperation Forum (DRCF) was formed, consisting of the Information Commissioner’s Office (ICO), the Competition and Markets Authority (CMA) and the Office of Communications (Ofcom), later joined by the Financial Conduct Authority (FCA). The DRCF is a voluntary forum whose role is to harness the collective expertise of its members to support the regulation of online services, which includes AI.

In its white paper consultation response, DSIT announced a £10 million package to boost regulators’ AI capabilities. The central function will also support increased coherence across regulators, and will analyse and review potential gaps in regulators’ existing powers and remits. The government intends to expand on the role of the central function in further guidance, which is expected to be published in summer 2024. However, the government’s approach has been criticised on the basis that, in the absence of binding regulation, it will be entirely dependent on the goodwill of the tech giants.

The UK AI Regulation Bill

Notwithstanding the government’s position regarding the adoption of new legislation to regulate AI, on 22 March 2024 the UK AI Regulation Bill received its second reading in the House of Lords. The Bill is a private members’ bill, introduced by the Conservative peer Lord Holmes of Richmond in November 2023. The Bill would establish a new AI authority with various functions intended to address AI regulation in the UK. These functions would include ensuring that existing regulators take AI into account in an aligned manner, and carrying out gap analysis of regulators’ responsibilities for AI. The AI authority would have additional functions, including monitoring economic risks arising from AI, horizon-scanning emerging technologies, facilitating sandbox initiatives to allow the testing of new AI models, and accrediting AI auditors. The Bill would also introduce a number of regulatory principles governing the development and use of AI.

Private members’ bills are bills introduced by MPs and peers who are not government ministers. In principle, they are subject to the same parliamentary stages as any other bill; in practice, however, they face greater procedural barriers, and only a small proportion are enacted. During its second reading in the House of Lords, the UK AI Regulation Bill received wide-ranging support from more than twenty members. However, the Bill will still need the government’s support if it is to become law, and there is no certainty that it will receive it.

Practical compliance

Businesses operating in the EU and the UK that are involved in the development or deployment of AI technologies will need to comply with both the EU AI Act and applicable guidance published by the UK regulators, in particular the ICO. In this emerging area of the law, AI businesses will need to monitor developments to ensure they remain compliant with legislation and guidance as they evolve. It remains to be seen whether the EU’s legislative approach or the UK’s self-regulatory approach will better achieve the aim of promoting innovation while protecting individuals’ rights as AI technology develops.