
Paul Kavanagh, Partner, Dechert LLP
Dylan Balbirnie, Associate, Dechert LLP
Anita Hodea, Associate, Dechert LLP


AI will impact the labour market, but how will Labour impact the AI market?


Paul Kavanagh, Dylan Balbirnie and Anita Hodea review the future of AI regulation in the UK under the new Labour government

The previous Conservative government championed a ‘pro-innovation’ approach to AI regulation. As part of this strategy, the UK developed a non-binding, cross-sector, principles-based framework to enable existing regulators such as the Information Commissioner’s Office, Ofcom and the Financial Conduct Authority to apply bespoke measures within their respective fields of data protection, telecommunications and finance. While this approach anticipated that there might be a need for targeted legislative interventions in the future, specifically for General Purpose AI systems, it prioritised remaining agile as new technologies emerged.

Sir Keir Starmer, the new UK Prime Minister, has suggested that a Labour government would move away from the Conservative government’s laissez-faire, pro-innovation strategy. Instead, Labour intends to introduce stronger regulation of AI, albeit in targeted areas. Starmer has publicly emphasised the need for an overarching regulatory framework and has expressed concerns about the potential risks and impacts of AI, while also acknowledging its transformative potential for society. The Labour Party’s manifesto specifically mentioned implementing binding regulations on the “handful of companies developing the most powerful AI models” and prohibiting the creation of sexually explicit deepfakes.

The King’s Speech outlining the Government’s legislative programme fell short of announcing an AI Bill, but repeated the intention to ‘establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models’. This suggests that the Government will be taking time to develop and implement AI legislation.

New initiatives

On 26 July, Peter Kyle (the new Secretary of State for Science, Innovation and Technology) announced a new AI Opportunities Action Plan to accelerate use of AI across the public and private sectors, declaring that the Labour Government was “putting AI at the heart of the government’s agenda to boost growth and improve our public services”.

The Labour Party’s manifesto outlined several initiatives related to AI.

Regulatory Innovation Office and Developing Ethical AI

The expectation that existing regulators will regulate AI within their respective fields is potentially problematic where issues span the remits of multiple regulators or, more significantly, fall outside the remit of any existing regulator.

Labour aims to address this by establishing a ‘Regulatory Innovation Office’.

It is proposed that the new office will consolidate government functions, streamline approval processes for innovative products and services and manage cross-sectoral issues. It will also set targets for technology regulators, monitor their decision-making speed against international benchmarks and guide them according to Labour’s industrial strategy. The Regulatory Innovation Office will not, however, be a new AI regulator, but will support and facilitate existing regulators expected to address AI within their respective spheres.

It also remains to be seen how the Regulatory Innovation Office will materially differ from the Conservative Government’s proposals to deliver ‘central functions to support the [previous Government’s] framework [for AI regulation]’, or from the ‘AI Safety Institute’, which the previous Government established at the beginning of 2024 as the first state-backed organisation focused on advanced AI safety in the public interest.

Support for Data Centres

To support the growth of AI, the Labour government plans to remove planning barriers for new data centres by designating them as Nationally Significant Infrastructure Projects. This reclassification would allow such projects to circumvent local opposition and, consequently, speed up the approval process.

Creation of National Data Library

The National Data Library initiative is a component of the Labour Party’s broader national industrial strategy. It aims to consolidate existing research programmes and to help deliver data-driven public services “whilst maintaining strong safeguards and ensuring all of the public benefit”.

Long-term R&D Funding

Labour committed to scrapping short funding cycles for key R&D institutions in favour of ten-year budgets that should allow for meaningful industry partnerships. The government will collaborate with industry to support spinouts and start-ups by providing the necessary financing for their growth. This initiative aims to simplify the procurement process and foster innovation.

The ‘Brussels Effect’

The EU has been bolder. Despite a lengthy legislative process, the EU has succeeded in passing what is, in the words of the European Parliament, ‘the world’s first comprehensive AI law’, which came into force on 1 August 2024 (subject to phased implementation). The EU has not just been quicker than the UK and other jurisdictions, but also more ambitious in the breadth of its regulation. Whereas the Labour Government’s proposed AI regulation focuses on the “most powerful AI models” and the “handful” of companies developing them, the EU AI Act imposes obligations along the AI value chain, including on providers and users of AI systems, as well as on developers of the foundation models that underpin such systems.

The EU AI Act is precisely the kind of regulation that many advocates of Brexit saw as innovation-stifling red tape. The current UK Government (similarly to the previous Government) is utilising its freedom to regulate differently to the EU with a lighter-touch regime, at least for the majority of businesses that are not behind the most powerful foundation models (although the EU AI Act may, of course, have looked different had the UK been involved in its negotiation).

Nevertheless, Keir Starmer and Peter Kyle will be aware of the actual and potential relevance of EU legislation in the UK. First, ambitious UK-based AI developers will want their systems to conform to EU requirements to exploit the EU market. Second, even where EU legislation is exacting, regulatory alignment can ease the compliance burden for international businesses operating across the EU and UK markets.

Third, the Labour Party has been critical of the current trade deal with the EU – a closer relationship may require closer regulatory alignment.

An alternative to closer alignment with the EU is to position the UK almost as a regulatory sandbox for AI, allowing innovators more freedom to develop products before scaling up in compliance with the EU AI Act to take advantage of the EU market.

To date, the EU has also been bolder than other major jurisdictions like the US and China in implementing comprehensive AI legislation. While China and the US have taken steps towards regulating AI, they have avoided broad, sweeping laws. For example, China has introduced specific regulations targeting generative AI and deepfakes.

Meanwhile, in the US, the Biden-Harris administration issued an Executive Order on the ‘Safe, Secure, and Trustworthy Development and Use of AI’, which aims to establish a broad framework for responsible AI use. Unlike binding legislation applicable to the private sector, Executive Orders serve as directives for federal agencies, guiding their actions and policies.

Future outlook

Under the new Labour Government, the technology sector can likely expect a shift towards more proactive and structured regulatory measures. While there have been indications of an intention to implement stricter regulations around AI, there has been no proposal for a general AI regulation. Any new legislation is expected to be more narrowly focused than the approach taken by the EU.

The Labour Government’s manifesto suggested that the UK will maintain a relatively light-touch regulatory approach to AI for the majority of businesses. However, the Labour Party ran a cautious election campaign and, having won, its proposals may become bolder. In addition, its plans to rebuild the UK’s relationship with the EU may lead to greater alignment with EU regulation.