
Copyright in the age of artificial intelligence

By Andrew Roberts, Solicitor, Ashfords

Andrew Roberts examines the UK Government’s proposals for enforcement of copyright and their practical implications from a legal perspective

Artificial intelligence (AI) is redefining the creative and commercial landscape in unprecedented ways. Yet despite its potential benefits, it is also prompting hard questions about the proper scope and enforcement of copyright. In December 2024, the UK Government opened its Copyright and Artificial Intelligence consultation, acknowledging that policy is struggling to keep pace with technology and placing legal practitioners front and centre in navigating an evolving terrain.

The current legal landscape

UK copyright law, codified primarily in the Copyright, Designs and Patents Act 1988 (CDPA), is wrestling with a technological revolution not envisaged when the legislation was enacted. The crux of the dispute lies in whether copying copyrighted works for AI ‘training’ amounts to infringement. AI firms need vast swathes of data - often drawn from publicly accessible sources - to develop and refine their models. However, rights holders increasingly find themselves unable (or ill-equipped) to monitor and license the use of their content at scale.

Many AI developers have argued that existing provisions - particularly the ‘temporary copies’ exception or an implied ‘fair dealing’ defence for text and data mining (TDM) - allow them to train models in the UK without additional licensing. However, rights holders view such interpretations as too broad. Contemporary litigation (such as Getty Images v Stability AI) underscores the high commercial stakes: if courts accept a narrower reading of these exceptions, it could expose AI firms to liability or push them towards jurisdictions with more permissive frameworks.

Introducing an opt-out TDM exception

To resolve these disputes, the UK Government is floating a TDM exception that would allow AI training on copyright works unless rights holders explicitly ‘reserve’ their rights, for instance through a machine-readable signal or metadata. This approach aims to resolve a core tension: allowing AI firms lawful access to large, open collections of works while preserving the ability of rights holders to exercise control and, potentially, to negotiate licences.

On its face, this model echoes the EU’s approach under Articles 3 and 4 of the Digital Single Market (DSM) Directive (Directive (EU) 2019/790). Yet it goes a step further by coupling TDM rights with transparency obligations and technical standards for rights reservation. For example, the UK Government is exploring whether a standardised ‘flag’ (like an extended robots.txt or embedded metadata) could consistently communicate a rights holder’s desire to opt out.
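What honouring such a flag might involve is easiest to see in code. The Python sketch below shows how a crawler could check two candidate opt-out signals before mining a page: a conventional robots.txt disallow and an HTTP response header of the kind proposed in the W3C’s draft TDM Reservation Protocol. The crawler name (‘ExampleAIBot’) and the combined decision logic are illustrative assumptions, not anything the consultation prescribes.

    # Illustrative sketch: checking for a machine-readable rights reservation
    # before mining a page. "ExampleAIBot" is a hypothetical crawler name.
    import urllib.parse
    import urllib.request
    import urllib.robotparser

    def may_mine(url: str, user_agent: str = "ExampleAIBot") -> bool:
        # 1. Honour a conventional robots.txt disallow for this crawler.
        robots = urllib.robotparser.RobotFileParser()
        robots.set_url(urllib.parse.urljoin(url, "/robots.txt"))
        robots.read()
        if not robots.can_fetch(user_agent, url):
            return False
        # 2. Honour an explicit reservation signalled in an HTTP header,
        #    as in the W3C's draft TDM Reservation Protocol.
        request = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": user_agent})
        with urllib.request.urlopen(request) as response:
            if response.headers.get("tdm-reservation") == "1":
                return False
        return True

The plurality of possible signals is precisely the standardisation problem the consultation flags: until one mechanism is agreed, a diligent crawler must check several.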

Transparency measures: moving beyond compliance

A linchpin of the UK Government’s proposed regime is an obligation on AI developers to disclose details about their training data. This disclosure could range from high-level summaries of data sources to more granular logs of precisely which copyrighted works were ingested. For rights holders, such transparency not only enables enforcement but also opens the door to potential licensing opportunities, bringing IP negotiations from the murky depths of big data onto a clearer contractual footing.

Yet compliance is not trivial. Mandating large-scale data inventories could be expensive - especially for smaller AI start-ups - raising competition law concerns about potentially chilling innovation. The UK Government’s consultation hints that developers should keep records and provide them on request, but the final details remain uncertain.

In practice, this will involve negotiating AI development agreements to include contractual clauses outlining each party’s transparency obligations (e.g., log-keeping requirements, usage reports). For rights holders, clarifying a developer’s contractual warranties on dataset use might ensure an easier route to damages if unlicensed content surfaces. Alternatively, practitioners will look to encourage industry-wide codes of conduct that align with the UK Government’s eventual rulemaking, balancing transparency with commercial confidentiality.
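To make the record-keeping point concrete, the Python sketch below shows one minimal way a developer might keep an append-only log of ingested works. The schema (timestamp, source URL, content hash, licence status) is an assumption made for illustration; the consultation does not prescribe one.

    # Minimal sketch of an append-only training-data provenance log.
    # The field names are illustrative assumptions, not a mandated schema.
    import hashlib
    import json
    import time

    def log_ingested_work(log_path: str, source_url: str,
                          content: bytes, licence_status: str) -> None:
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "source_url": source_url,
            # The hash identifies the exact work without storing it in the log.
            "sha256": hashlib.sha256(content).hexdigest(),
            "licence_status": licence_status,  # e.g. "licensed", "rights-reserved"
        }
        with open(log_path, "a", encoding="utf-8") as log:
            log.write(json.dumps(record) + "\n")

A hash-based record of this kind could let a developer answer a rights holder’s query about a specific work on request without disclosing the dataset wholesale - which speaks directly to the confidentiality tension discussed below.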

Care is needed here: AI firms often consider the structure of their training data a trade secret. Legal counsel’s role is to navigate the tension between mandatory transparency for copyright enforcement and the preservation of legitimate commercial confidentiality.

Should AI outputs enjoy copyright?

Section 9(3) of the CDPA controversially grants protection to computer-generated works (CGWs) with “no human author” for 50 years. Although initially conceived for software-generated content, the law’s wording has proved awkward in the era of sophisticated generative AI. Critics note that EU jurisprudence, such as Infopaq International A/S v Danske Dagblades Forening (C-5/08), emphasises a human “personal touch” as a condition of copyright protection - potentially excluding purely machine-produced texts or images.

The UK Government is now mulling three main paths: 

  • retain the existing CGW protection and wait for case law to mature,
  • clarify it through legislative amendments, or
  • remove it entirely to align with jurisdictions like the US, where no such statutory protection exists for AI-authored works.

Implications for IP advice:

  • If a business relies heavily on AI-generated outputs (e.g., marketing copy or purely AI-made illustrations), counsel should underscore that these works might not enjoy robust copyright protection in the long run.
  • For “AI-assisted” works, ensure that the human creative input is documented. This could help prove originality under conventional legal standards, potentially securing standard copyright terms.
  • Where statutory protection is doubtful, consider contractual routes - licensing, confidentiality, or trade secret protections - to secure exclusive rights over AI outputs.

Labelling, deepfakes, and digital replicas

The consultation also grapples with a thorny question: how do we handle AI-generated ‘digital replicas’ of real individuals? Alongside copyright issues, the law intersects with defamation, data protection, and performers’ rights. Actors or musicians whose voices, images, or entire performances can be synthetically replicated by AI might invoke the tort of passing off or moral rights (Chapter IV, CDPA) to prevent unauthorised exploitation. However, many argue that the UK legal framework does not yet offer a clearly defined ‘image right’ or ‘publicity right’ akin to those recognised in some US states.

Practical responses:

  • For clients in entertainment or sports, negotiating broad language around image rights and AI-created clones could mitigate the risk of unlicensed digital simulations.
  • Encourage AI providers or platforms to adopt robust labelling of AI-generated outputs. While not yet specifically mandated by UK law, this is gaining traction in the EU via its AI Act. Pre-emptive adoption of transparent labelling can both signal good faith and limit litigation risks; a sketch of what such a label might look like follows below.
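By way of illustration only, the Python snippet below embeds a disclosure label in a generated image’s metadata using the Pillow library. The key names and wording are assumptions made for this sketch; no UK rule yet prescribes the form a label must take.

    # Illustrative labelling of an AI-generated image via PNG text metadata.
    # The key "ai-generated" is an assumed convention, not a mandated standard.
    from PIL import Image, PngImagePlugin

    def save_with_ai_label(image: Image.Image, path: str) -> None:
        info = PngImagePlugin.PngInfo()
        info.add_text("ai-generated", "true")
        info.add_text("generator-disclosure",
                      "This image was generated by an AI system.")
        image.save(path, "PNG", pnginfo=info)

Metadata of this kind is easy to strip, which is why provenance standards such as C2PA favour cryptographically signed manifests; the snippet simply shows how low the technical barrier to basic labelling is.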

International legislative developments

Two recent California bills, Assembly Bills 2602 and 1836, tackle the subject of digital replicas head-on. 

  • AB 2602 targets consent at the contracting stage: it renders unenforceable contract provisions that allow the creation and use of a digital replica of a performer’s voice or likeness unless the intended uses are reasonably specifically described and the performer had legal or union representation.
  • AB 1836 builds on California’s post-mortem right of publicity, creating civil liability for the unauthorised commercial use of a deceased personality’s digital replica in audiovisual works or sound recordings.

These initiatives underscore a shift from relying solely on defamation or privacy doctrines to explicit statutory controls - especially in cases where an “imitation” was never performed in the traditional sense.

For UK practitioners, these moves could foreshadow similar legislative or regulatory initiatives. Parliament may decide that current protections (e.g., data protection, passing off, or performers’ rights) are insufficient to address deepfakes. If so, we could see the emergence of new personality-right-style legislation - or at least expansions to performers’ rights - echoing California’s approach and requiring AI platforms or developers to obtain explicit consent and label synthetic content.

Concluding thoughts

The UK Government’s consultation on Copyright and AI represents a pivotal moment for UK IP law. Legal practitioners have a unique chance to shape that conversation, providing advice on immediate compliance strategies while pushing for more coherent legislation. By advocating robust licensing solutions, carefully drafting transparency or TDM clauses, and guiding AI-driven projects in line with emerging best practices, solicitors can help bridge the gap between protecting creative expression and enabling transformative innovation.

Far from merely summarising new rules, lawyers now have to integrate technical, commercial, and regulatory perspectives in a climate of unprecedented technological shift. The next few years will test not only the adaptability of the UK’s legislative framework but also the creativity and foresight of the legal profession. If the challenge is tackled head-on, however, it could provide the UK with a balanced, future-facing IP regime - one that safeguards the essential spark of human creativity yet harnesses the immense possibility of artificial intelligence.