Karen Holden

Managing Director, A City Law Firm

Advising clients on artificial intelligence

In this article, Karen Holden explores the key considerations lawyers need to address when advising commercial clients on artificial intelligence-related projects.

As the world of artificial intelligence (AI) continues to evolve, law firms are increasingly being called upon to provide expert advice in a field where the law is still catching up with technological innovation. For firms unfamiliar with AI law, navigating the legal challenges of advising clients commercialising AI products can be daunting. 

Commercialising AI: tailored legal advice

Advising clients with AI products typically requires a bespoke legal strategy. At the core of this strategy is the understanding that using AI within a product is, more than with most technologies, nuanced and context specific. It depends on what the client owns, whether the product incorporates another entity's large language model (LLM), whether the client is writing its own prompts or algorithms, and what customers believe the product is doing for them. It is also necessary to consider whether the client needs to be regulated.

The documents that a law firm prepares to help a client commercialise its AI products, be it software agreements, licensing terms or contracts with third-party vendors, must reflect the client’s unique needs, the functionality of the AI product and the commercial environment in which it operates.

For instance, in the process of drafting a contract, the following key elements need to be meticulously tailored:

  • Risk allocation: what risks are associated with the AI product, such as potential malfunctions, ethical issues or liability concerns?
  • Insurance coverage: how should the client be insured, and what level of coverage is appropriate given the potential financial and reputational risks? Which elements the client is responsible for, as opposed to other providers, is not always straightforward: it depends on what is being created and who is responsible for which part.
  • Regulatory compliance: should the AI product or service be regulated? If so, which regulatory regime applies? If regulation isn’t mandatory, should they adopt self-governance or alternative codes of conduct?
  • Data protection and processing: breaking down what data is being collected, collated, and covered by the General Data Protection Regulation (GDPR) is essential to carve out the right level of policy, especially where data is shared internationally.
  • Post-termination: what can and can't be done with data post-termination, especially where the data has been incorporated into a third-party LLM? For instance, can the data be deleted?
  • Warranties: traditional warranties may not be appropriate, especially where a third-party LLM is being used. Can the client guarantee to its customers that it owns the intellectual property?
  • Governance: what internal checks and measures are in place? For example, what human-in-the-loop oversight, responsible internal governance, and staff processes and policies apply?
  • Training data: what data, if any, can be used to train the LLM?
  • Intellectual property: what intellectual property is owned by the client, its customers and any third-party LLM or AI-as-a-service (AIaaS) provider?

Such tailored, bespoke advice is essential: it ensures that the client is aware of the specific risks inherent in their AI product and can make informed decisions about mitigating them. Sometimes this takes a level of technological skill, but mostly it comes down to taking the time to have the client break down their processes, so that you can categorise the stages and requirements and isolate specific risks as you unpick the process.

Is the AI product subject to regulation?

Beyond the wider conundrum of whether AI itself needs specific regulation, and the task of navigating the EU AI Act for those who fall within its scope, clients desperately need to understand whether their product falls within existing regulatory frameworks and whether future regulations will, or may, affect them.

When determining the regulatory status of the product, the following aspects should be considered: 

  • The nature of the AI product: is it providing services in a regulated industry, such as law, healthcare or finance?
  • The level of autonomy: is the AI making decisions autonomously or aiding decision-making, such as making a diagnosis or giving advice?
  • Potential risks: how might the AI’s deployment create risks for users or third parties or influence their decisions? These risks often relate to data protection, bias in decision-making, or causing harm to consumers.

Deciding whether to seek regulatory approval for an AI product involves balancing the benefits of regulation, such as enhanced credibility and market trust, against the administrative burdens and potential limitations that come with regulatory compliance. It also requires a clear understanding of whether approval is mandatory.

Law firms must help clients navigate this terrain and understand how regulations will affect their AI offerings now and in the future. They need to tailor documents, consider appropriate insurance, apply for regulatory approval, and implement suitable internal governance.

Navigating complex IP laws

One of the most significant challenges in the AI space is protecting the intellectual property (IP) embedded within AI algorithms and software. AI innovators often ask whether their AI products can be patented, but this is not always a straightforward process. While certain aspects of AI, such as a novel technical process, may be patentable, algorithms themselves may not meet the patentability criteria under UK law.

Lawyers should not forget other forms of IP protection that may apply to AI-related innovations:

  • Trademarks: ensuring that a client’s branding, product names, and logos are protected is essential, especially as AI products are often commercialised under distinctive branding.
  • Copyright: AI software, written algorithms and the datasets used in machine learning processes may be protected by copyright law. However, the evolving nature of AI brings into question the extent of copyright protection, particularly where an AI system autonomously generates content. Case law in this area needs to be watched carefully, as relevant rulings will change the legal landscape.
  • Design rights: in cases where the design or appearance of an AI product is integral to its function, design rights can offer a layer of protection.
  • Database rights: datasets may attract database rights.
  • Trade secrets: parts of the product and the client's inner workings or models may be protectable as trade secrets.
  • Common law passing off: in the absence of formal registration, AI creators can still protect their rights against competitors who attempt to imitate their branding or product under the law of passing off.

This is a complex and evolving area.

The law still holds AI at arm's length and is reluctant to recognise AI-generated works as IP in the way it recognises works generated by a human creator. Several illustrative cases are detailed below.

Thaler v Comptroller-General of Patents, Designs and Trade Marks (2021)

  • Summary: Dr Stephen Thaler applied to name his AI system, DABUS, as the inventor on a patent application. The UK Intellectual Property Office rejected this, stating that an inventor must be a natural person, and the court upheld that decision;
  • Implications: this case establishes that AI cannot be recognised as an inventor, which may limit protection for AI-created innovations.

However, at the same time the courts are holding AI to account in regard to the laws that are in place, as exemplified by the following case. 

Warner Music UK Ltd v TuneIn Inc (2019)

  • Summary: TuneIn, an AI-powered streaming platform, was found to have infringed copyright by providing access to unlicensed music content;
  • Implications: AI-driven platforms must comply with copyright laws;
  • Significance: this case underscores the responsibility of AI-powered services in terms of avoiding IP infringement, highlighting the need for robust compliance frameworks for platforms utilising AI to deliver content.

We still await the decision in the Getty Images v Stability AI case, which is likely to set a precedent whichever way the case is determined.

Moreover, the use of non-disclosure agreements (NDAs) becomes critical in protecting trade secrets and proprietary algorithms. Given the competitive nature of AI development, having robust NDAs in place, and regularly updating them to cover AI-specific risks, is a crucial part of any legal strategy.

Data protection and AI: compliance with the GDPR

Data protection is a cornerstone of AI legal advice. AI systems often process vast amounts of personal data, making compliance with the GDPR a primary concern. Law firms need to ensure that clients have robust data protection policies that align with the GDPR requirements.

Key issues include:

  • Data processing: is the client processing personal data through their AI system in a lawful manner? This includes considering whether the processing involves automated decision-making, whether explicit consent has been obtained from the data subjects, and ensuring that data is processed transparently and securely.
  • Sub-processors: is the product using a sub-processor, such as a third-party LLM?
  • Data minimisation: are clients only collecting the data they truly need for the AI system to function, or are they exposing themselves to unnecessary legal risks by processing excessive amounts of data? How are they checking that this is happening in practice?
  • Accountability and governance: has the client established an accountability framework that demonstrates their commitment to data protection?

Clients may also require advice on the implementation of data anonymisation techniques to reduce the risk of GDPR breaches and avoid the more stringent rules around personal data processing.

These obligations apply regardless of whether the data is accessed or processed by humans or solely by the AI system itself.
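
By way of illustration only, the short Python sketch below shows one common pseudonymisation step: hashing a direct identifier and stripping contact details from free text before a record is shared with a third-party sub-processor, such as an external LLM. The field names, salt and pattern are hypothetical, and it is worth remembering that pseudonymised data generally remains personal data under the GDPR, so a step like this reduces, rather than removes, compliance obligations.

    import hashlib
    import re

    # Illustrative record only; the field names here are hypothetical.
    record = {
        "customer_id": "C-10492",
        "query": "Please review my contract; contact me at jane.doe@example.com.",
    }

    SALT = "replace-with-a-secret-value"  # stored separately from the data

    def pseudonymise(value: str) -> str:
        """Replace a direct identifier with a salted one-way hash."""
        return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:12]

    # Hash the direct identifier and strip contact details from free text
    # before the record leaves the client's systems, e.g. before it is sent
    # to a third-party LLM acting as a sub-processor.
    # Note: under the GDPR, pseudonymised data generally remains personal
    # data; this reduces risk rather than removing it.
    safe_record = {
        "customer_ref": pseudonymise(record["customer_id"]),
        "query": re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email removed]",
                        record["query"]),
    }

    print(safe_record)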

Dispute resolution in regard to AI

When disputes arise involving AI technology, they may not always be addressed in the same way as other commercial disputes. For example, AI systems, especially autonomous ones, introduce new questions about liability and causality. Who is responsible when an AI makes a decision that leads to a loss or injury?

In commercial disputes, traditional frameworks of negligence and breach of contract may still apply, but AI-specific issues require a deeper understanding of technology and how it interfaces with the law. Dispute resolution strategies need to take into account the following:

  • Algorithmic accountability: is the AI system responsible for the harm caused? If so, how do we trace decision-making processes within a ‘black box’ AI system? Is such an error human error or negligence?
  • Liability: how should liability be apportioned between the AI developer, the user, and potentially even the AI system itself?
  • Remedies: what are the available remedies under UK law, and are these remedies fit for the unique challenges presented by AI disputes?

Web3, virtual worlds, and AI

Our firm also specialises in advising on emerging technologies, such as Web3 and virtual worlds, areas that often intersect with AI development. This intersection raises unique legal questions, particularly regarding IP infringement in decentralised digital environments.

For example, Web3 technologies, which rely on decentralised systems, such as blockchain, may introduce complications when enforcing traditional IP rights in virtual worlds. As AI increasingly intersects with these platforms, law firms need to stay at the cutting edge of how traditional UK law, particularly IP and dispute resolution, will need to adapt to meet these challenges.

Does IP infringement that occurs in a virtual world governed by decentralised systems follow the same principles as infringement in a traditional context? Can AI-created works or systems even be owned or governed in the same way in these environments? These are questions we regularly explore with clients seeking to expand into this intersection of technologies.

Despite reservations, AI is already starting to be used in legal practice and for law enforcement purposes, so the route to change has begun to appear.