
Artificial intelligence and antitrust issues: a competition law practitioner’s perspective

Feature

By Francesco Liberatore and Martin Mackowski, Partners, Squire Patton Boggs

Francesco Liberatore and Martin Mackowski provide a detailed overview of competition law as it applies to artificial intelligence, including a look at recent case law and the evolving regulatory approaches being adopted in the UK, EU and US

Recognising the extraordinarily rapid evolution of artificial intelligence (AI) and its broad impact across many sectors, regulators in the UK/EU and globally are moving quickly to ensure that the market remains competitive and that the technology is not used by companies to gain an unfair advantage or harm consumers. Below we first examine the key competitive concerns relating to AI and the guidance provided by the cases decided thus far. We next explore the regulatory approaches being taken by the UK, as well as the EU and US, and the common themes that underlie them. Finally, we cover some practical considerations for companies as they navigate this evolving landscape.

Lessons from previous case law: key competition risks

There is a wide spectrum of potential EU/UK competition law issues arising from the use of AI, and regulatory and court decisions thus far have only begun to address them. Firstly, AI can facilitate collusion. This happens where AI allows businesses to exchange information that is competitively sensitive, forward-looking, disaggregated and company-specific.

At one end of the spectrum, there is little doubt that the use of pricing algorithms to implement resale price maintenance (RPM) or a price-fixing cartel is illegal. Examples include:

  • Consumer electronics manufacturer Asus monitored and implemented RPM through the use of sophisticated monitoring tools, allowing the supplier to intervene swiftly in case of price decreases (Case AT.40465, EU Commission Decision, Asus). Similar cases were brought against other manufacturers of consumer electronics (Philips, Pioneer, Denon & Marantz);
  • Sellers on Amazon’s UK website used automatic repricing software to monitor and adjust prices to give effect to an offline price-fixing cartel whereby they had agreed not to undercut each other’s prices (Case 50223, Competition and Markets Authority (CMA) Decision, Trod Ltd & GB Eye Ltd.); and
  • Casio set minimum prices for online resellers of its digital pianos and keyboards over a five-year period. Casio used price-monitoring software to monitor RPM implementation in real time (Case 50565-2, CMA Decision, Casio).

At the other end of the spectrum, the application of EU/UK antitrust rules on self-learning pricing algorithms is more complex. In 2021, the CMA published a paper titled ‘Algorithms: How they can reduce competition and harm consumers’, where it outlined hypothetical theories of harm, including ‘autonomous tacit collusion’. The CMA noted that ‘simulation studies show that there are clear theoretical concerns that algorithms could autonomously collude without any explicit communication between firms. For example, Calvano et al (2019) showed that Q-learning (a relatively simple form of reinforcement learning) pricing algorithms competing in simulations can learn collusive strategies with punishment for deviation, albeit after a number of iterations of experimentation in a stable market’. However, there is little or nothing in the way of directly applicable precedents to date.
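
To make this hypothetical theory of harm more concrete, the following is a minimal, illustrative Python sketch of two Q-learning pricing agents in a repeated duopoly. It is not a reproduction of the Calvano et al. (2019) simulations referenced by the CMA: the price grid, toy demand model and learning parameters are assumptions chosen purely to show the mechanism, namely that each agent independently learns a pricing policy from profit feedback, with no communication or agreement between the firms, and repeated play can nonetheless settle on prices above the one-shot competitive level.

```python
# Illustrative sketch only: two independent Q-learning pricing agents in a toy
# repeated duopoly. All parameters and the demand model are assumptions, not a
# replication of Calvano et al. (2019).
import random

PRICES = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]   # discrete price grid (assumption)
COST = 1.0                                  # common marginal cost (assumption)
ALPHA, GAMMA = 0.1, 0.9                     # learning rate, discount factor
EPISODES, EPSILON_DECAY = 200_000, 0.99997  # training length, exploration decay
n = len(PRICES)

def profit(own, rival):
    """Toy Bertrand-style demand: the cheaper firm serves the market, ties split it."""
    share = 1.0 if own < rival else 0.0 if own > rival else 0.5
    return (own - COST) * share

# One Q-table per agent: state = index of the rival's last price, action = own price index.
q = [[[0.0] * n for _ in range(n)] for _ in range(2)]

def choose(agent, state, eps):
    if random.random() < eps:
        return random.randrange(n)              # explore
    row = q[agent][state]
    return max(range(n), key=row.__getitem__)   # exploit current estimate

eps = 1.0
last = [0, 0]                                   # each agent's last chosen price index
for _ in range(EPISODES):
    states = [last[1], last[0]]                 # each agent observes only the rival's last price
    actions = [choose(a, states[a], eps) for a in (0, 1)]
    for a in (0, 1):
        reward = profit(PRICES[actions[a]], PRICES[actions[1 - a]])
        next_state = actions[1 - a]
        target = reward + GAMMA * max(q[a][next_state])
        q[a][states[a]][actions[a]] += ALPHA * (target - q[a][states[a]][actions[a]])
    last = actions
    eps *= EPSILON_DECAY

print("Final prices:", PRICES[last[0]], PRICES[last[1]])
```

Whether outcomes of this kind would amount to a ‘concurrence of wills’, or instead fall within the safe harbour for intelligent adaptation discussed below, is precisely the open question the existing case law does not answer.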

It is settled case law that competitors can intelligently adapt to the market without infringing EU/UK antitrust law, as long as there is no ‘concurrence of wills’ between them, replacing independent decision-making with collusion (Wood Pulp II). However, it is an open question whether self-learning algorithms that signal prices to each other and learn to follow the price leader would fall within this safe harbour.

There is no precedent to date on this latter scenario, but the case law on price signalling may provide a useful analytical framework. In Case AT.39850, the EU Commission Decision in Liner Shipping, fourteen container liner shipping companies regularly announced their intended future increases of freight prices on their websites, via the press, or in other public ways. These announcements indicated the intended percentage increases rather than the resulting prices and did not provide full information on new prices to customers, but merely allowed the carriers to be aware of each other’s pricing intentions and made it possible for them to coordinate their behaviour. The parties ultimately agreed to commitments to address the EU Commission’s concerns.

Secondly, AI can facilitate the exploitation of market power or foreclosure of competitors. This can happen through a merger or an exclusive cooperation agreement resulting in the combination of a large and unique set of ‘Big Data’ or control of other essential inputs required for AI models; or it can happen where a dominant company’s use of such a large and unique set of Big Data or other key inputs does not constitute ‘competition on the merits’. Recent examples include:

  • Mergers or partnerships in which the EU Commission or CMA has considered the question of the accumulation of Big Data or other inputs and its impact on competition (e.g., the Microsoft/Mistral AI partnership CMA merger inquiry; the Microsoft/Activision CMA merger inquiry; Case COMP/M.7217, Facebook/WhatsApp; Case COMP/M.6314, Telefonica UK/Vodafone UK/Everything Everywhere/JV; Case COMP/M.4731, Google/DoubleClick; Case COMP/M.8124, Microsoft/LinkedIn; Case COMP/M.4726, Thomson/Reuters); and
  • Cases of abusive leverage of a dominant position facilitated by AI to discriminate against competitors or customers through ‘self-preferencing’ (e.g., Case AT.39740, EU Commission Decision, Google Search (Shopping)).

All of these cases demonstrate that the application of traditional antitrust concepts to the use of AI is far from straightforward.

Issues when establishing antitrust liability

Even assuming that an anti-competitive object or effect is established, the question arises whether, and in what circumstances, EU/UK antitrust liability can attach where business decisions are made by self-learning machines rather than by the companies themselves.

Liability can only arise from anti-competitive conduct that is committed ‘intentionally’ or ‘negligently’. Defining a benchmark for illegality requires assessing whether any illegal action was anticipated or predetermined (e.g., through programming instructions) or whether it could have reasonably been foreseen and avoided. In seeking to define such a benchmark, the CMA referred to ‘ineffective platform oversight’ in its paper on ‘Algorithms: How they can reduce competition and harm consumers’. A previous EU Commission Note to the OECD on ‘Pricing Algorithms and Collusion’ makes an interesting statement in this regard: ‘An algorithm remains under a firm’s direction and control and therefore the firm is liable for the actions taken by the algorithm’. Similarly, the EU Commission’s 2023 Horizontal Cooperation Guidelines indicate that ‘firms involved in illegal pricing practices cannot avoid liability on the ground that their prices were determined by algorithms’, because just like an employee or consultant, ‘an algorithm remains under the firm’s control, and therefore the firm is liable even if its actions were informed by algorithms’. This sounds like a presumption of direct liability, but it remains to be seen whether such a presumption would find support in statute or the existing case law on liability.

The use of AI can also be considered an aggravating circumstance. For example, concerning the previously mentioned investigations into RPM involving Asus (Case AT.40465), Denon & Marantz (Case AT.40469), Philips (Case AT.40181) and Pioneer (Case AT.40182), the EU Commission stated that the effect of these price restrictions may be aggravated due to the use by many online retailers of pricing software that automatically adapts retail prices to those of leading competitors.

So, as AI continues to progress, and the lessons from previous case law only go so far, how can we determine who is liable for the decisions and actions of AI: the developers, users, and/or beneficiaries?

Ex ante regulatory approach

The EU and UK legislatures have taken the view that antitrust law enforcement may be too slow to tackle competition issues arising from the use of AI in digital markets and have adopted or proposed ex ante regulation to prohibit certain types of conduct without the need to establish antitrust liability.

The EU Digital Services Act (DSA) requires very large online platforms to assess and mitigate systemic risks arising from the use of AI and other algorithmic systems on their services (Article 34 DSA). Among other things, independent auditors are required to assess such compliance at least once a year (Article 37 DSA).

The EU Digital Markets Act (DMA) requires designated ‘gatekeepers’ of core platform services to comply with certain ‘do’s and don’ts’, including an obligation not to engage in self-preferencing and to carry out ranking and related indexing and crawling on their platforms based on transparent, fair and non-discriminatory conditions (Article 6(5) DMA). The DMA also includes restrictions on the use of personal end-user data (Article 5(2) DMA), which may affect the training of AI models and make it more difficult for gatekeepers to develop potentially biased models.

The UK Digital Markets, Competition and Consumers (DMCC) Act was enacted in May 2024. Similarly to the DMA, it purports to address self-preferencing and interoperability concerns by vesting the CMA with new authority to intervene proactively. Specifically, the CMA has the power to designate specific tech companies as having ‘strategic market status’, which in turn imposes additional rules on such companies to ensure fair dealing, choice and transparency. The CMA has the power to issue fines for non-compliance with those obligations of up to 10% of a company’s worldwide turnover. Notably, this represents a shift in the CMA’s role from an ex post to ex ante regulator, with a role in digital markets similar to other sector-specific UK regulators, such as Ofcom in the electronic communications and online safety sectors.

The UK has also introduced a new act on content moderation similar to the DSA: the Online Safety Act (OSA), which is enforced by Ofcom. Under the OSA, the use of AI in the context of content creation and moderation is part of both the challenge and the solution, as Ofcom recognises that online services increasingly rely on AI systems for moderating the vast amounts of third-party content they host. Automated content filters and review algorithms can be employed to identify inappropriate material efficiently, and generative AI in particular can be used to detect illegal or harmful content; the OSA refers to such tools as ‘proactive technology’, and section 231(10) OSA expressly covers ‘technology which utilises artificial intelligence or machine learning’. The OSA also imposes obligations on service providers to include ‘clear and accessible provisions’ in their terms giving information about any use of proactive technology to comply with the OSA’s duties, as well as requirements to allow users to complain about the way proactive technology has been used by the service in content moderation.

Ofcom is consulting on a number of codes of conduct to give effect to various parts of the OSA. In its draft codes of practice on the OSA’s illegal content duties, published in November 2023, Ofcom specified that, in order to comply with the code (and thereby demonstrate compliance with the OSA’s core duties to tackle illegal content), certain services would need to take automated content moderation measures, such as the use of ‘hash matching’ technology to proactively detect and remove child sexual abuse material. Therefore, as with the DSA, AI may continue to exacerbate the problem of illegal and harmful content the OSA seeks to address, but it may also be part of the solution deployed by online services to meet their new regulatory obligations.
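
As an illustration of the ‘hash matching’ approach referred to in Ofcom’s draft codes, the sketch below shows the basic mechanism in Python: a newly uploaded file is hashed and checked against a database of hashes of known illegal material before any further review. The file handling, placeholder hash list and decision labels are assumptions for illustration; production systems typically rely on perceptual hashes that survive resizing or re-encoding, rather than the exact SHA-256 matching used here.

```python
# Minimal illustrative sketch of hash-matching content moderation.
# Hash list and decision labels are placeholders; real deployments use
# perceptual hashing and hash databases supplied by trusted bodies.
import hashlib
from pathlib import Path

KNOWN_ILLEGAL_HASHES = {
    # placeholder digest; in practice supplied by a trusted hash-sharing body
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def moderate_upload(path: Path) -> str:
    """Return a moderation decision for a newly uploaded file."""
    if sha256_of(path) in KNOWN_ILLEGAL_HASHES:
        return "block_and_report"   # proactive removal of known illegal material
    return "allow"                  # pass to any further (e.g. AI classifier) review
```

In practice, tooling of this kind sits alongside the broader AI-based classifiers mentioned above, and its use would need to be reflected in the ‘clear and accessible provisions’ and complaint mechanisms the OSA requires.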

The US, by contrast, has not followed the ex ante regulation approach. In the absence of legislation proscribing specific conduct, President Biden in October 2023 issued an Executive Order directing the antitrust enforcement agencies to use their existing authority broadly ‘to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI’. The Executive Order specified that agency action should include ‘addressing risks arising from concentrated control of key inputs, taking steps to stop unlawful collusion and prevent dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs’. Consistent with that directive, both Federal Trade Commission (FTC) and Department of Justice (DOJ) officials have explained that they are using the broad antitrust enforcement tools at their disposal to regulate AI, including the prohibition of ‘unfair methods of competition’ under Section 5 of the FTC Act.

International cooperation

Given the cross-border nature of digital markets, the OECD noted in its 2018 report, ‘Going Digital in a Multilateral World’, that ‘Governments may need to enhance co-operation across national competent agencies to address competition issues that are increasingly transnational in scope or involve global firms.’

Against this backdrop, the US, EU and UK competition agencies have issued joint statements to reaffirm their commitment to cooperating in this area, including through participation in high-level meetings, as well as regular staff discussions. The fact that US, EU and UK antitrust officials will have an official forum in which to meet regularly, talk policy and exchange views may be expected to carry over into how they approach AI enforcement in the future, including in relation to AI foundation models.

Despite the differences in regulatory approach noted above, regulators are focused on addressing similar competitive concerns. Indeed, in both their public statements and investigations to date, regulators globally – including in the US, EU and UK – have sought to address similar worries relating to AI markets and the competitive dynamics that characterise them.

Shared regulatory concerns

Regulators in the UK, EU and US have explained that they are seeking to provide as much clarity as possible regarding their enforcement priorities in this rapidly evolving space. They have both made public comments and published a number of statements outlining their approach to AI regulatory intervention and identifying the key principles that will guide that approach. They are also rolling out new tools such as the ‘AI and Digital Hub’ run by the Digital Regulation Cooperation Forum (DRCF), through which multiple UK regulators are coordinating not only to provide guidance on policy positions but also to respond to specific questions from industry participants and to publish those responses (in anonymised form) to provide guidance to others.

The UK CMA, for its part, has rolled out a series of publications documenting its comprehensive study on the antitrust risk arising from the ‘increased presence of the largest and most established technology firms across multiple levels of the foundation models value chain’, which ‘is happening both through direct vertical integration but also through partnerships and investments’ (‘AI Foundation Models Update Paper’, April 2024).

According to the CMA, ‘some of these firms have strong positions in one or more critical inputs for upstream model development, while also controlling key access points or routes to market for downstream deployment. That creates a particular risk that they could leverage power up and down the value chain’.

The CMA identified three key competition risks arising from the expanding use of foundation models (FMs) and the growing interconnection of the FM value chain: (1) firms that control critical FM inputs restricting access to them; (2) powerful incumbents exploiting their consumer-facing positions in FM markets to curtail competition; and (3) co-operation between existing players further reinforcing their market power through the FM value chain.

In this same vein, the EU Commission has identified similar concerns in both public statements and the inquiries it is pursuing. In a February 2024 speech, EU Competition Commissioner Vestager said: “Large Language Models depend on huge amounts of data, they depend on cloud space, and they depend on chips. There are barriers to entry everywhere. Add to this the fact that the tech giants have the resources to acquire the best and brightest talent.”

The EU Commission is undertaking a consultation in which it has issued an open call for stakeholder contributions on competition in generative AI. In its call for contributions, the Commission explained that ‘[i]t has become clear in the past that digital markets can be fast moving and innovative, but they may also present certain characteristics (network effects, lack of multi-homing, ‘tipping’), which can result in entrenched market positions and potential harmful competition behaviour that is difficult to address afterwards’.

Meanwhile, in the US, the FTC and DOJ have similarly expressed concern regarding control of ‘essential’ AI inputs and the need to guard against ‘bottlenecks’ across the AI stack. At a February 2024 ‘Tech Summit on AI’ hosted by the FTC, Chair Lina Khan explained: “History shows that firms that capture control over key inputs or distribution channels can use their power to exploit those bottlenecks, extort customers, and maintain their monopolies. The role of antitrust is to guard against bottlenecks achieved through illegal tactics and ensure dominant firms aren’t unlawfully abusing their monopoly power to block innovation and competition.”

FTC and DOJ officials have explained that, as AI offerings depend on a set of necessary inputs, such control can be used to protect existing market power and leveraged for competitive control of related markets. The FTC and DOJ have identified key FM inputs as including underlying data needed to train FMs, labour expertise and access to computational resources like cloud infrastructure and graphics processing units (GPUs). Beyond access to inputs and markets, the FTC and DOJ have also expressed concern with incumbents’ bundling or tying generative AI offerings with existing core products (such as cloud services or software) and with using certain sensitive personal data to train FMs.

Regulators thus remain focused on improved market outcomes and on ensuring that markets remain fair and open, with a diversity of competitors and consumer offerings, in the AI space. And they are scrutinising control of both inputs and outputs (or market access points) across all levels of the AI stack. They have explained that observations regarding market outcomes provide guidance regarding where greater intervention may be necessary, and that such intervention will be more proactive in AI-related segments in order to avoid the less desirable alternative of waiting to see what happens and attempting to restore competition after markets have tipped.

CMA approach to AI risks

Providing further insight on its enforcement priorities, and anticipating the new powers it has since received with the passage of the DMCC, the CMA recently published a new document providing an update on its approach to AI regulation both broadly and as it relates to FM development specifically. In the publication, titled ‘CMA AI strategic update’ (published on 29 April 2024), the CMA said regarding its view of AI competition risks: ‘Taking a broader view of AI systems, firms’ misuse of AI and other algorithmic systems, whether intentionally or not, can create risks to competition often by exacerbating or taking greater advantage of existing problems and weaknesses in markets’. The CMA pointed to three specific examples:

  • ‘AI systems that underpin recommendations or affect what choices customers are shown and how they are presented’, which have the potential to ‘distort competition by giving undue prominence to choices that benefit the platform at the expense of options that may be objectively better for customers’;
  • firms may use AI systems to ‘assist in setting prices in a way which could facilitate collusion and sustain higher prices’; and
  • firms may use AI systems ‘to personalise offers to customers’, which could potentially allow incumbent firms to ‘analyse which customers are likely to switch, and use personalised offers, selectively targeting those customers most at risk of switching, or who are otherwise crucial to a new competitor, which could make it easier for such firms to exclude entrants’.

As to competition risks around FMs specifically, the CMA stated that its ‘strongest concerns arise from the fact that a small number of the largest incumbent technology firms, with existing power in the most important digital markets, could profoundly shape the development of AI-related markets to the detriment of fair, open and effective competition’. The CMA went on to describe specific concerns mirroring those in its April 2024 ‘AI Foundation Model Update Paper’ relating to incumbents’ control over critical inputs for FM development and key market access points for FM services.

According to this new document, the CMA’s approach to AI risks (including the AI FM-related risks identified above) will be guided by the following six principles:

  • Access: Ongoing ready access to inputs;
  • Diversity: Sustained diversity of business models and model types;
  • Choice: Sufficient choice for businesses and consumers so they can decide how to use FMs;
  • Fair dealing: No anti-competitive conduct;
  • Transparency: Consumers and businesses have the right information about the risks and limitations of FMs; and
  • Accountability: FM developers and deployers are accountable for FM outputs.

The document acknowledges the additional powers that the CMA has since received with the passage of the DMCC and embraces the CMA’s new role as an ex ante regulator, stating: ‘We are ready to use these new powers to raise standards in the market and, if necessary, to tackle firms that do not play by the rules in AI-related markets through enforcement action’.

The CMA notes that the new authority granted by the DMCC will give it ‘the ability to respond quickly and flexibly to the often rapid developments [in] these markets, including through setting targeted conduct requirements’ for firms designated as having strategic market status (SMS). The CMA likewise acknowledges its greater power under the DMCC to observe and test SMS firms’ algorithms, which is essential to ensuring it can effectively address the risks posed by their AI systems.

The CMA will continue to focus on such market outcomes in determining where and how to intervene using its new authority under the DMCC, and the intrusiveness of such interventions will depend on how far the competitive harms it is seeking to address have advanced. While some industry participants have noted the potential for regulatory overreach this presents, the CMA has noted that it is hard to test the counterfactual and that, in many instances, waiting to see what happens and then attempting to unwind competitive harms is a worse outcome for everyone involved. Markets have not yet tipped and there is currently a good narrative regarding the availability of multiple models for offering AI products and services, but the CMA believes that intervention will be required to ensure that this remains the case.

To identify potential bad outcomes of the type the CMA will look to address, companies should look to well-functioning markets as a guidepost. In such markets, innovative products and services are being offered to consumers, and those consumers are not broadly unhappy with how their data is collected or with how difficult it is to switch between competing services. The CMA will also rely on other regulators with sector-specific expertise to understand how those markets are operating. For example, it will turn for guidance to Ofcom’s market reports on the state of the telecoms sector, which many in the industry view as authoritative.

In terms of next steps, the CMA laid out its work programme for the remainder of 2024, which includes:

  • A forward-looking assessment on the potential impact of FMs on how competition works in the provision of cloud services as part of the ongoing cloud market investigation;
  • Monitoring current and emerging partnerships closely, especially where they relate to important inputs and involve firms with strong positions in their respective markets and FMs with leading capabilities;
  • Stepping up the use of merger control to examine whether such arrangements fall within the current rules and, if so, whether they give rise to competition concerns; and
  • Continuing the dedicated programme of work to consider the impact of FMs on markets throughout 2024, including: (1) a forthcoming paper on AI accelerator chips, which will consider their role in the FM value chain; (2) publishing joint research with the DRCF on consumers’ understanding and use of FM services; and (3) publishing a joint statement with the Information Commissioner’s Office (ICO) on the interaction between competition, consumer protection and data protection in FMs.

In the meantime, companies and their antitrust counsel must consider whether the current development and use of AI may create potential exposure to antitrust scrutiny later.

Practical considerations for antitrust counsel

AI gives rise to complexities which will inevitably have an impact on how EU/UK antitrust counsel should advise businesses developing and/or applying AI. Some practical tips (some of which are common sense) are outlined below.

Counsel should understand why and how businesses intend to use AI or to participate in one or more segments contributing to AI development, in particular:

  • how AI will aid business processes;
  • what information will be processed and exchanged with other parties;
  • which other parties will participate in the AI ‘network’ and which will be excluded;
  • whether the AI ‘network’ will be public or private and, if private, who the ‘nodes’ of the AI ‘network’ are; and
  • what the ‘relevant market’ is, what position the business holds in that market, and what control, if any, the company has over access to key inputs or market access points.

Some of these elements may also require economic input.

Counsel should then assess the potential antitrust theories of harm (e.g., RPM, hub-and-spoke collusion or unilateral foreclosure) and try to disentangle the pro-competitive effects from the anti-competitive effects.

Compliance safeguards could include changes to the AI structure, use or policies. This will depend on the circumstances of each case, including whether the potential competitive concerns relate to use of AI tools or the functioning of the AI sector itself. For example:

  • As regards Big Data pooling agreements, companies deploying AI tools could send their data to a platform and get back aggregate data with no indication of which company it comes from. That would still give companies information that would help build better cars or make existing ones run better, without undermining competition. Alternatively, companies might limit the type of information they share: car companies might decide not to share information that would tell rivals too much about their technology, and online shops might share data without saying when products were bought, or for how much. Companies also need to be sure that pooling data does not become a way to shut rivals out of the market (see the aggregation sketch after this list); and
  • Companies operating in sectors contributing to the development of AI, in deciding whether to restrict access to market entry points or to key inputs – including data, compute resources and engineering talent – should closely examine the potential competitive impact of, and the justifications underlying, their decisions, particularly given the stated intention of regulators globally to scrutinise how control of such access may give rise to competitive concerns.
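
As a simple illustration of the aggregation safeguard described in the first bullet above, the sketch below pools hypothetical component data from several companies and returns only market-wide averages, suppressing any figure built from too few contributors. The field names, record structure and suppression threshold are assumptions; the point is that participants learn aggregate patterns without learning each other’s company-specific figures.

```python
# Illustrative sketch of an aggregation safeguard for a data pooling arrangement.
# Record fields and the suppression threshold are assumptions for illustration.
from collections import defaultdict
from statistics import mean

MIN_CONTRIBUTORS = 3  # suppress aggregates built from fewer than 3 companies

def aggregate(records):
    """records: iterable of dicts like
    {'company': 'A', 'component': 'battery', 'failure_rate': 0.021}."""
    by_component = defaultdict(list)
    for r in records:
        by_component[r["component"]].append((r["company"], r["failure_rate"]))

    out = {}
    for component, rows in by_component.items():
        companies = {c for c, _ in rows}
        if len(companies) < MIN_CONTRIBUTORS:
            continue  # too few contributors: releasing this could reveal individual data
        out[component] = round(mean(v for _, v in rows), 4)  # aggregate only, no source IDs
    return out
```

The same logic applies to the other safeguards mentioned: the less granular, less attributable and less forward-looking the shared data, the lower the risk that the pooling arrangement itself facilitates collusion.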

Companies also should consider proactively engaging with regulators to ensure they understand market realities as they develop their enforcement priorities, as it is in companies’ and regulators’ mutual interest for regulators to understand how AI products and services operate from a technical perspective. Likewise, it is important for regulators to understand the technical and practical effect of other regulatory obligations that companies face with AI, such as requirements under the GDPR and other privacy regulations.

  • In responding to regulatory inquiries and investigations, companies should incorporate input from their internal subject matter experts. They should also consider sharing data that they possess (and regulators do not) and explaining how that data demonstrates pro-competitive outcomes.
  • Similarly, companies should consider inviting regulators to site visits or offering to conduct tech teach-ins. In addition to demonstrating openness and cooperation, all of this will help ensure a shared understanding and, if there is any disagreement, it will at least be based on the same facts.

Finally, as both AI technology and the competition regulations governing it are rapidly evolving, counsel should monitor the use and development of AI and applicable regulation, and reassess the initial risk analysis whenever there are significant changes or advances in technology.