

Computer says no: will fairness survive in the age of AI?

By Alexander Amato-Cravero, Regional Head of The Emerging Technology Group, Herbert Smith Freehills

Navigating AI's evolving landscape demands global regulatory harmony, focused legislation, and dialogue that balances its risks and rewards.

Hollywood has colourful notions about artificial intelligence (AI). The popular image may be one of a future dominated by epic tales of RoboCop versus real cop, but cinematic fiction is nothing compared to reality.

If you're yet to be convinced, take a moment to explore the news about actor Tom Hanks' latest advert promoting a dental plan. You'd be forgiven for thinking it's the real deal, but alas – the advert is a deepfake, something Hanks neither took part in nor endorsed.

The fact of the matter is that, in this AI age, truth really can be stranger than fiction. The risks posed by AI are insidious and hard to predict. Little wonder, then, that research conducted by Herbert Smith Freehills reveals that just 5 per cent of UK consumers are unconcerned about the growing presence of AI in everyday life, or that only 20 per cent say they have a high level of faith that AI systems are trustworthy.

Now, that's not to say AI doesn't benefit businesses and consumers. From finding the quickest route to your destination and discovering your new favourite TV show, to reducing the friction of customer service interactions and accelerating development of life-saving drugs, analytical and now generative AI already play an undoubtedly important role in everyday life. But clearly there's still work to do to win trust and overcome cynicism.

Ethical dilemmas

As applications of AI-style tools spread rapidly across industries, concerns have inevitably been raised about how these systems may detrimentally – and unpredictably – affect someone's fortunes, as Hanks' example demonstrates.

One colleague put it well, noting that there is a growing appreciation among regulators and businesses of the potential human rights impacts of AI and related technologies. This awareness is helping to identify risks, but we have yet to reach consensus on how to address them. A core challenge is the breadth of the risks, and in turn of the existing laws and regulations that interact with them: equality and employment, data protection and privacy, intellectual property, competition – the list goes on. In the UK, at least, different areas are overseen by different regulators, adding to the fragmentation.

Inevitably, this means a growing number of risks sit outside existing laws and regulations. Even where they fall within scope, the fit can be uncomfortable: some laws simply were not designed for the AI era. So, while lawmakers wrestle with the far-reaching ramifications of AI, other groups, such as industry bodies and regulators, are driving the adoption of guidance, standards and frameworks.

Many of these initiatives revolve around the ethical use of AI, particularly issues such as bias, transparency and explainability. Indeed, Herbert Smith Freehills' research revealed that a significant proportion of consumers refuse to accept that AI can be impartial, and – among those who said they do not trust AI – more than a third also fear that the outputs of AI systems could be biased against specific groups. Algorithms may not be human, but they can still imbibe the prejudices which influence human judgement.
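It is worth seeing just how easily this happens. The sketch below is a deliberately simplified, hypothetical illustration – the "screening" rule, the groups and the historical data are all invented for this example, and are not drawn from the research above. A toy model is "trained" on past hiring decisions; because that history encodes a preference for one group, the model faithfully reproduces it.

```python
# A minimal, illustrative sketch: a toy "screening algorithm" trained on
# invented historical decisions. All data, group labels and the 50 per cent
# threshold are hypothetical, chosen only to show how bias is absorbed.
from collections import defaultdict

# Hypothetical historical hiring decisions: (group, years_experience, hired)
history = [
    ("A", 2, 1), ("A", 3, 1), ("A", 1, 1), ("A", 2, 0),
    ("B", 2, 0), ("B", 3, 0), ("B", 1, 0), ("B", 3, 1),
]

# "Training": learn the historical approval rate for each group.
totals, hires = defaultdict(int), defaultdict(int)
for group, _, hired in history:
    totals[group] += 1
    hires[group] += hired

def predict(group: str) -> bool:
    # The model simply reproduces past behaviour: approve if the
    # historical approval rate for that group exceeds 50 per cent.
    return hires[group] / totals[group] > 0.5

# Two equally experienced candidates receive different outcomes, because
# the prejudice embedded in the training data has been learned.
print(predict("A"))  # True  - group A was historically favoured
print(predict("B"))  # False - group B was historically disfavoured
```

Real systems are vastly more complex, but the underlying failure mode is the same: a model optimised to match past decisions will match past prejudices too.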

Filling the gaps

The existing patchwork of laws and regulations is propped up by a growing body of guidance and standards that lack the force of law, which risks the AI market being seen as the 'wild west'. As policymakers define – or refine – their strategies to address the risks of AI, they face the challenging task of creating a system that delivers the certainty required to instil much-needed confidence in businesses and consumers today, while remaining flexible enough to promote and account for future innovation.

There are some early movers in this space. China was quick to implement laws governing specific applications of AI technology, including deep synthesis technology (the kind that underpins deepfakes like those plaguing Hanks) and recommendation algorithms.

The EU is well on the way to implementing its AI Act, which takes a risk-based approach, controlling the use of an AI system according to its intended purpose, and the European Commission has proposed the AI Liability Directive and a revised Product Liability Directive to streamline compensation claims for AI-related damage. Whether these more prescriptive regimes will strike the right balance between consumer protection and innovation remains to be seen.

The UK has been slower off the mark: having published its AI strategy in 2021, the Government only released its highly anticipated AI white paper in April this year. The paper articulates a pro-innovation approach that differs from the EU's centralised, rules-based regime by being sector-led and principles-based. Essentially, the UK Government would leave it to sectoral regulators to identify any gaps in their own areas of coverage and to address them through new regulations or guidance.

And herein lie two major challenges, both identified recently by the House of Commons Science, Innovation and Technology Committee (SITC) in its interim report on the governance of AI.

The first is the pace of advancement of AI regulation in the UK. The sooner policymakers plug gaps in the current patchwork of AI-related rules with laws and regulations that are fit for purpose and have the force of law, the sooner businesses and consumers can become comfortable engaging with AI systems.

Having taken more than 18 months to progress from AI strategy to white paper, the UK now faces a further wait while sectoral regulators each carry out an initial review of their areas of coverage – as we have seen from the Competition and Markets Authority (CMA) – before putting into action any remediation needed to plug the gaps. This risks both the technology and the market moving on; indeed, the SITC noted the prospect of the UK "being left behind by other legislation – like the EU AI Act – that could become the de facto standard and be hard to displace".

The second is the decision to lean on multiple regulators and authorities to plug gaps, rather than putting a tightly focused AI Bill before Parliament. Sector-led approaches require harmonisation among policymakers to plug gaps properly without creating entirely new ones.

This is a challenge even domestically, but the ubiquitous nature of AI technology requires global interoperability as well. That is as much about simplifying the AI landscape for businesses and consumers as it is a matter of national security in a world where AI could be used by threat actors – state or otherwise – to harm our liberal, democratic values.

The US and UK governments are already using foreign direct investment laws to block AI-related investments from and to potentially hostile nations on grounds of national security, but a forum of like-minded countries is still seen as vital to developing mutual protection from evolving threats. 

What's next?

As policymakers worldwide chart a course through the complex and evolving AI landscape, we will – after this inevitable period of turbulence – see the greater legal and regulatory certainty needed to boost business and consumer confidence in using AI technology. The dialogue now emerging at both domestic and global level, with the AI Safety Summit at Bletchley Park set for 1 and 2 November, is a step towards greater alignment between policymakers. But this momentum must now be maintained, and indeed built on.

There is still a need to support this progress with better education about the risks of AI systems, for businesses and consumers alike. Despite the excitement about the possibilities of AI technology, fear and distrust will only be minimised through balanced debate about the risks as well as the benefits and opportunities. So yes, there is promising progress, but the bottom line remains: if we want long-term success, we need more dialogue and less fanfare.