
Seeing is deceiving: manipulated digital content, deepfakes and the law

By Chloe Flascher, Associate, Withers

Chloe Flascher takes a closer look at the dangers of deepfakes and the legal protections introduced by the Online Safety Act in England and Wales

The BBC surveillance drama ‘The Capture’ depicts a world in which deepfakes are used as a justice-seeking tool by the ‘department of corrections’, a fictitious UK government department reminiscent of Orwell’s Ministry of Truth. The department distorts, manipulates and deepfakes video footage, which is ultimately relied upon as admissible evidence in the English courts to convict individuals of crimes they never, in fact, committed. Its objective is to clean up the streets: to bring to justice individuals who have committed crimes but who have previously evaded responsibility for lack of evidence or the inability otherwise to prosecute. Most significantly, the deepfake video purporting to depict the ‘crime’ at the centre of the first season was seen, believed and relied upon by Lords Justices of the Court of Appeal.

While the ‘department of corrections’ may be the BBC’s own fiction, in 2024 we are already seeing immense growth in the UK of artificial intelligence (AI)-enabled crime and, in particular, of the use of deepfake technology to commit unlawful acts. While such abuse consists largely of deepfake pornography targeting those in the public eye, particularly in the entertainment industry, senior executives at high-profile companies and politicians are also proving to be at risk of falling victim to deepfakes.

The term ‘deepfake’ describes the use of deep learning, a sub-field of AI and machine learning, to generate new but synthetic versions of existing images, videos or audio material. The Online Safety Act 2023 (OSA) is a long-awaited piece of legislation in England and Wales whose new criminal offences came into force on 31 January 2024. Under the OSA, offences have been introduced to tackle intimate image abuse and ‘revenge porn’, meaning that sharing (or threatening to share) intimate deepfaked images or videos of someone without their consent – even if created or disseminated without the intention of causing distress – is now a criminal offence.

For the reasons outlined in this article, these provisions offer potentially important avenues of redress for victims of deepfaked revenge porn. However, while the new offences introduced under the OSA protect those who fall victim to deepfaked illegal content, it is important to recognise that deepfake content more generally, including the use of deepfakes in misinformation campaigns, does not fall under the OSA at all.

Alongside the alternative legal avenues available to victims of deepfake content, discussed below, in 2024 it will be essential to shift towards a zero-trust mindset when distinguishing what is authentic online from what is manipulated.

What protections does the OSA afford?

The OSA creates new offences of sharing, or threatening to share, intimate photographs or film. The offences all contain wording covering the intentional sharing of ‘a photograph or film which shows, or appears to show, another person […]’. The UK government made clear in its 31 January 2024 circular that the focus on a photograph or film that ‘shows or appears to show’ is intended to ensure that the offences cover not only ‘genuine’ content but also content that has ‘been altered or manufactured so that it appears to be a genuine photograph or film.’

The introduction of these new offences is welcome, as it brings deepfakes explicitly within the scope of intimate image abuse under English law. The recent victimisation of Taylor Swift has shown that new legislation was required in this area, especially for those whose images are vulnerable to abuse because they are in the public eye. The previous attempt to criminalise the distribution of private sexual photographs or films without the consent of the subject (under Section 33(1) of the Criminal Justice and Courts Act 2015) required proof of intent to cause the subject distress. This left victims in the position where, if the perpetrator’s objective was ‘mere’ sexual gratification or blackmail, the offence could not be made out. Notably, the new offences under the OSA do not require proof of intent to cause distress, and they cover not only the actual sharing of content but also threats to share it (ie, the blackmailing of victims).

So, what is the problem?

The World Economic Forum ranks disinformation as a top global risk for 2024, with deepfakes among the most worrying uses of AI. It is, therefore, rather disappointing that the OSA only contemplates deepfakes where the AI-generated content amounts to one of the prescribed criminal offences under the new legislation. The result is that deepfakes of politicians, CEOs or celebrities which are not pornographic in nature fall entirely outside the OSA regime, and victims must look elsewhere for legal protection – primarily to laws relating to copyright infringement, privacy and data protection.

There are also clear potential enforcement issues in circumstances where deepfake abusers out-smart police forces by using virtual private networks (VPNs) or masking IP addresses, as occurred in the Taylor Swift case. It is also, rather oddly, a defence for a person charged under these sections to prove that they had a reasonable excuse for sharing the photograph or film without consent, or a reasonable belief in consent. Novel questions may, therefore, arise as to whether a sharer of deepfaked content can in fact hold a ‘reasonable belief’ in the subject’s consent where they suspected the material to be deepfaked.

Crucially, though, while the OSA provisions seek to tackle non-consensual deepfakes in an image abuse context, the OSA makes no attempt to criminalise the creation or sharing of any other type of AI-generated content created without the subject’s consent. It does not include, for example, measures to tackle deepfakes more generally, including those involving elements of blackmail.

It is true that some deepfakes are relatively harmless if created simply for satirical purposes. Who doesn’t want to see Pope Francis wearing a puffer jacket? In 2024, however, we need to brace ourselves as we enter an ‘infopocalypse’ driven by the increased use of hyper-realistic manipulations of audio-visual content online. The concern of many is that this is happening not only in the political sphere, through deepfakes of politicians and candidates, but also through deepfakes targeting chief executives and senior management, and even in the sphere of geopolitics and as part of warfare.

The Daily Telegraph recently reported that a finance worker in Hong Kong fell victim to a deepfake video call, in which fraudsters used deepfake technology to trick him into believing that he was speaking with his company’s CFO. The police reportedly confirmed that every individual on the call was a fake representation of a real person. The fraudsters conned the employee into paying over £20 million to five local banks, and the scam was only detected after he spoke to the company’s head office days later. Notably, the fraudsters used genuine, publicly available footage from past video conferences, with fake audio added.

It is only a matter of time before the same fraudsters who used the image of MoneySavingExpert.com founder Martin Lewis to scam people online move on to generative AI, such as deepfakes, to dupe others.

Legal protections involving content not deemed to be illegal under the OSA

The EU AI Act treats AI systems intended to influence the outcome of elections or voter behaviour as high-risk systems. The OSA, however, is simply not interested in tackling online intermediaries using AI outside of the prescribed criminal offences. Where someone in the UK falls victim to a non-consensual deepfake outside an image abuse context, and the publication or broadcast takes place in the UK, English law may afford protection under copyright law, privacy law or data protection law.

A deepfake of someone contains their personal data, since video, audio or images of a person constitute information relating to an identified or identifiable natural person. Deepfakes of individuals may therefore breach the UK General Data Protection Regulation (GDPR) where personal data is used without consent, and there may be potential for GDPR-based injunctive relief. Of course, in order to generate a deepfake, the technology must use an existing image of a living individual, and the copyright in that image will usually belong to the photographer. Training generative AI, such as deepfake technology, without the permission or licence of the copyright owner is therefore likely to amount to copyright infringement (whether the owner is the subject of the photograph or film or someone else, such as the photographer). The tort of misuse of private information has also long recognised that protection can extend to false private information, provided there is a reasonable expectation of privacy in that information and an unjustified intrusion which is not in the public interest.

Looking forward

The second season of the BBC’s ‘The Capture’ goes on to depict a UK government minister falling victim to an entirely deepfaked interview on Newsnight. The deepfaked version promotes security policies antithetical to the minister’s actual positions, damaging his political reputation; it later emerges that the video manipulation was carried out by a Chinese competitor as part of a disinformation attack.

Deep learning is especially useful in fields where raw data needs to be processed, such as image recognition, natural language processing and speech recognition. Deepfakes, which are proliferating online, are clearly a dangerous use of that technology. We will only continue to see victims of deepfake pornography, the use of deepfakes as political propaganda (in both the UK and the US), and ever more elaborate cyber heists committed through the deepfake impersonation of CEOs, with the blackmail of innocent individuals as a result.

The reality is that whether a deepfake is immediately obvious or sophisticated and undetectable, for the subject the very fact of its publication means the reputational damage is already done. It is well known that deepfakes target social media platforms, where misinformation spreads like wildfire, and generating deepfakes is cheap and easy, with user-friendly software often available for free. Indeed, the deepfake market is reportedly valued at $7 billion in 2024 and estimated to grow to $38.5 billion by 2030.

The MIT Technology Review has posited that the mere idea of AI-synthesised media is already making people stop believing that real things are real. MIT drew on a detailed study of misinformation by DeepTrace Labs, a cybersecurity firm, to claim that ‘the biggest threat of deepfakes isn’t the deepfakes themselves.’ In other words, over-analysing whether a video, audio clip or image is a deepfake – and thus undermining trust in the media – may be a greater danger than the actual deception created by a ‘real’ deepfake. The reality is that the success of deepfakes will continue to ride on the innate human capacity to acquire false beliefs. In my view, and regrettably for trust in the media, the only real solution to mitigating the danger of deepfakes will be to adopt a seeing-is-deceiving approach to much online content, especially on social media platforms, which contain vast swathes of user-generated content.