


Deepfakes and fraud: an ever-increasing risk

Opinion

By Mark Jones, Partner, and Henry Watkinson, Associate, Payne Hicks Beach

Mark Jones and Henry Watkinson discuss recent examples of deepfakes being used for nefarious purposes and the possible avenues for prosecutors

Deepfakes are increasingly used by fraudsters and scammers. They can convincingly impersonate a person’s appearance, which increases the risk of relying on a video call to establish who you are speaking to.

The rise of deepfakes

Numerous celebrities have been targeted; fake endorsements have misused the identities of Tom Hanks, Taylor Swift and Elon Musk, to name a few. These are not amusing videos shared among a small group of people on lesser-known chatrooms in dark corners of the internet – over 47 million viewers saw deepfake pornographic images of Taylor Swift on X (formerly Twitter).

As recently as 24 February this year, the BBC reported that celebrities, including Piers Morgan and Oprah Winfrey, had been deepfaked in advertisements by a US influencer, Wesley ‘Billion Dollar’ Virgin. Real footage was overlaid with deepfake impersonations of various celebrities, so that they could be heard promoting the influencer’s online self-help course. At first glance, and even on repeat viewing, the viewer could be excused for believing the videos to be genuine; the only giveaway is the implausibility of celebrities such as Nigella Lawson promoting the product. Lawson’s spokesperson branded the ad as ‘fraudulent’.

Deepfakes are also a useful tool in romance frauds. These occur when a criminal adopts a fake online identity to gain a victim’s affection and trust. Once these have been secured, the fraudster uses the illusion of a romantic or close relationship to manipulate and steal from the victim. Action Fraud notes that scammers use language to manipulate, persuade and exploit, so that requests for money do not raise alarm bells.

Deepfakes are not restricted to celebrities. Increasingly, fraudsters are using deepfake audio or video technology to assume the identity of a victim’s relative. Fraudsters no longer need to pose as a potential romantic interest and invest days or months in building trust with a target. Using deepfake technology, they can imitate family members or close friends, often apparently in distress and in urgent need of assistance or funds. Most people will be quick to help loved ones, making the fraudster’s job that much easier.

Taken a step further, artificial intelligence (AI) fraud could take the form of a work colleague asking for a payment to be made: the colleague’s voice, and sometimes even their face, has been replicated. Recently, in Hong Kong, a deepfake video call was used to steal over £20 million from a company. A multi-person video conference took place, but everyone on the call was fake other than the unwitting employee, who went on to make the payments.

The law

The Online Safety Act 2023 recently introduced a series of new communications offences, one of which made it an offence to share sexual deepfake imagery. However, prosecutors still face difficulties where deepfakes are not sexual in nature but are instead used purely for the purposes of fraud. Prosecuting authorities therefore have to rely on offences in existing legislation, for example the Fraud Act offences.

The offence which immediately appears relevant is Section 2 of the Fraud Act 2006: fraud by false representation. To be guilty of the offence, a defendant must dishonestly make a false representation, knowing that it is or might be untrue or misleading, with the intention of making a gain for themselves or another, or of causing loss to another (or exposing another to a risk of loss).

To unpack the legalese a little, it is worth examining the ingredients of the offence. A ‘representation’ for the purposes of the Act means any representation as to fact; there is no limit on the way in which the representation can be expressed, and it includes representations made to a machine (i.e. inputting data known to be false). This will surely cover videos or audio generated to convince another of an untruth.

Where prosecutors can come unstuck is the concept of ‘gain’. Gain is defined in Section 5 of the Act and extends only to money or other property; it includes keeping what one already has, while ‘loss’ includes another not getting what they might otherwise have received. This offence is therefore well suited to instances such as marriage fraud, where offenders seek to extract money from their victims.

It also seems very plausible that Sections 6 and 7 of the Fraud Act 2006 may be relevant to the production of deepfakes for a nefarious purpose. Section 6 creates the offence of possessing articles for use in fraud, and Section 7 that of making or supplying such articles. Prosecutors must prove that the article was held or created with the intention of committing fraud. Helpfully, an ‘article’ has an extremely wide meaning and includes data held in electronic form.

In addition to criminal prosecution, a victim may have recourse to the laws of defamation, data protection, intellectual property and even privacy and harassment, especially in cases where the media is made public.