Deepfake impersonations made the news recently when a TikTok account called deeptomcruise went viral. To the untrained eye the replication was impressively deceptive, and while it was clearly labelled as a parody, now that the technology has been proven to work, what implications could deepfake videos have for the law and your personal rights?
What exactly is a deepfake?
Deepfake videos, to describe them in the simplest terms, use artificial intelligence to superimpose one person’s facial image onto the body of somebody else, creating a digital lookalike. You may be familiar with similar technology in apps like Snapchat, which use ‘filters’ to superimpose faces onto an image.
Up until recently the quality of deepfake images and videos was not particularly high, and the very obvious fakes were used almost entirely for entertainment purposes. Technology moves fast though, and some of the videos we’re starting to see would comfortably fit in with any Hollywood production.
Another example of a less-than-nefarious real-world use of this kind of technology is the use of CGI in films to fill in the roles of deceased actors. A well-publicised example was Peter Cushing’s Grand Moff Tarkin appearing in Star Wars: Rogue One more than two decades after the actor’s death. A stand-in body double was used and the look was then completed using facial tracking.
Deepfakes use a combination of artificial intelligence and machine learning to create the image of the person. Essentially, by feeding the system enough information, a deepfake image can be created from scratch. What information is required to generate the fake? Images and video, and where better to look for such material than social media or, in the case of Tom Cruise, a quick Google search.
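For readers curious about the mechanics, the sketch below illustrates the shared-encoder, twin-decoder idea behind many face-swap tools: one network learns a generic representation of a face, while separate decoders learn to rebuild each person’s appearance, so a frame of person B can be decoded as person A. It is a deliberately simplified, hypothetical illustration (random tensors stand in for real face crops and the networks are far too small to produce convincing output), not the method behind any particular deepfake.

```python
# Illustrative sketch only: the shared-encoder / twin-decoder idea used by
# many face-swap tools. Random tensors stand in for real face crops and the
# networks are toy-sized; this will not produce a usable deepfake.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # a flattened 64x64 RGB face crop


def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim))


encoder = mlp(IMG, 128)    # shared: learns a person-agnostic face code
decoder_a = mlp(128, IMG)  # learns to rebuild person A's appearance
decoder_b = mlp(128, IMG)  # learns to rebuild person B's appearance

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

faces_a = torch.rand(32, IMG)  # stand-in for scraped photos/frames of person A
faces_b = torch.rand(32, IMG)  # stand-in for scraped photos/frames of person B

# Training: each decoder learns to reconstruct its own person from the shared code.
for step in range(100):
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode a frame of person B but decode it with person A's decoder,
# producing A's face in B's pose and expression.
swapped = decoder_a(encoder(faces_b[:1]))
print(swapped.shape)  # torch.Size([1, 12288])
```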
Real-life implications
The technology has developed rapidly and we’re getting very close to the point where the naked eye will not be able to tell the difference between a deepfake and a real video. One thing 2020 has shown is that fake news has reached epidemic proportions. Donald Trump led the way with a flurry of flagged tweets before eventually receiving a permanent ban, and despite their best efforts the social media giants simply cannot keep pace with the wild Covid-19 conspiracy theories.
The difference with those was that they were posted by real people. For people like Trump, there are real-world consequences beyond a failed re-election bid. What happens when you add another layer to that? What could happen if a deepfake is used to disseminate fake news by impersonating somebody who would normally be considered a pillar of truth? Sure, the individual would issue a denial, but not before the video went viral and created doubt in the mind of the public.
So far that’s broadly a hypothetical fear, but there have already been real-life implications. The Tom Cruise deepfake was created by a professional visual effects artist and a Tom Cruise impersonator, but the tools needed to create such videos are increasingly finding their way into commercially available software.
In May 2018, Davide Buccheri was convicted of harassment after creating fake pornographic images of an intern. In what was an apparent revenge attack after she turned down his advances, Buccheri found himself on the wrong side of the law for creating the images to destroy the intern’s reputation with their employer. The judge handed down a custodial sentence and ordered him to pay compensation to the victim, signalling an intention to pursue such individuals where resources allow. Unsurprisingly, he was also dismissed for gross misconduct.
Another peril of deepfakes in law is their potential to undermine confidence in video evidence. If fake videos became so widespread that a jury doubted the legitimacy of any footage, could they safely convict on the basis of video submissions at all? Even if a video were verified by an independent expert, the lingering doubt could alter how jurors interpret other evidence.
How can the law tackle malicious deepfake videos?
Currently there is no specific legislation dealing with deepfake videos, so proceedings rely on interpretations of existing laws, as shown in the Buccheri case. Depending on the damage caused there are various approaches:
- Privacy and anti-harassment laws – Buccheri was convicted after displaying a disregard for the privacy of his victim. Using a manual version of the techniques described above, he took his victim’s social media images to create the fakes, uploaded them online and also bombarded her with messages and in-person harassment. In this case the uploaded deepfake images were used as evidence of harassment, but possession of the images wasn’t specifically defined as illegal. However, if a video were produced that revealed private information, the torts of breach of confidence and misuse of private information could be applied against the video’s creator.
- Passing off – Should a deepfake be used in a commercial setting (for example an advert) without the permission of the individual in question, the tort of passing off could apply so long as the individual is recognisable to the public. In 2003, the former F1 driver Eddie Irvine successfully sued the radio station Talksport after they manipulated an image to show him holding a radio bearing their branding.
- Defamation – This is a more difficult claim, as any false information would need to result in a quantifiable monetary loss. The Defamation Act 2013 states that the publication of material must cause ‘serious harm’, meaning that if there is no public reputation to harm, success is unlikely. For somebody in the public eye, however, this could prove a useful avenue.
- ‘Revenge porn’ laws – Since 2015 it has been an offence under section 33 of the Criminal Justice and Courts Act 2015 to share an intimate image online without an individual’s consent, but this doesn’t cover fake images. At the end of February 2021, a review of the effectiveness of the current legislation on making and sharing intimate images was published. It specifically mentions “sexualised photoshopping and deepfake pornography”, suggesting that deepfake images would be covered under the ‘making’ portion of the offence should the guidance be updated, and making it likely that this will become law at some point in the future.
How else can we deal with deepfake videos?
At the moment, whilst the law remains non-specific, we rely on big tech to make its own rules. Twitter added new rules covering deepfake media on its platform in the run-up to the US election: depending on the context, and on whether the content is judged damaging or misleading, it can be labelled or removed entirely. Facebook went a step further and banned deepfake content outright, although there is an exception for parody videos made for entertainment purposes. Likewise, Pornhub has also banned deepfake videos, but this relies on them being able to identify such videos in the first place.
We know that some Tweets include manipulated photos or videos that can cause people harm. Today we’re introducing a new rule and a label that will address this and give people more context around these Tweets pic.twitter.com/P1ThCsirZ4
— Twitter Safety (@TwitterSafety) February 4, 2020
Practically speaking, the moderation of such content is a mammoth task. As deepfake technology becomes more accessible, it will be almost impossible to moderate manually. Technology to automate the detection of these videos does exist, but video creators and social media platforms will be locked in a continuous game of catch-up. The Tom Cruise video, for example, was able to trick detection software into classifying it as genuine.
As it stands, deepfake videos aren’t a major concern, but that could change rapidly. Should they reach the point where they affect public opinion on a particular matter, the government would need to legislate. For now the technology is still in its infancy, and with other considerations such as free speech in play there is still a lot of debate to be had before this is settled.
Are you concerned about the development of deepfake videos? Alston Asquith has offices in London and Hertfordshire and can arrange a call to provide some initial advice on whether your case is covered by the law.
We pride ourselves on our relationship with our clients as well as the service we provide. View some of our feedback on Trustpilot.
Visit our contact page to find out how we can help.