Deepfakes have been around since the late 2010s, but their increasingly sophisticated capabilities, not to mention their greater availability to the public, raise the question: how can brands maintain control of their image and reputation, and is it even possible anymore? Rory Lynch, a legal director at Gateley Legal, explores why reputation management in the age of AI is not only achievable – it is more crucial than ever.
First emerging on Reddit in 2017, deepfakes took the concepts of image distortion, manipulation, and falsification to new levels. Using complex artificial intelligence (AI) algorithms and reams of publicly available data, a Reddit user swapped the faces of women in pornographic videos with those of celebrities such as Taylor Swift or Scarlett Johansson. To the idle scroller, the videos were convincing enough to pass as genuine footage of a global superstar. To the technological community, a new capability for AI-powered digital content creation had been unlocked – for better and for worse.
Since then, use of AI-based face swapping and speech synthesis technology to create deepfakes has exploded. In its benign uses, it has helped history come alive at museums by enabling famous figures, both living and dead, to interact with visitors. Dubbing in films and TV can also become more synchronised and realistic through such technology.
Its more ignoble capabilities, however, are concerning. From replicating the voices of CEOs to con companies out of millions, to spreading fake news via digitally doctored photos and videos, deepfakes can be used, and are being used, by individuals and entities to blur the lines between reality and fiction.
In a world where anyone can use, exploit, and distribute content for their own purposes, it is tempting for individuals and businesses alike to wonder whether it is still possible to claim ownership of, and control over, their own stories, reputations, and public perceptions. The answer is yes, it is. Like any new technology, however, deepfakes must be understood and their risks accounted for before countering them can become part of a business’s all-important communications strategy.
Deeply Disturbing
Deepfakes come in many forms, both visual and aural. The most notable is arguably face swapping – particularly where this concerns pornographic images or videos – largely due to its planned inclusion under the Online Safety Bill as a form of image-based sexual abuse.
Deepfakes, however, can also involve creating the faces of entirely fictitious individuals, manipulating the features and/or surroundings in an image or video, and re-creating or replicating voices digitally.
The technology behind deepfakes is highly sophisticated and requires large volumes of data to create even a passably convincing replica. In a common face-swapping approach, for example, an AI encoder is trained on thousands of images of two people’s faces, learning the features the faces share and compressing each image into a single latent representation; a separate decoder for each person then reconstructs that person’s face from the shared representation, allowing one face to be swapped for the other.
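For readers curious about the mechanics, below is a minimal, illustrative PyTorch sketch of this shared-encoder approach. The class name, image sizes, and layer shapes are all assumptions made for illustration – a toy model, not any real deepfake tool.

```python
# Minimal, illustrative sketch of the shared-encoder face-swap idea.
# Architecture, sizes, and names are assumptions, not a real tool.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # A single shared encoder learns the features common to both
        # faces and compresses each image into one latent vector.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        # Two decoders, one per person, each trained to rebuild its
        # own face from that shared latent space.
        self.decoder_a = self._decoder(latent_dim)
        self.decoder_b = self._decoder(latent_dim)

    @staticmethod
    def _decoder(latent_dim: int) -> nn.Module:
        return nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def swap(self, image_of_a: torch.Tensor) -> torch.Tensor:
        # The swap itself: encode person A's photo, then decode it
        # with B's decoder, yielding B's face with A's pose and expression.
        latent = self.encoder(image_of_a)
        return self.decoder_b(latent).view(-1, 3, 64, 64)

model = FaceSwapAutoencoder()
fake = model.swap(torch.rand(1, 3, 64, 64))  # stand-in for a real photo
```

The swap works because the shared encoder captures pose and expression common to both faces, while each decoder supplies one person’s identity.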
Generative Adversarial Networks (GANs) can also create deepfakes by pitting two AI algorithms against each other across multiple cycles: a ‘generator’ produces synthetic images while a ‘discriminator’ tries to spot them as fake. The final deepfake is refined, cycle after cycle, by feedback from the discriminator, until the forgery becomes difficult to distinguish from a real image.
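Again purely for illustration, here is a minimal sketch of the adversarial training loop that drives a GAN, under the same caveats: the model architectures, hyperparameters, and data below are stand-in assumptions, not a working deepfake generator.

```python
# Minimal, illustrative sketch of a GAN's adversarial training loop.
# Models, hyperparameters, and data are stand-in assumptions.
import torch
import torch.nn as nn

latent_dim, image_dim = 100, 64 * 64 * 3

# Algorithm 1: the generator turns random noise into a candidate fake.
generator = nn.Sequential(
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, image_dim), nn.Tanh(),
)
# Algorithm 2: the discriminator scores an image as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
real_images = torch.rand(32, image_dim)  # stand-in for real training photos

for cycle in range(3):  # real systems train for many thousands of cycles
    # Train the discriminator to tell real images from current fakes.
    fakes = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fakes), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator: it is this
    # feedback, not a user's, that refines the final deepfake.
    fakes = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fakes), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```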
Due to the level of processing power required, standard computers with no specialist software are rarely able to create convincing deepfakes. The technology is, however, becoming increasingly widespread, with some mobile apps allowing users to add their faces to certain TV and film characters.
Fact or Fiction?
According to AI firm Deeptrace, almost 15,000 deepfake videos were available online in September 2019, the majority of them targeting female celebrities. Their scope is widening, however, as is the range of individuals and organisations in deepfake creators’ sights.
A UK subsidiary of a German energy firm, for example, was tricked into transferring £200,000 into a Hungarian bank account after fraudsters allegedly used deepfake technology to clone the CEO’s voice on the phone. Even titans like Tesla are not immune: the company’s stock price tumbled after footage of CEO Elon Musk appearing to smoke marijuana on a live web show went viral – a reminder of how quickly a single clip, genuine or fabricated, can damage a brand.
Furthermore, although the fundamental risk at the heart of deepfakes is not new, the degree of risk is. Distorting images to parody individuals or pass off fictitious events as real dates back at least to 1917, when cousins Elsie Wright and Frances Griffiths used cardboard cut-outs of fairies to dupe viewers into believing the mythical creatures existed in their garden in Cottingley.
Sophisticated image-editing programs such as Photoshop have also been available since the late 1980s, giving users the ability to change the backgrounds of photos, remove blemishes, and add new elements.
The prevalence of social media, the ease and speed with which hoaxes and fake news travel around the world, and the growing sophistication of the technology have, however, arguably amplified the harm such capabilities can cause – particularly those offered by deepfakes. As a result, industry commentators are noting a rise in ‘reality apathy’, with many individuals choosing to trust only news delivered from a highly curated and personal selection of sources.
Legal Protection
Due to the technology’s novelty, there are not yet hard-and-fast methods by which brands can seek legal recourse against deepfakes – particularly those that tarnish a brand’s reputation or implicate a company or its employees in illegal activity.
In the UK, a patchwork of legal options exists, depending on the severity of the deepfake’s accusations and its impact on the business, although these options are unlikely to apply where a deepfake is an obvious parody.
Defamation claims, for example, could be brought over deepfakes that implicate a business in illegal activity, so long as the business can convince the court that the video was not intended as parody and that its publication has caused, or is likely to cause, the business serious financial loss. Businesses will also need to identify the deepfake’s original creator – easier said than done given the ease with which content can be posted anonymously online. As of February this year, there had been no civil cases involving defamation claims against deepfakes.
Intellectual property (IP) law is another potential avenue that is currently being explored in the legal sector, particularly as deepfakes are based on existing images to which copyright law applies.
As of 2021, however, neither the UK nor the US courts had made any decisions explicitly discussing this as an option, although industry commentators have identified several possible, but highly contextual, aspects of IP law that may be useful.
The UK does not recognise the concept of image and personality rights, but the common law tort of ‘passing off’ may be used by individuals and businesses whose goodwill is exploited and misrepresented – for example, where a deepfake makes it appear that they have made claims or endorsements without their agreement or consent. This may be particularly relevant where deepfakes make influential business figures appear to support certain political or social causes.
Proactive Protection
With the law’s ability to provide effective protection against deepfakes still in its infancy, mitigating their risks through proactive reputation management has never been more important.
As a starting point, businesses need to review and understand the content about them that is already available online. Like most AI-based content generators, deepfake technology uses data from the internet as the foundation of its images or videos. Halsey Burgund, a fellow at the MIT Open Documentary Lab, advises both businesses and individuals to “think of everything one puts out on the internet freely as potential training data for somebody to do something with”.
Quality control of, and consistency in, branded content and messaging has arguably never been more important, not least because it helps to develop an authentic and recognisable voice that can be better distinguished from fake or manipulated content.
Where a deepfake is discovered, it is vital that businesses act quickly. In addition to reporting the offending content and requesting its removal, businesses should prepare holding statements for their websites, social media accounts and – if necessary – the press, stating awareness of the deepfake and its content, and highlighting that the video was not created, or endorsed, by the company.
For deepfakes involving serious accusations, businesses should then work with legal teams to create more detailed messaging that both addresses the facts and advises those affected on what to do next. Speed is of the essence here – the faster a business can counter a deepfake with its own communication, the less likely the deepfake is to spread. Done effectively and strategically, in collaboration with legal and communications professionals, such a response can even help a business build trust with its stakeholders and highlight its authentic voice.
Hoping deepfake technology will fade into obscurity is not an option. Pandora’s Box is already open, and businesses that choose to ignore it are leaving their brands vulnerable to distortion, manipulation, and abuse.
Like many challenges posed by digital technology, however, having in place a comprehensive communications strategy, backed by authentic, consistent, and distinct brand messaging, will be one of the most effective ways to counter, and even harness, the capabilities of deepfakes in the age of AI.
Rory Lynch is a legal director at Gateley. He specialises in media litigation, particularly reputation management and protection, defamation and privacy. Heading up Gateley’s reputation management team, he acts for corporates, politicians, musicians, sportspeople, actors, and ultra-high-net-worth individuals, families and royal houses. Earlier this year, he was named a recommended reputation and privacy lawyer in the 2023 Spear’s Reputation Index.