UK on the ‘precipice’ of AI deepfakes disrupting our elections, experts say
NationalWorld’s political editor Tom Hourigan has been speaking to experts about the risks artificial intelligence poses to our democratic process
Picture this: it’s October 2024, leaves are on the ground and there are six days to go until the general election. With the Conservatives’ poll numbers improving, Rishi Sunak is in confident mood as his chances of staying in Downing Street appear to be growing. He’s having a good campaign while Sir Keir Starmer is showing the strain of relentless media scrutiny.
Then someone releases an audio recording online of the Prime Minister, taped in a west London pub, saying he doesn’t like King Charles - calling him “interfering” and “difficult”. Social media goes into meltdown, his opponents are up in arms and public opinion turns on its head. There’s one issue: Sunak never made these comments. He wasn’t even at that pub. The PM has become the victim of a deepfake.
This is exactly the kind of scenario the Labour MP Darren Jones - the chairman of the Commons Business Committee - is worried about. Hostile actors, wanting to upend political stability in the West, have at their fingertips artificial intelligence (AI) systems to create video, audio and pictures of events that never happened. The kit is pretty cheap, easy to use and readily available.
For Jones, who’s raised this issue in Parliament and will lead a debate in the Commons tomorrow (May 22) on the UK’s AI strategy, this is about “defending democracy” at the dawn of a new kind of deception.
What is a deepfake?
“Deepfake” gets its name from deep learning - a branch of AI in which layered neural networks, loosely modelled on the human brain, learn patterns from vast amounts of data. The technique has gained a lot of attention in recent months after the release of ChatGPT - which uses deep learning to hold a text conversation with anyone typing in questions, or asking it to do their homework or reply to an email.
Deep learning programmes also allow people to make changes to sound and still or moving images - and even create them from nothing. From face swapping to voice cloning to manipulating people’s expressions, the possibilities - and potential consequences - are seemingly endless. Sometimes, deepfakes cause nothing but a smile: in March, large parts of the internet fell for a doctored photo of Pope Francis wearing a puffer jacket. But they can be much more serious.
At the beginning of Russia’s war with Ukraine, a deepfake video appeared online of President Volodymyr Zelensky urging his soldiers to lay down their arms; he’d made no such appeal. And just last week, a candidate in Turkey’s presidential election - Muharrem İnce - withdrew from the contest after an alleged sex tape emerged. He says it was a deepfake based on edited footage taken from an Israeli porn website.
“It used to be ‘seeing is believing’, well it’s nothing like that now”, Dr Tim Stevens from King’s College London’s Cyber Security Research Group told NationalWorld. “Deepfakes are absolutely rife and people are all too willing to believe what they see in front of them if it conforms to their pre-existing ways of looking at the world.”
“That’s the value of something like AI”, he goes on. “It can produce content that looks to be true and ticks all the right boxes for the right audiences almost with impunity.”
Two months ago, more than 1,000 people - including leading AI researchers and Twitter’s owner, Elon Musk - signed a letter calling for a six-month pause on development of artificial intelligence because of the potential risks to society and humanity. Since then, Rishi Sunak has hardened his approach towards AI, saying he accepts “guardrails” needed to be put in place to tackle (among other things) misinformation.
Might imprints work?
One idea to get on top of political deepfakes before they cause disruption to the voting process is to proactively change the rules on how material is shared online. From this November, any adverts, social media posts or videos made by political campaigners in the UK will need to include a digital “imprint” or watermark - showing who’s produced and funded them, giving their name and address. This will apply all year round, not just at election time, and to anyone paying to place a political ad on the internet.
But Dr Stevens thinks this is better in theory than in practice. “For a watermark to work, you have to have it certificated in some way and we know from the wider cybersecurity landscape that certification authorities can be breached and fake certificates issued. It can be gamed like anything else.”
One of Europe’s leading legal experts on deepfakes, Kelsey Farish, told NationalWorld she has another concern. “What if you have genuine or authentic content that doesn’t necessarily have that imprint?” she asks. “Maybe someone innocently forgot to add the watermark, it’s posted online and everyone then assumes it’s fake. It allows people to use the ‘liar’s dividend’ - pointing to genuine, authentic content and saying it’s not real”.
Going after the deepfakers
If imprints don’t work, how worried should politicians be about deepfakes derailing their election chances - and what can they do about it? It’s already illegal to publish a “false statement of fact” about an election candidate’s “personal character or conduct”. But making false claims about a candidate’s policies or pledges is not illegal, partly to protect freedom of speech. This means that, in the eyes of a court, a deepfake might be considered fair game depending on its content.
“Political parties can ask social media platforms to take the clips down, they can do proactive PR pointing out they’re aware of the clip and it’s not genuine”, Farish says. “They could also do forensic digital tracing - working out where the clip emerged, whether it was posted by a bot and if they find out who’s responsible, send a cease and desist letter.”
“But that’s incredibly unlikely because it’s very, very difficult to establish where these deepfakes originate. And by then the footage has already been circulating, people will have already downloaded it, and the impression has been made on those who’ve seen it.”
Does the law need changing again?
Some hope the Online Safety Bill currently going through Parliament will force social media firms to remove deepfakes more quickly - and deter people from posting them in the first place. Others think more legislation may be needed as the effects of AI become more apparent.
The Electoral Commission - the independent body that oversees voting - told NationalWorld the UK needed the right laws to “ensure the electoral system is safe and secure”. It said it had urged the government to consider “strengthening the powers of regulators so they are equipped to deal with future challenges” and was also worried that the laws around election spending hadn’t “kept pace with the growth and methods of digital campaigning”.
In a statement to NationalWorld, a government spokesperson said ministers recognised the threat of digitally manipulated content, and took it “very seriously”.
They went on: “Our priority is always to protect our elections and take action to respond to any threats to the UK’s democratic processes and institutions. Under the Online Safety Bill, all companies subject to the safety duties will be required to remove illegal content from their platforms when they become aware of it. This will include the unlawful use of deepfakes or manipulated media.”
Dr Stevens says we’ve been here before, though. “We have seen children die because social media companies didn’t clamp down on the promotion of suicide material”, he points out. “We’ve seen regulation and law attempting to respond to that, but the material’s still online and available for people with the will to find it.”
For Farish, one of the best ways to combat the disruption that political deepfakes might cause is public awareness. “The first step is understanding there’s a possibility that what you’re seeing is not real,” she says. “People have been trained to believe that video is real so how do we as a society separate material that’s true and not true?”
“We’ve never had to do this before at this scale,” she concludes. “We’re on the precipice.”