Election officials are working “at speed” with the Home Office on a pilot project to combat the use of deepfakes to target candidates standing in this year’s Scottish and Welsh elections.
Officials at the Electoral Commission in Scotland said they and the Home Office expected software capable of detecting AI-generated deepfake videos and images to be operational before election campaigns begin in late March.
Sarah Mackie, the commission’s chief in Scotland, said that if the software detected a hoax video or image, officials would contact the police and the candidate concerned, and inform the public, although she acknowledged it could not always provide 100% certainty.
They would then urge the social media platform to take the content down, she said. However, because such action is currently voluntary, the commission also wants legally enforceable “takedown” powers that would require media platforms to remove hoax material.
She said the commission had urged the UK government to consider introducing such powers.
“What we don’t have at the moment, and what we want, is called takedown powers – where we approach social media companies and demand something is taken down,” Mackie said.
There are no known cases of deepfakes emerging during UK election campaigns, but their use has surged in elections abroad, a trend that has accelerated dramatically with the spread of free AI image-generation tools.
British elections and referendums have repeatedly been targeted by often state-sponsored fake social media accounts from countries including Russia, Iran and North Korea, typically designed to sow dissent or amplify controversy.
Speaking at a pre-election briefing for journalists in Edinburgh, Mackie said the commission was also working with the Scottish parliament and police on a “safety and confidence” project to support women and black, Asian and minority ethnic candidates who experience gender- or race-based abuse or harassment.
She said a study from 2022 found about half of all female election candidates had experienced abuse, with many saying they would not stand again. Candidates from minority-ethnic backgrounds also said the abuse made them too scared to stand again, undermining diversity at Holyrood.
Mackie said the upsurge in AI-driven and pornographic “undressing” technology, particularly generated by Elon Musk’s Grok AI platform, would fall into that category if used during an election and would be reported to the police.
Musk’s X and Grok platforms have been criticised for failing to remove fake, pornographic and harmful content, with senior politicians at Westminster calling for urgent action by the government and the media regulator Ofcom.
Mackie said there was no clear legal role for the Electoral Commission or other agencies to regulate deepfakes during elections, but both the commission and Home Office needed to test what action they could take.
She said the pilot project could be rolled out for all UK elections if it proved successful.
She said: “We don’t regulate campaigning but there is an empty space here where it’s a little bit like there’s lots of regulations surrounding the edge of the ring.
“So what we are doing is just jumping into the centre of the ring by sort of saying, well, let’s see what we learn from this and then share it with the other people.”
A Home Office spokesperson said the UK’s Online Safety Act required social media companies to remove unlawful content and to prevent the spread of false information that could cause psychological or physical harm.
The spokesperson said: “Protecting elections from the threat of sophisticated deepfakes is vital to maintaining public trust in our democratic system.
“This pilot will help to detect and tackle deepfake material and protect the public from the impact of disinformation.”