The Guardian - UK
Politics
Michael Savage Policy Editor

Call for action on deepfakes as fears grow among MPs over election threat

A Sky News clip that edited Rishi Sunak’s encounter with a former NHS worker was criticised for misrepresenting what happened. Photograph: Sky News

Most MPs across Westminster fear the use of deepfakes and AI-generated content will undermine the integrity of the next general election, amid demands for an urgent overhaul of their regulation ahead of polling day.

Ministers are already concerned about the effect the technology has had on elections overseas, with a survey of MPs revealing that a majority believe it poses a threat to the integrity of the coming election. Some 70% fear AI will increase the spread of misinformation and disinformation.

With 2024 billed as a huge year for democracy, as elections take place around the world, senior figures from all major parties are pushing for the issue to be prioritised before the election campaign takes off in earnest. All parties are being urged to promise not to amplify material they cannot verify.

Both London Mayor Sadiq Khan and Labour leader Keir Starmer have already been targeted by deepfake attempts in recent months, with invented audio clips designed to damage them. Meanwhile, security minister Tom Tugendhat has warned that deepfake attacks have already been attempted in the run-up to the Slovakian elections. Last week, chancellor Jeremy Hunt warned that the development of AI had to be in line with “liberal democratic values”.

A survey of MPs seen by the Observer shows that 57% of those asked said AI could negatively interfere with electoral integrity in 2024. Two-thirds (65%) said political parties and politicians should be transparent about how they are using AI tools in campaigns, according to a YouGov poll commissioned by the communications consultancy Cavendish.

The vast majority of general voters – some 78% – backed a requirement for political parties to disclose their use of AI.

The findings are included in a forthcoming report by the Demos thinktank, which calls for all parties to sign up to new standards on the use of AI and to publish their internal guidelines for using it. Researchers also call on parties to commit to not amplifying suspect material.

Robert Buckland, the former Tory justice secretary, said action was needed ahead of “the biggest year of elections in world history”. He added: “We have to make sure that the clear and present danger to democracy presented by deepfakes and AI-generated misinformation is both headed off and mitigated by direct action.”

Khan told the Observer that the far right had sought to use the material against him to “raise tensions, sow hatred and turn communities against each other”. He urged ministers to heed a call from the Electoral Commission to update laws on disinformation. “This trend is only likely to get worse in the weeks and months to come in the lead-up to the London mayoral and general elections later this year,” he said.

“The danger this new and unregulated technology poses to our politics and democratic freedoms, as well as wider society, can’t be overstated.

“I also urge the tech industry, the police and the wider criminal justice system to work together and provide researchers with the data they need to identify and examine the changing risks and challenges of this harmful content.”

Peter Kyle, the shadow science and technology minister, said the technology could have “potentially devastating consequences and can further erode trust in institutions”. He added: “A Labour government would urgently introduce binding regulation of the small group of companies developing the most powerful AI models that could, if left unchecked, spread misinformation, undermine elections and help terrorists build weapons.”

Polly Curtis, chief executive at Demos, urged all parties “to come together to agree on a way forward to protect the democratic systems they hold so dear and establish responsible norms about how generative AI is used”.

In addition to the threat of fake content, some Tories have also been warning of the dangers of misrepresenting real material after an encounter between Rishi Sunak and a former NHS worker last week. Sky News tweeted a video showing the voter berating Sunak about the state of the NHS. Sunak is then shown laughing and walking off. After dozens of Labour MPs and commentators criticised Sunak, Sky News broadcast the full footage, which showed the pair had walked off together before ending the exchange with a handshake. A Downing Street source said Sunak had been laughing at a comment from someone in the crowd, rather than at the woman.

A Department for Science, Innovation and Technology spokesperson said: “We are working to ensure we are ready to respond to threats to our democratic processes, including through our Defending Democracy Taskforce.

“The Online Safety Act will soon make social media platforms legally responsible for removing illegal mis- and disinformation and enforcing their terms of service. We have also introduced the Digital Imprints Regime, which requires certain political campaigning digital material, including AI-generated material, to have a digital imprint that makes clear to voters who is promoting the content.”
