AI deepfakes in campaigns may be detectable, but will it matter?

By Jim Saksa, Roll Call

At some point in the months leading up to the 2024 election, a tape will leak that confirms voters’ worst fears about President Joe Biden. The audio, a bit grainy and muffled as if it were recorded from a phone in someone’s pocket, will have the 80-year-old sounding confused, perhaps seeming to forget that he’s president, before turning murderously angry. It may arrive in journalists’ inboxes from an anonymous whistleblower, or just go viral on social media.

Or maybe the uproar will be over audio of former President Donald Trump saying something that his supporters find disqualifying.

Whether such a clip is real or the work of new, startlingly realistic generative AI models, the affected politician will call it a fake and evidence of the other side’s willingness to lie, cheat and steal their way to the White House. And while generative AI experts say they will most likely be able to detect such fakes, it would be impossible to prove a recording is real. And it’s another question, and a doubtful one at that, whether such evidence of some audio’s provenance will matter to partisan voters so ready to reject any data point that doesn’t conform to their worldviews.

Deepfake audio clips, authentic-sounding but false recordings built from short snippets of a subject talking, have become so realistic that they can fool your own mother, presenting painfully obvious potential for underhanded political tactics. AI developers warn that the technology’s rapid development and widespread deployment risk ushering in an epistemological dystopia that would undermine the foundations of representative democracy.

“Campaigns are high stakes,” said Hany Farid, a generative AI expert at the University of California, Berkeley. “We know that we have state-sponsored actors interfering, we know the campaigns are going to play dirty tricks. We know the trolls are going to do it. We know the supporters are going to do it. We know the PACs are going to do it.”

Testifying before a Senate Judiciary subcommittee in May, OpenAI CEO Sam Altman called AI’s capability to generate disinformation personalized to individual targets, one by one, one of his gravest concerns. A United Nations adviser recently told Fox News that a deepfake October surprise was his deepest worry.

Already deployed

Campaigns have already deployed deepfake technology in less malicious ways in the GOP presidential battle. Never Back Down, a PAC backing Florida Gov. Ron DeSantis’ presidential campaign, used AI to make a fake Trump voice read a post the @RealDonaldTrump account made on Truth Social, making it sound as if he had called into a radio show. Before he dropped out of the race for the GOP presidential nomination, a super PAC supporting Miami Mayor Francis Suarez posted videos of “AI Francis Suarez” that touted the accomplishments of “my namesake, conservative Miami Mayor Francis Suarez.”

Editing media to mislead voters is not new and doesn’t require AI. A video of Biden visiting Maui after the devastating fire there was doctored to add chants cursing out the president. And right-wing pundits recently claimed, falsely, that Biden fell asleep during a memorial for the victims, pointing to a low-quality video of Biden looking down for a few seconds. Campaign attack ads have long used the most unflattering pictures of their opponents, often rendered in more menacing black and white, to make them look like shifty-eyed liars.

But generative AI will supercharge the ability of campaigns, and their rogue supporters, to produce believable fakes. Today’s widely available generative AI tools can produce photos and videos that appear real at a glance but fall into the uncanny valley upon closer inspection, like the pictures of Trump hugging Dr. Anthony Fauci that Never Back Down showed in an attack ad. At first view, nothing may seem amiss during the five seconds the collage appears under the heading “REAL LIFE TRUMP”; only when pausing the ad does it become clear how unnatural the hands look or how, in one image, Trump appears to be kissing Fauci on the eye.

‘Shocking how good it is’

Replicating someone’s voice in a believable manner was nearly impossible a few years ago — even the best impressionists could only get close. Today, companies like Respeecher only need to analyze a few minutes of a person’s voice to generate a convincing sonic replica. And that fake can be directed to say anything. “It’s shocking how good it is,” said Farid.

While computers alone can’t produce an eye-fooling video yet, someone with the requisite editing prowess could polish AI’s work product into something that would look real when viewed on a phone or other small screen, Farid said. And given computing power’s long trend of exponential growth, it’s only a matter of time before an AI text-to-video platform’s movie magic alone will be able to trick us. “There’s no putting the generative-AI genie back in the bottle,” Farid said. “It’s coming.”

The AI experts and companies in this field all say the most effective way to mitigate the damage done by deepfakes is for the industry to adopt proactive standards while continuing to develop “passive” methods for analyzing media to uncover markers of generative AI.

Farid’s lab focuses on “passive” detection, researching analytical methods for uncovering AI’s fingerprints on a piece of media. For example, Farid noted how artificial audio tends to produce an unnaturally regular cadence, and how image generators still haven’t caught up to the Renaissance when it comes to the concept of perspective: parallel lines in the real, 3-D world, like railroad tracks, look like they are converging in 2-D pictures, but AI doesn’t seem to like breaking the geometric definition of parallel.
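To make the cadence point concrete, here is a toy sketch of how one such “passive” timing check might work. This is an illustration of the general idea, not Farid’s actual method; the window size, energy threshold and choice of statistic are all assumptions made for demonstration purposes.

```python
# Toy sketch: flag audio whose pauses between speech bursts are
# suspiciously uniform, the "unnaturally regular cadence" described above.
# The window size and energy threshold below are arbitrary assumptions.
import numpy as np

def pause_regularity(samples: np.ndarray, rate: int = 16000,
                     window_ms: int = 25, energy_floor: float = 0.01) -> float:
    """Coefficient of variation of pause lengths between speech bursts.
    Human speech pauses vary widely; a value near zero is machine-regular."""
    window = int(rate * window_ms / 1000)
    usable = len(samples) // window * window
    frames = samples[:usable].reshape(-1, window)
    energy = (frames ** 2).mean(axis=1)          # short-time energy
    speaking = energy > energy_floor             # crude speech/silence mask
    pauses, run = [], 0
    for is_speech in speaking:
        if not is_speech:
            run += 1                             # extend current silent run
        elif run:
            pauses.append(run)                   # a silent run just ended
            run = 0
    if len(pauses) < 2:
        return float("nan")                      # too little data to judge
    gaps = np.asarray(pauses, dtype=float)
    return float(gaps.std() / gaps.mean())
```

A real forensic pipeline would layer many such signals; the point is only that statistical regularities a human speaker never produces can be measured.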

While Farid says he’s confident that his lab and other experts can uncover traces of AI’s handiwork when asked to investigate a particular clip, doing so at scale would be impossible. “On YouTube, for example, there’s 500 hours of video uploaded every minute of every day,” Farid said. “You can’t sit there at that pipe and analyze every single piece of content.”

And, he added, while a responsible journalist might reach out to verify a leak’s veracity, there’s no stopping someone from posting a fake on social media, where “you’ve got milliseconds before something goes viral.”

So the industry needs to implement “active detection” measures as well, Farid said, like embedding digital watermarks into media metadata. He would extend that imperative to devices that record and capture real media — an unedited mobile phone photo would essentially come with a certification stamp, verifying when and where it was recorded.
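As a rough sketch of what such a certification stamp could look like, the snippet below binds a photo’s hash to its capture time and place and signs the result. It is a toy that assumes a symmetric per-device key; real provenance standards such as C2PA use public-key certificates and standardized manifests instead.

```python
# Toy sketch of a capture-time "certification stamp": hash the media,
# attach when/where metadata, and sign the pair so any later edit is
# detectable. DEVICE_KEY is a hypothetical stand-in for real device keys.
import hashlib
import hmac
import json

DEVICE_KEY = b"hypothetical-per-device-secret"

def stamp(media: bytes, when: str, where: str) -> dict:
    """Produce a signed manifest binding metadata to the media's hash."""
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "captured_at": when,
        "captured_where": where,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media: bytes, manifest: dict) -> bool:
    """Recompute the signature; edited pixels or metadata won't verify."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim.get("sha256") != hashlib.sha256(media).hexdigest():
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

An unedited photo would verify; crop one pixel or change the timestamp and the check fails, which is what would let a journalist distinguish an authentic leak from a doctored one.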

Pledges from industry

That two-pronged approach seems to be the one favored by the nascent industry, with established firms like Adobe, Microsoft and the BBC leading the Coalition for Content Provenance and Authenticity (C2PA), which is developing technical standards for certifying the source of digital content. At a White House gathering of AI corporate leaders in July, the firms pledged to implement active detection protocols.

Anna Bulakh, head of ethics and partnerships at Respeecher, likened the ongoing development of intra-industry standards for generative AI to how websites migrated to more secure encrypted protocols, which begin web addresses with “https.” Describing herself as a “pragmatic optimist,” Bulakh said she’s hopeful AI firms can work with governments to mitigate the technology’s abuse.

But, she noted, not every startup in the AI field is as ethical as her own. In the tech space, for every Reddit trying to enforce community norms, there is a 4chan, where nihilism reigns. The same goes for generative AI, where many companies take few steps, if any, to combat malicious use, saying it’s up to users to behave responsibly. “They allow you to train [a] voice model and copy anyone’s voice,” Bulakh said. “We have to understand that our societies are really vulnerable to disinformation. Our societies are really vulnerable to fraud as well. Our societies are not that tech savvy.”

And even the firms that have generated standards of conduct for how their products are used haven’t been able to prevent users from breaking those rules. A recent Washington Post investigation found that OpenAI’s ChatGPT allowed users to generate personalized arguments for manipulating an individual’s political views, despite the platform’s attempt to ban such uses.

The inherent difficulties of industry self-regulation have led some AI companies to call for government intervention. But even among these firms there are disagreements about what that should look like. A recent op-ed by Rand Corp. CEO Jason Matheny urged imposing know-your-customer rules on chipmakers who provide AI firms with raw computing power, akin to how banks must flag clients’ fishy-looking transactions. Reality Defender, an AI detection firm, asked the Federal Election Commission to develop methods for scanning “all political materials by all parties and all prospective candidates” for deepfakes. 

“I would vote for more of an approach where you focus more on individual rights, privacy rights, and you fix that as a first level because AI is actually trained on that,” Bulakh said, pointing to the European Union’s General Data Protection Regulation as an example.

Hill action uncertain

In the United States, while government concern over AI has grown, action has not followed. The FEC is reviewing a petition from Public Citizen to ban politicians from using generative AI to deliberately misrepresent their opponents, but Democratic and Republican commissioners have both questioned whether they have the authority to do so. 

Congress, meanwhile, has held only a few hearings and introduced a smattering of bills on the topic. Enacting a comprehensive AI bill is one of Senate Majority Leader Charles E. Schumer’s top priorities, but even other senators who want to see AI regulated have questioned that approach, making swift passage through an already gridlocked Congress that much more unlikely. “We’re probably not going to have to ban a bunch of things that aren’t currently banned. We’re not going to have to pass a lot of major legislation to deal with new threats,” Sen. Todd Young, R-Ind., recently told Politico.

Although AI experts like Farid should be able to detect deepfakes in the upcoming elections, technology alone won’t be able to prove, definitively, that a media clip is authentic, only that it shows no signs of artificiality. So, even if some damning audio clip emerges and it truly is genuine, the politicians on tape will have plausible deniability.

Farid pointed to how Trump, after initially apologizing for his remarks in the Access Hollywood tape released before the 2016 election and excusing them as “locker room banter,” reportedly started claiming it wasn’t real in 2017. “So think about that tape being released today. Is there any scenario where he and his followers and whoever is on his side of the political aisle doesn’t say, ‘Oh, it’s fake?’” Farid said.

The ability to prove an audio or video clip is AI-generated after it goes viral may not do much to win over true believers to actual truth. Nearly 70 percent of Republicans believe Biden did not win the 2020 presidential election, despite no evidence of widespread fraud and despite the sworn testimony of Republican election officials; that belief is a sign of what Farid said is “the creation of alternate realities for the different parties.”

Onto this already raging wildfire, AI will throw a supertanker’s load of fuel. “It’s a combination of multiple technologies coming together, right? It’s the generative part; it’s the distribution channels. It’s the already highly polarized, highly political landscape; it’s the politicians having convinced you that you can’t trust the media, you can’t trust government, you can’t trust academics,” Farid said. “When you put all those pieces together, I think it’s pretty messy.”
