Roll Call
Jim Saksa

Federal election laws not ready for deepfakes, experts warn

The promise of any technology is that it makes work easier. Levers make lifting easier, calculators make math easier, and artificial intelligence makes creating realistic audio and videos easier. And easier means cheaper.

That worries election law experts, since AI makes lying so cheap and easy that just about anyone can do it. There’s little in existing law to prevent AI-powered bad actors, including meddling foreign powers, from unleashing a torrent of campaign disinformation on an already saturated political landscape, they warn. 

“AI is to some extent just exacerbating problems that existed already,” said Daniel Weiner, director of the Brennan Center’s Elections and Government Program.

While most commercial advertisers are prohibited from telling bald-faced lies, there’s nothing akin to truth-in-advertising laws for campaign ads because of the First Amendment’s heightened protection for political speech. And federal election laws do not currently regulate AI explicitly. So candidates, who already are allowed to fib with nothing but the electorate’s potential opprobrium to stop them, can deploy “deepfakes” to make those falsehoods more convincing and prevalent.

In most contexts, creating a deepfake of a person in order to make them look bad would expose you to a defamation lawsuit, but it’s harder for public figures — like politicians — to win such cases. They must prove not only that you spread a reputation-besmirching lie, but that you did so recklessly or intentionally. And even if a politician could confidently file such a lawsuit, it would take months or years to adjudicate.

AI will also widen the existing gaps in campaign disclosure laws, Weiner said, like how many online political ads don’t include the “I approve this ad” disclaimers you see on TV, or how so-called “dark money” groups can spend limitless amounts without disclosing their donors. 

It doesn’t help that the Federal Election Commission has an anemic track record of punishing bad actors. “Weak enforcement is a problem that has been endemic to campaign finance law at the federal level, extending far beyond this issue,” Weiner said.

Some in the political industrial complex have sworn off using AI for mischief, and the American Association of Political Consultants released a policy statement last year forbidding its members from using deepfakes to mislead the public. But others have already realized its potential, as when a rogue campaign consultant allegedly hired a New Orleans magician to create a deepfake of President Joe Biden urging New Hampshire voters to skip the January primary.

“It’s like acknowledging gravity to say that political operatives will use the tools available, if they’re legal, without regard to ethical consideration,” said Robert Weissman, president of Public Citizen. “But I think most of them won’t if it’s illegal, irrespective of how effective the enforcement might be.”

‘Foolish to bet on Congress’

Despite the lack of AI-focused federal regulations, the most egregious examples of its misuse, like the fake Biden robocall in New Hampshire, would likely be covered by a general ban on fraudulently misrepresenting campaign authority. They could also run afoul of anti-voter suppression laws found in most states. But less extreme exploits, like a synthetic Donald Trump slurring his words as he brags about the more unpopular parts of his record, would probably be legal, even if they’re extremely misleading.

“It’s the Wild West right now,” Weissman said.

After initially being rebuffed, Public Citizen successfully petitioned the FEC last summer to consider updating its rules to make clear that deliberately using AI-generated content to misrepresent a candidate or political party’s positions is illegal. But the commission has moved slowly on the matter. Weissman doesn’t expect to see a new rule in place before November’s elections.

Even if the FEC does update its rules, they would apply only to campaigns and political parties — outside actors wouldn’t be covered, Weissman said. And under current laws, online publishers, like social media firms, face little liability for spreading lies. “A comprehensive [legislative] approach probably has to both impose some obligations on folks who create and disseminate deepfakes, but also on the platforms,” said Weiner.

While voters broadly support regulating synthetic media — 83 percent of voters support requiring disclaimers on AI-generated media used to influence an election, according to a recent Data for Progress poll — Congress seems unlikely to pass anything soon. 

A few proposals have attracted bipartisan interest. Sen. Amy Klobuchar introduced one bill that would ban deceptive AI-generated audio or visual media in federal campaigns, which attracted Republican Sens. Josh Hawley of Missouri, Susan Collins of Maine and Pete Ricketts of Nebraska as co-sponsors. And she introduced another with South Carolina Republican Sen. Lindsey Graham that would extend the FEC’s political advertising rules to the internet and require platforms to make “reasonable efforts” to stop foreign actors from placing ads.

“AI-generated deepfakes of candidates and public officials are dangerous to our democracy,” the Minnesota Democrat said in a statement. “With the next election just around the corner, the effort to put in necessary guardrails on AI is a priority in the Senate.”

The cross-party concern gives advocates some encouragement, but not much. “It would be foolish to bet on Congress passing that or any other piece of legislation,” said Weissman.

Klobuchar also introduced a bill to require political ads to include a disclaimer whenever they use generative AI and, along with every other Democrat and independent in the Senate, co-sponsored another that would force dark money groups to disclose their donors. Those proposals have failed to attract any GOP support so far.  

Art of persuasion

While legal experts generally agree that expanding the FEC’s ad disclosure requirements to AI-generated media would likely survive First Amendment challenges, Stanford Law professor Nathaniel Persily thinks nailing down the legal details could be tricky. Persily noted that AI-powered editing software comes standard on new smartphones these days, making it harder to define when digital manipulation crosses from mere airbrushing into deepfaking.

Some states are moving to fill the federal policy vacuum; a Public Citizen tracker shows that lawmakers in 43 states have introduced at least 70 bills regulating the use of AI in campaigns, of which seven have been enacted. Some require notices when AI-generated media is used in political ads, while others make using deepfakes to harm a candidate a criminal misdemeanor. 

Still, most Americans will live through their first AI-powered election with little to no regulation in place. How much that will matter is unclear, but AI has already arguably swayed at least one national election — in Slovakia.

Days before parliamentary elections last fall, deepfake audio purporting to capture the leader of a large pro-European party discussing how to rig the election was posted to Facebook and other social media platforms. That party, which had been leading in some polls, ended up finishing second to one that campaigned on ending military aid to Ukraine.

Similarly, researchers say the Chinese Communist Party may have used generative AI to goose its ongoing disinformation efforts in Taiwan ahead of January’s presidential election.

The question, said Persily, is how, and by how much, AI might amplify the current cacophony of disinformation in the United States. The reason to fear synthetic media is either because “we think it’s going to be uniquely persuasive,” or because “it has an ability to scale and reach a certain audience that non-synthetic media does not.”

Persily acknowledges that we’re predisposed to assume, at least at first blush, that an audio or video clip is real. But he doesn’t think deepfakes — even once they stop accidentally revealing themselves by giving people extra limbs and other errors — will necessarily do a better job convincing anyone than old-school methods of media manipulation. 

But they may spread more widely simply because of the technology’s novelty. “There’s not much of a difference in terms of the persuasive impact of so-called deepfakes and shallow fakes,” he said. “Especially if it takes a few days to debunk the fakes, they will still have reached a level of virality and an audience, and that is where the damage will already have been done.”

Other researchers think AI will be more persuasive, though, arguing it’ll enable campaigns to tailor their pitches — factual or otherwise — more precisely and create “a highly scalable ‘manipulation machine’ that targets individuals based on their unique vulnerabilities without requiring human input.”

Persily admits he takes “the pessimistic view” on what can be done to combat disinformation. While he supports the disclosure and disclaimer requirements proposed by Public Citizen and the Brennan Center, along with various efforts to get social media platforms to limit the impact of disinformation generally, he said it’s just nipping at the edges. 

“The greatest division in the American public when it comes to disinformation is not between those who believe the truth and those who believe lies, but it’s between those who believe the truth is relevant, and those who don’t,” he said.

