Fortune
Sage Lazzaro

Lawmakers declare war on deepfakes that threaten to upend this year's presidential election

(Credit: Nicolas Economou—NurPhoto/Getty Images)

Hello and welcome to Eye on AI. 

What do Taylor Swift and the 2024 election have in common? No, she’s not running for president. Rather, both were the focus of a deluge of deepfake-related headlines this week. 

The singer is the latest celebrity to have her likeness replicated using AI for use in faux ads on social media, in this case for Le Creuset cookware. At the same time, lawmakers introduced new bills aimed at reining in deepfakes, and companies announced new technologies to make it easier to detect them.

In the U.S., the states have so far taken the lead on tackling election risks around deepfakes, or media content, usually video, that's been manipulated using AI to falsely depict a person or event. That push continued this week when South Carolina lawmakers introduced legislation that would ban the distribution of deepfakes of candidates within 90 days of an election, joining Washington, Minnesota, and Michigan, which passed similar election-targeted bills last year. (Seven other states introduced such bills in 2023 but failed to advance them.) 

These types of laws aren't completely new; Texas, California, and Virginia were actually the first states to enact them, back in 2019. But deepfakes have become far more convincing and easier to create since then, and many questions remain, including how to actually enforce these laws, how the rapid pace of AI development might affect them, and what happens when a particularly damaging deepfake proliferates anyway. 

South Carolina's 90-day window, which several other states match, means anything older is fair game. And while these laws would punish anyone who circulates a deepfake meant to influence an election with a fine and possible jail time, that punishment would come only after such videos are already widely shared, and potentially believed by millions of people. Disinformation spreads like wildfire on social media, and the platforms aren't going to save the day. Facebook and Instagram parent Meta previously said it will require political advertisers on its platforms to disclose if they used AI, but this doesn't apply to posts shared by everyday users. 

In Congress, several proposals to regulate the use of AI-created deepfakes in political campaigns have stalled, though the U.S. House yesterday unveiled the No AI FRAUD Act, aimed primarily at protecting musical artists and actors from AI deepfakes and voice clones, a move heralded by the screen actors' union (and likely helpful to Swift). Federal agencies, however, are paying closer attention. FBI and CISA officials spoke about AI deepfakes and election integrity this week at a CNBC event, describing how their approach is to stop the bad actors, not the content. 

“We’re not the truth police. We don’t aspire to be,” said FBI Director Christopher Wray, who went on to stress the need to partner with private-sector companies to improve detection. 

AI detection tools have so far proven mostly unreliable, but there's still hope for using AI to combat AI. McAfee this week announced a new technology it says is 90% effective at detecting maliciously altered audio in videos. Fox, together with Polygon Labs, also unveiled a blockchain protocol this week that lets media companies watermark their content as authentic, helping consumers know what's real and what's fake. Other organizations, from Intel to Truepic, have long been working on the problem as well, though there's no telling whether deepfake detection will ever truly be solved. It may become a cat-and-mouse game, similar to cybersecurity, where advances in technology keep malware detectors and the like only a step ahead of bad actors. 

Taken together, this is all still just the tip of the deepfake iceberg. AI deepfake technologies are also being used for everything from kidnapping scams to targeting women with non-consensual pornography and other disturbing and harmful content. But as far as elections go, the stakes are particularly high in 2024, as nearly half the global population, not just Americans, will be heading to the polls for various candidates and causes. 

And with that, here’s the rest of today’s AI news. 

Sage Lazzaro

sage.lazzaro@consultant.fortune.com

sagelazzaro.com
