Fortune
Mustafa Suleyman

Inflection AI co-founder Mustafa Suleyman: 'Ban the use of AI in elections–right now'

(Credit: David Paul Morris—Bloomberg/Getty Images)

2024 could be one of the most pivotal years in American history. As the founder of an AI company, I must warn that it shouldn’t be because of AI. As I argue in my new book, The Coming Wave: Technology, Power & the 21st Century’s Greatest Dilemma, all technology is political. From the printing press to modern weaponry, satellite communications to databases, states and technologies are intimately tied together.

While it may not come with an explicit political purpose, technology is a form of power. And from the earliest tools to today’s world of social media and generative AI, it comes with major social and political consequences.

Over the course of history, too few technologists have really grappled with this fact. It's only long after sprawling chains of unintended consequences ripple out across society (take social media's role in recent elections, for instance) that they truly wake up. Now we are facing a coming wave of transformative technology led by AI. It will have seismic consequences, including for the very concept of the nation-state, but first we will feel AI's impact at the next election. We need to be one step ahead. AI is developing fast–and we are poorly prepared for the impact of this new wave as we hurtle toward a key presidential election.

My fear is that AI will undermine the information space with deepfakes and targeted, adaptive misinformation that can emotionally manipulate and convince even savvy voters and consumers. What happens when everyone has the power to create and broadcast material with incredible levels of realism? From a politician speaking multiple local dialects in India to a series of doctored videos of Congress members in the U.S., the first real-life examples are already out there. And these examples occurred before the ability to generate near-perfect deepfakes–whether text, images, video, or audio–became as easy as typing a query into Google.

Imagine that three days before an election a video of a presidential candidate using a racist slur spreads on social media. The campaign press office strenuously denies it, but everyone knows what they've seen. Outrage seethes around the country. Polls nosedive. Swing states suddenly shift toward the opponent, who, against all expectations, wins. A new administration takes charge. But the video is a deepfake, one so sophisticated it evades even the best fake-detecting neural networks. A grainy, authentic-looking video or audio recording of a politician defaming a voting bloc can be engaging and convincing. We can no longer simply trust our eyes and ears.

Over the next few years, these technologies will have an even wider impact, fundamentally reshaping the balance of power, shoring up some firms and nations while completely undermining others, and rewiring labor markets and security infrastructures. But ahead of these sea changes is the flood of disinformation around elections. The problem here lies not so much with extreme cases as with subtle, nuanced, and highly plausible scenarios being exaggerated and distorted. Moreover, generative AI tools, for all their undoubted benefits, could be weaponized by hostile actors, including rogue states, introducing new hacking capabilities and systemic vulnerabilities into the heart of the political process.

Facing these threats, we need to shore up the state and protect society. But first of all, we must safeguard the electoral process, starting right now. Free and fair elections are the foundation of American society, and next year's are going to be the first of the generative AI era. We're already seeing hints of what AI might do to democracy, with fake information deliberately produced to twist results. This is happening on American soil and already influencing outcomes. In response, we simply must ensure the integrity of the system is maintained, and this means explicitly and promptly banning the use of AI and chatbots in electioneering. These systems must be kept out of elections, starting with 2024. No ifs or buts. The democratic process is too precious and too vulnerable for a technology as new and powerful as AI.

In recent months, as the tide of AI has started to come in, calls for regulation have grown from all quarters, including tech companies themselves. Everyone agrees: Regulating AI is essential. But so far, there hasn't been sufficient clarity or consensus on where and how to start. Instead, there's the usual morass of ideas and agendas. Calling for regulation is one thing; getting into the details with specifics is quite another. Here, however, is a simple, clear, and unarguable case for taking immediate action. It's vital that those of us working on this technology state unambiguously what needs to happen next: banning the use of AI in elections.

Legislating against AI-driven electioneering would be one concrete step toward ameliorating the spiraling political consequences of the coming wave. And it shouldn't be the last.

Mustafa Suleyman is the co-founder and CEO of Inflection AI. In 2010 he co-founded DeepMind, which was acquired by Google. He is the author of The Coming Wave.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
