Fortune
David Meyer

What’s different—and what isn’t—about the new A.I. ‘extinction’ open letter

(Credit: JOEL SAGET—AFP/Getty Images)

It’s now been a couple of months since That A.I. Open Letter came out—you know, the one signed by Elon Musk and Steve Wozniak and a bunch of other tech luminaries, who warned about “potentially catastrophic effects on society” and therefore called for a six-month moratorium on the development of next-gen systems. 

Well, here’s another one, this time courtesy of the Center for AI Safety—and this time it’s so brief that the following is not a sample quote but the whole thing: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This time, the signatories include many leading players who sat out the previous open letter. Top of the list is OpenAI CEO Sam Altman, who criticized the earlier missive for lacking sufficient "technical nuance about where we need the pause." A 22-word statement hardly offers more nuance, but then again it drops that contentious pause business altogether, so there's that.

We also now have Google DeepMind CEO Demis Hassabis, Anthropic president Daniela Amodei, and “godfather of A.I.” Geoffrey Hinton, who we now know was holding back from criticizing any company while he was still in Google’s employ (he quit last month before embarking on an A.I.-threatens-us-all doom tour). Microsoft CTO Kevin Scott is in there. No Musk or Woz, though, and no one from Meta—as a press release about the statement notes pointedly.

The signatories of the new statement also include a bunch of big names from outside the tech sphere, such as Harvard constitutional law guru Laurence Tribe, former Estonian President Kersti Kaljulaid, and prominent environmentalist Bill McKibben. 

I asked McKibben why he’d taken this stance, given the risk of taking oxygen away from the climate emergency cause. “Having watched the world ignore climate warnings 35 years ago, I’m always hopeful that we might actually address one of these challenges in timely fashion,” he said. 

So, what about that brevity? According to Center for AI Safety director Dan Hendrycks, longer statements can result in the core message being lost—and “people might object to small details.” As for the lack of policy prescriptions, Hendrycks told my colleague Jeremy Kahn: “I hope that this inspires additional thought on policies that could actually reduce these risks.”

The lack of detail was no doubt a big draw for getting the likes of Altman and Amodei on board—it bigs up the perceived power of the technology, while avoiding any concrete actions that could limit the A.I. leaders’ future options. 

But even that one threadbare sentence still encapsulates one of the most heavily criticized elements of the earlier, longer open letter: the direction of attention toward potential long-term risks, and away from immediate, demonstrable risks such as the spread of propaganda and the perpetuation of biases.

That’s not to say the “risk of extinction from A.I.” doesn’t exist. Maybe it does, though I remain skeptical. But A.I.’s risks don’t need to be existential to qualify as being of a “societal scale.” Sure, we know nuclear war could destroy civilization in a flash, but we also now know that social media frays both society’s bonds and the mental health of its young. Personally, I’m a lot more worried about A.I. having a similarly insidious effect on society—and this statement doesn’t even go there.

"We should be concerned by the real harms that corps and the people who make them up are doing in the name of 'A.I.', not abt Skynet," tweeted the prominent computational linguist Emily Bender, who has long taken this view of such calls.

There’s another issue with the statement, too—it seems likely to feed into what’s becoming a moral panic about A.I.’s supposedly existential threat. As I’ve written before, moral panics rarely make for good policy.

Kriti Sharma, who is chief product officer for legal tech at Thomson Reuters and also founder of the AI for Good organization, told me the statement and its signatories “are right to recognize the potential risks presented by A.I. so that we may collectively take appropriate steps to mitigate them…[and] engender trust and accuracy.” 

However, she added: “We need to look further than the risks and recognize that A.I. also offers enormous potential for society such as helping to facilitate access to justice or opening up access to health services, particularly among underserved communities. As we move forward, industry and government need to converge to put in place a framework which balances risk mitigation while also unlocking the opportunities A.I. offers in a safe and transparent way.”

Maybe nuance isn’t such a bad thing after all.

More news below.

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

David Meyer

Data Sheet’s daily news section was written and curated by Andrea Guzman.
