Salon
Science
Rae Hodge

Is AI really this big of a threat?

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of "nuclear war" and human "extinction." Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis. 

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement from the non-profit Center for AI Safety said. 

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls' escalating use of splashy language — and those moguls' hopes for an elite global AI governance board. 

TechCrunch's Natasha Lomas, whose coverage has been steeped in AI, immediately unraveled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

"Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now," Lomas wrote

"Instead of the statement calling for a development pause, which would risk freezing OpenAI's lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape 'democratic processes for steering AI,'" Lomas added.

Other field experts promptly shot back at the tech execs' statement. Retired nuclear scientists, AI ethicists, tenured tech writers and human extinction scholars all called the industrialists on the carpet for their use of inflammatory language.

"This is a 'look at me' by software people. The claim that AI poses a risk of extinction of the human race is BS," retired nuclear scientist Cheryl Rofer said in a Tuesday tweet. "We have real, existing risks: global warming and nuclear weapons." 

Émile Torres, a historian of human extinction (and Salon contributor), was quick to point out the hypocrisy of tech giants' role in manufacturing an already unethical AI development environment. 

"You'll never see these people signing a document like this about prioritizing the mitigation of harms, some profound, already being caused by AI companies like OpenAI," Torres said is a series of tweets. "No, those harms are just 'mere ripples' and 'small missteps for mankind' in the grand cosmic scheme of thing."

https://twitter.com/xriskology/status/1663558607798145030 

"A few weeks ago [Altman] was pontificating about leaving the EU market due to proposed training data transparency requirements. Do not take these statements seriously," said tech writer Robert Bateman in a tweet. 

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times' Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts. 

"[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies," Merchant wrote. "That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they'd better climb aboard."

Government tech contracts can be just as lucrative as enterprise contracts for a burgeoning company at the head of a digital revolution — as Microsoft would know, with its financial foundations rooted in mass public-sector deployment. It's too early to say whether government contracts may be a target market for a company like OpenAI, as they are for Clearview AI, the controversial facial recognition company whose software is often used by law enforcement agencies to monitor protests. But with Microsoft's latest announcement that some OpenAI features will be integrated into certain upcoming Windows systems — and the recent successes of Altman's Congressional charm offensive — lawmakers have reason to pause and consider the gravity of the tech executives' wording here.

Fear, after all, is a powerful sales tool. 

 