Radio France Internationale

Artificial intelligence may lead humanity to extinction, industry leaders warn

Google, Microsoft and Alphabet logos and the words "AI Artificial Intelligence" are seen in this illustration taken May 4, 2023. REUTERS - DADO RUVIC

Top artificial intelligence (AI) executives, including OpenAI CEO Sam Altman, on Tuesday joined experts and professors in raising the "risk of extinction from AI", which they urged policymakers to treat on a par with the risks posed by pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS).

As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft and Google.

Also among them were Geoffrey Hinton and Yoshua Bengio – two of the three so-called "godfathers of AI" who received the 2018 Turing Award for their work on deep learning – and professors from institutions ranging from Harvard to China's Tsinghua University.

A statement from CAIS singled out Meta, where the third godfather of AI, Yann LeCun, works, for not signing the letter.

"We asked many Meta employees to sign," said CAIS director Dan Hendrycks. Meta did not immediately respond to requests for comment.

Regulation talks

The letter coincided with the US-EU Trade and Technology Council meeting in Sweden where politicians are expected to talk about regulating AI.

Elon Musk and a group of AI experts and industry executives had been the first to publicly cite such potential risks to society, in April.

"We've extended an invitation (to Musk), and hopefully he’ll sign it this week," Hendrycks said.

Recent developments in AI have produced tools that supporters say can be used in applications from medical diagnostics to writing legal briefs, but they have also sparked fears that the technology could enable privacy violations, power misinformation campaigns, and lead to "smart machines" thinking for themselves.

The warning comes two months after the nonprofit Future of Life Institute (FLI) issued a similar open letter, signed by Musk and hundreds more, demanding an urgent pause in advanced AI research, citing risks to humanity.

"Our letter mainstreamed pausing; this mainstreams extinction," said FLI president Max Tegmark, who also signed the more recent letter. "Now a constructive open conversation can finally start."

AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change.

Last week, OpenAI CEO Sam Altman referred to the EU's AI Act – the bloc's first effort to create AI regulation – as over-regulation and threatened to leave Europe. He reversed his stance within days after criticism from politicians.

Altman has become the face of AI since his company's ChatGPT chatbot took the world by storm. European Commission President Ursula von der Leyen will meet Altman on Thursday, and EU industry chief Thierry Breton will meet him in San Francisco next month.

(Reuters)
