AI firms ‘unprepared’ for dangers of building human-level systems, report warns

Dan Milmo, Global technology editor

The safety index assessed leading AI developers across areas including current harms and existential risk. Photograph: Master/Getty Images

Artificial intelligence companies are “fundamentally unprepared” for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group.

The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for “existential safety planning”.

One of the five reviewers of the FLI’s report said that, despite aiming to develop artificial general intelligence (AGI), none of the companies scrutinised had “anything like a coherent, actionable plan” to ensure the systems remained safe and controllable.

AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI “benefits all of humanity”. Safety campaigners have warned that AGI could pose an existential threat by evading human control and triggering a catastrophic event.

The FLI’s report said: “The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in existential safety planning.”

The index evaluates seven AI developers – Google DeepMind, OpenAI, Anthropic, Meta, xAI and China’s Zhipu AI and DeepSeek – across six areas including “current harms” and “existential safety”.

Anthropic received the highest overall safety score with a C+, followed by OpenAI with a C and Google DeepMind with a C-.

The FLI is a US-based non-profit that campaigns for safer use of cutting-edge technology and is able to operate independently due to an “unconditional” donation from crypto entrepreneur Vitalik Buterin.

SaferAI, another safety-focused non-profit, also released a report on Thursday warning that advanced AI companies have “weak to very weak risk management practices” and labelled their current approach “unacceptable”.

The FLI safety grades were assigned and reviewed by a panel of AI experts, including British computer scientist Stuart Russell, and Sneha Revanur, founder of AI regulation campaign group Encode Justice.

Max Tegmark, a co-founder of FLI and a professor at Massachusetts Institute of Technology, said it was “pretty jarring” that cutting-edge AI firms were aiming to build super-intelligent systems without publishing plans to deal with the consequences.

He said: “It’s as if someone is building a gigantic nuclear power plant in New York City and it is going to open next week – but there is no plan to prevent it having a meltdown.”

Tegmark said the technology was continuing to outpace expectations, citing a previously held belief that experts would have decades to address the challenges of AGI. “Now the companies themselves are saying it’s a few years away,” he said.

He added that progress in AI capabilities had been “remarkable” since the global AI summit in Paris in February, with new models such as xAI’s Grok 4, Google’s Gemini 2.5 and its video generator Veo 3 all showing improvements on their forebears.

A Google DeepMind spokesperson said the reports did not take into account “all of Google DeepMind’s AI safety efforts”. They added: “Our comprehensive approach to AI safety and security extends well beyond what’s captured.”

OpenAI, Anthropic, Meta, xAI, Zhipu AI and DeepSeek have also been approached for comment.
