The Independent UK
Technology
Dan Haygarth

AI ‘could be last technology humanity ever builds’, expert warns as ‘doom timeline’ is revised

An expert has warned that artificial intelligence (AI) could be the “last technology humanity ever builds” as a so-called “doom timeline” predicting the point at which it could overtake humankind has been revised.

The AI 2027 research project, released in April 2025, presented a predicted scenario in which AI develops a “superintelligence” capable of “fully autonomous coding”, allowing it to make itself more powerful and, in one outcome, eventually destroy humanity.

Research group AI Futures published the study, which was built around the prediction that 2027 was the most likely year that AI could automate coding and take control of its own progression.

According to the model, this development could lead AI to outperform humans in the majority of cognitive tasks.

It suggests this would allow AI to develop “artificial superintelligence” later in 2027, accelerating its own development and ultimately producing AI so advanced that it could overcome and dominate humankind.

However, AI Futures has revised this timeline in a new update published at the end of December. This new study predicted it would take longer for AI to reach key capability milestones, including automated coding and superintelligence.

Explaining the revision in a post on social media platform X, project leader Daniel Kokotajlo said: “Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still.”

He said that his prediction would now be “around 2030” but added that there remains a lot of uncertainty about the forecast timeline.

The initial 2027 project included a scenario in which, by 2030, an all-powerful AI model would have organised itself around the goal of making the world safe for itself, rather than for people, and would “eliminate potential threats” in doing so.

The model’s more drastic possible outcome sees humans become obsolete, with humanity wiped out by AI at the start of the next decade to make room for infrastructure that benefits and powers it.

However, the group’s updated model now predicts that AI could develop the ability to code autonomously in the 2030s, rather than in 2027, and does not include a date for a potential AI domination of mankind.

Its new modelling predicts about a three-year delay to the process, with 2034 as its revised prediction for the development of superintelligence.

The AI 2027 model sparked debate among technology experts when it was first published. Gary Marcus, emeritus professor of neuroscience at New York University, compared it to a Netflix thriller, describing its narrative as “pure science fiction mumbo jumbo” in a post on Substack.

Dr Fazl Barez, a senior research fellow at the University of Oxford specialising in AI safety, interpretability and governance, told The Independent that he disagrees with the timeline laid out in the project but believes it sparks important discussions about mitigating the potential risks associated with AI.

He said: “Among experts, nobody really disagrees that if we don't figure out alignment and we don't figure out how to make the system safe, it could potentially be the last technology humanity ever builds.

“How far we are from that and how likely that is to happen is an open question.”

AI is moving at the ‘speed of light’, according to Dr Barez (Getty/iStock)

Dr Barez, who leads research initiatives within the AI Governance Initiative, said that AI’s capabilities are currently advancing much faster than safety measures and mitigations, describing the technology’s acceleration as moving at the “speed of light”.

He added: “We haven't really figured out how to prevent either the bad consequences that come with it or the consequences that perpetuate and increase existing issues in society.

“A lot of the issues exist, it's just the use of technology could exacerbate the rate at which it can happen now.”

Though Dr Barez believes it is hard to put a timeline on the development of AI’s capabilities, he said work must be undertaken to ensure the technology remains in the service of human beings rather than superseding them.

He said: “With any technology, the real problem from my perspective is the gradual disempowerment of humanity, where we are losing our ability to think for ourselves, do things for ourselves as our reliance on this technology increases.

“Today, you might ask the system to draft an email for you, but maybe tomorrow it does everything, from drafting to writing it according to its own values, to sending it and monitoring your inbox going forward.

“The real question we should really ask ourselves is how we develop this technology such that it has the economic impact that we want, but it's always for the benefit of humanity.

“It's always there to serve our purposes and goals, like the previous technologies, and not one that replaces us.”
