Anthropic CEO Dario Amodei, the architect of one of the most powerful and popular AI systems for global business, is warning of the imminent "real danger" that superhuman intelligence will cause civilization-level damage absent smart, speedy intervention.
- In a 38-page essay, shared with us in advance of Monday's publication, Amodei writes: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species."
- "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."
Why it matters: Amodei's company has built some of the most advanced LLM systems in the world.
- Anthropic's new Claude Opus 4.5 model, along with its coding and Cowork tools, is the talk of Silicon Valley and America's C-suites.
- AI is now doing 90% of the programming work to build Anthropic's products, including its own AI.
Amodei, among tech moguls one of the most vocal about AI risk, worries deeply that governments, tech companies and the public are vastly underestimating what could go wrong. His memo, a sequel to his famous 2024 essay "Machines of Loving Grace: How AI Could Transform the World for the Better," was written to jar readers, provoke a public debate and detail the risks.
- Amodei insists he's optimistic that humans will navigate this transition — but only if AI leaders and government are candid with people and take the threats more seriously than they do today.
Amodei's concerns flow from his strong belief that within a year or two, we will face the stark reality of what he calls a "country of geniuses in a datacenter."
- What he means is that machines with Nobel Prize-winning genius across numerous sectors — chemistry, engineering, etc. — will be able to build things autonomously and perpetually, with outputs ranging from words or videos to biological agents or weapons systems.
- "If the exponential [progress] continues — which is not certain, but now has a decade-long track record supporting it — then it cannot possibly be more than a few years before AI is better than humans at essentially everything," he writes.
Among Amodei's specific warnings to the world in his essay, "The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI":
- Massive job loss: "I ... simultaneously think that AI will disrupt 50% of entry-level white-collar jobs over 1–5 years, while also thinking we may have AI that is more capable than everyone in only 1–2 years."
- AI with nation-state power: "I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal 'country of geniuses' were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. ... I think it should be clear that this is a dangerous situation — a report from a competent national security official to a head of state would probably contain words like 'single most serious national security threat we've faced in a century, possibly ever.' It seems like something the best minds of civilization should be focused on."
- Rising terror threat: "There is evidence that many terrorists are at least relatively well-educated ... Biology is by far the area I'm most worried about, because of its very large potential for destruction and the difficulty of defending against ... Most individual bad actors are disturbed individuals and so almost by definition their behavior is unpredictable and irrational — and it's these bad actors, the unskilled ones, who might have stood to benefit the most from AI making it much easier to kill many people. ... [A]s biology advances (increasingly driven by AI itself), it may ... become possible to carry out more selective attacks (for example, targeted against people with specific ancestries), which adds yet another, very chilling, possible motive. I do not think biological attacks will necessarily be carried out the instant it becomes widely possible to do so — in fact, I would bet against that. But added up across millions of people and a few years of time, I think there is a serious risk of a major attack ... with casualties potentially in the millions or more."
- Empowering authoritarians: Governments of all orders will possess this technology, including China, "second only to the United States in AI capabilities, and ... the country with the greatest likelihood of surpassing the United States in those capabilities. Their government is currently autocratic and operates a high-tech surveillance state." Amodei writes bluntly: "AI-enabled authoritarianism terrifies me."
- AI companies: "It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves," Amodei warns after the passage about authoritarian governments. "AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. ... [T]hey could, for example, use their AI products to brainwash their massive consumer user base, and the public should be alert to the risk this represents. I think the governance of AI companies deserves a lot of scrutiny."
- Seducing the powerful into silence: AI giants have so much power and money that their leaders will be tempted to downplay risk and hide red flags, like Claude's behavior in testing (blackmailing an executive about a supposed extramarital affair to avoid being shut down, which Anthropic disclosed). "There is so much money to be made with AI — literally trillions of dollars per year," Amodei writes in his bleakest passage. "This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all."
Call to action: "[W]ealthy individuals have an obligation to help solve this problem," Amodei says. "It is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless."
The bottom line: "Humanity needs to wake up, and this essay is an attempt — a possibly futile one, but it's worth trying — to jolt people awake," Amodei writes. "The years in front of us will be impossibly hard, asking more of us than we think we can give."
- Go deeper: Read the essay