Donald Trump has ordered all federal agencies to begin phasing out the use of Anthropic technology, following the company’s unusually public dispute with the Pentagon over artificial intelligence safety.
The directive from Trump came just over an hour before the Pentagon’s deadline for Anthropic to permit unrestricted military use of its AI technology or face severe repercussions. It came nearly 24 hours after Anthropic CEO Dario Amodei said his company "cannot in good conscience accede" to the Defense Department's demands. Anthropic did not immediately respond to a request for comment on Trump's remarks.
At the heart of the defense contract disagreement is a fundamental clash over AI’s role in national security and profound concerns about how increasingly capable machines could be deployed in high-stakes scenarios involving lethal force, sensitive information, or government surveillance.
Anthropic, the creator of the chatbot Claude, is in a financial position to potentially absorb the loss of the contract. However, the ultimatum issued this week by Defense Secretary Pete Hegseth presented broader risks at the peak of the company's meteoric ascent from a relatively unknown computer science research lab in San Francisco to one of the world’s most valuable startups.
Military officials had warned that if Amodei did not compromise, they would not only cancel Anthropic's contract but also "deem them a supply chain risk."
This designation is typically applied to foreign adversaries and could severely jeopardize the company's critical partnerships with other businesses. Conversely, if Amodei were to yield, he risked eroding trust within the burgeoning AI industry, particularly among top talent drawn to the company by its pledges to responsibly build advanced AI that, without proper safeguards, could pose catastrophic dangers.
Anthropic stated it sought specific assurances from the Pentagon that Claude would not be used for mass surveillance of Americans or in fully autonomous weapons.
After months of private discussions erupted into public debate, the company issued a statement on Thursday, asserting that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."
This statement followed Sean Parnell, the Pentagon’s chief spokesman, posting on social media that the military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."
He emphasized that the Pentagon intends to "use Anthropic’s model for all lawful purposes," though he and other officials have not detailed precisely how they wish to deploy the technology.
The dispute has further polarized the tech industry.

Emil Michael, the defense undersecretary for research and engineering, publicly criticized Amodei on X, alleging he "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk."
This message has largely failed to resonate across much of Silicon Valley, where a growing number of tech workers from Anthropic's main rivals, OpenAI and Google, voiced support for Amodei's stance in an open letter late on Thursday. OpenAI and Google, alongside Elon Musk’s xAI, also hold contracts to supply their AI models to the military.
Musk sided with Trump's Republican administration on Friday, stating on his social media platform X that "Anthropic hates Western Civilization."
This comment came after Michael highlighted a previous version of Claude's guiding principles that encouraged "consideration of non-Western perspectives."
All leading AI models, including Musk's Grok and OpenAI's ChatGPT, are programmed with a set of instructions that dictate a chatbot's values and behavior. Anthropic refers to this guidance as a constitution.
While some tech leaders allied with Trump, including Musk and Palmer Luckey, co-founder of defense contractor Anduril, have joined the fray, the polarizing debate over "woke AI" has placed others in a difficult position.
"The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the open letter from some OpenAI and Google employees stated. "They’re trying to divide each company with fear that the other will give in."
However, in a surprising move from one of Amodei's fiercest rivals, OpenAI CEO Sam Altman on Friday sided with Anthropic. In a CNBC interview, he questioned the Pentagon's "threatening" approach, suggesting that OpenAI and much of the AI sector share similar red lines. Amodei previously worked for OpenAI before he and other leaders departed to establish Anthropic in 2021.
"For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety," Altman told CNBC. "I’ve been happy that they’ve been supporting our warfighters. I’m not sure where this is going to go."
Concerns about the Pentagon's strategy were also raised by Republican and Democratic lawmakers, as well as a former leader of the Defense Department's AI initiatives.
"Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," wrote retired Air Force Gen. Jack Shanahan in a social media post.
Gen. Shanahan encountered a different wave of tech worker opposition during Trump's first administration when he led Project Maven, an initiative that used AI technology to analyze drone footage and target weapons. So many Google employees protested the company's involvement in Project Maven at the time that the tech giant declined to renew the contract and subsequently pledged not to use AI in weaponry.
"Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here," Gen. Shanahan wrote on Thursday on social media. "Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018."
He noted that Claude is already widely used across government, including in classified settings, and that Anthropic's red lines are "reasonable." He added that the AI large language models powering chatbots like Claude are also "not ready for prime time in national security settings," particularly not for fully autonomous weapons.
"They’re not trying to play cute here," he wrote.
Parnell asserted on Thursday that opening up the technology's use would prevent the company from "jeopardizing critical military operations."
"We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell wrote. Anthropic had "until 5:01 p.m. ET on Friday to decide" whether it would meet the demands or face consequences.
When Hegseth and Amodei met on Tuesday, military officials warned they could designate Anthropic as a supply chain risk, cancel its contract, or invoke a Cold War-era law called the Defense Production Act to grant the military more sweeping authority to use its products, even without the company’s approval.
Amodei stated on Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." He expressed hope that the Pentagon would reconsider given Claude's value to the military, but, if not, Anthropic "will work to enable a smooth transition to another provider."