Fortune
Sharon Goldman

OpenAI strikes a deal with the Pentagon, just hours after Trump orders end to Anthropic contracts

Sam Altman (Credit: Prakash Singh—Bloomberg/Getty Images)

This story has been updated to reflect events that took place after its initial publication.

Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement was emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. A few hours later, Altman announced in a post on X that OpenAI and the Pentagon had reached an agreement.

The meeting and the deal came at the end of a week where a conflict between Secretary of War Pete Hegseth and OpenAI rival Anthropic burst into public acrimony, ending with the apparent cancellation of Anthropic’s contracts with the Pentagon and with the federal government in general.

Altman told employees at the all-hands that the government is willing to let OpenAI build its own “safety stack”—that is, a layered system of technical, policy, and human controls that sit between a powerful AI model and real-world use—and that if the model refuses to perform a task, then the government would not force OpenAI to make it do so.

Altman said that OpenAI would retain control over how technical safeguards are implemented and which models are deployed and where, and would limit deployment to cloud environments rather than “edge systems.” (In a military context, edge systems are a category that could include aircraft and drones.) In what would be a major concession, Altman told employees that the government said it is willing to include OpenAI’s named “red lines” in the contract, such as not using AI to power autonomous weapons, conduct domestic mass surveillance, or engage in critical decision-making.

OpenAI and the Department of War did not immediately respond to requests for comment. In his post announcing the deal, Altman wrote, “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.” After describing the terms of the deal, he added, “We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.”

Anthropic had ‘upset’ the Pentagon

Sasha Baker, head of national security policy at OpenAI, and Katrina Mulligan, who leads national security for OpenAI for Government, also spoke at the OpenAI all-hands, according to the source. One of those officials said the relationship between Anthropic and the government had broken down because Anthropic cofounder and CEO Dario Amodei had offended Department of War leadership, including publishing blog posts that “the department got upset about.”

Anthropic, a company founded by people who left OpenAI over safety issues, had been the only large commercial AI maker whose models were approved for use at the Pentagon, in a deployment done through a partnership with Palantir. But Anthropic’s management and the Pentagon have been locked for several days in a dispute over limitations that Anthropic wanted to put on the use of its technology. Those limitations are essentially the same ones that Altman said the Pentagon would abide by if it used OpenAI’s technology.

Anthropic had refused Pentagon demands that it remove safeguards on its Claude model that restrict its use for domestic mass surveillance or fully autonomous weapons, even as defense officials insisted that AI models must be available for “all lawful purposes.”

The OpenAI all-hands came just after President Trump announced that the federal government will stop working with Anthropic, in a dramatic escalation of the government’s clash with the company over its AI models.

“I am directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” Trump said in a post on Truth Social. The Department of War and other agencies using Anthropic’s Claude models will have a six-month phase-out period, he said.

At the OpenAI all-hands, staff were told that the most challenging aspect of the deal for leadership was concern over foreign surveillance, and that there was a major worry about AI-driven surveillance threatening democracy, according to the source. However, company leaders also seemed to acknowledge that governments will conduct surveillance on foreign adversaries, citing arguments that national security officers “can’t do their jobs” without international surveillance capabilities. References were made to threat intelligence reports showing that China was already using AI models to target dissidents overseas.
