The U.S. military used Anthropic's Claude AI model during the operation to capture Venezuela's Nicolás Maduro, two sources with knowledge of the situation told Axios.
- Now, the blowback may threaten the company's business with the Pentagon.
The latest: After reports on the use of Claude in the raid, a senior administration official told Axios that the Pentagon would be reevaluating its partnership with Anthropic.
- "Anthropic asked whether their software was used for the raid to capture Maduro, which caused real concerns across the Department of War indicating that they might not approve if it was," the official said.
- "Any company that would jeopardize the operational success of our warfighters in the field is one we need to reevaluate our partnership with going forward."
- An Anthropic spokesperson denied that characterization: "Anthropic has not discussed the use of Claude for specific operations with the Department of War. We have also not discussed this with any industry partners, including Palantir, outside of routine discussions on strictly technical matters."
Why it matters: The episode highlights the tensions the major AI labs face, as they enter into business with the military while trying to maintain some limitations on how their tools are used.
Breaking it down: AI models can quickly process data in real time, a capability prized by the Pentagon given the chaotic environments in which military operations take place.
- Axios could not confirm the precise role that Claude played in the operation to capture Maduro. The military has used Claude in the past to analyze satellite imagery or intelligence. The sources said Claude was used during the active operation, not just in preparations for it.
- No Americans were killed in the raid. Cuba and Venezuela both said dozens of their soldiers and security personnel were killed.
Friction point: The Pentagon wants the AI giants to allow it to use their models in any scenario that complies with the law.
- Anthropic, which has positioned itself as the safety-first AI leader, is currently negotiating with the Pentagon over its terms of use. In particular, the company wants to ensure that its technology is not used for the mass surveillance of Americans or to operate fully autonomous weapons.
- The company is confident the military has complied in all cases with its existing usage policy, which imposes additional restrictions, a source familiar with those discussions told Axios.
What they're saying: "We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," the Anthropic spokesperson said.
- "Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."
- Defense Secretary Pete Hegseth has leaned into AI and said he wants to quickly integrate it into all aspects of the military's work, in part to stay ahead of China.
- Senior Pentagon officials have expressed frustration with Anthropic's posture on ensuring safeguards, a source familiar with those discussions said.
The big picture: Anthropic is one of several major model-makers that are working with the Pentagon in various capacities.
- OpenAI, Google and xAI have all reached deals for military users to access their models without many of the safeguards that apply to ordinary users. It's unclear if any other models were used during the Venezuela operation.
- But the military's most sensitive work — from weapons testing to comms during active operations — happens on classified systems. For now, only Anthropic's system is available on those classified platforms.
- Anthropic also has a partnership with Palantir, the AI software firm that has extensive Pentagon contracts, that allows Palantir to use Claude within its security products. It's not clear whether the use of Claude in the operation was tied to the Anthropic-Palantir partnership.
What to watch: Discussions are ongoing between the Pentagon and OpenAI, Google and xAI about allowing the use of their tools in classified systems. Anthropic and the Pentagon are also in discussions about potentially loosening the restrictions on Claude.
Editor's note: The headline and story were updated based on comments from a senior U.S. official.