Tom’s Hardware
Jowi Morales

The Pentagon announces AI deals with OpenAI, Google, Microsoft, Amazon, Nvidia, and more — LLMs to be deployed on classified Department of War networks ‘for lawful operational use’

The Pentagon.

The U.S. Department of War has announced deals with "seven of the world’s leading frontier artificial intelligence companies" for operational use. According to the Classified Networks AI Agreements press release, SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services will deploy their LLMs across the Pentagon’s classified networks “for lawful operational use.” The government said that this move will help turn the United States military into “an AI-first fighting force” and will help with “decision superiority across all domains of warfare.”

It seems that the AI tools these companies offer will, for now, be limited to data analysis, helping to make decision-making faster and easier as the U.S. faces complex situations. The tools are accessible via GenAi.mil, the Pentagon’s official AI platform, through the Department of War’s network, and are widely available to its personnel.

“Over 1.3 million Department personnel have used the platform, generating tens of millions of prompts and deploying hundreds of thousands of agents in only five months,” the Pentagon said. “Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days.”

Nevertheless, there have been concerns about the use of AI in military applications. Anthropic has famously refused to budge on the Department of War’s demand to lower its safeguards, saying that doing so could mean that its AI products could be used for mass surveillance or to create autonomous weapons. This move resulted in President Donald Trump banning the company from federal agencies, even going as far as designating it a supply chain risk for refusing to bow to the federal government’s demands.

While AI is certainly useful for distilling massive amounts of information and spotting patterns that humans can miss, it’s still not a 100% reliable tool for making decisions that could have a global impact. A researcher discovered this when they pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 against each other in a wargame, with 95% of the outcomes ending in a tactical nuclear strike. Three scenarios even ended in a strategic nuclear strike that would have ended the world.

But even though these AI tools are limited to analysis and support, with a human operator at the helm still responsible for every decision, there’s also the risk of automation bias. This is a person’s tendency to follow a computer’s suggestion despite contradictory information, especially as AI systems can process a ton of data so much more quickly than any human could. However, the data the AI is relying on could be false, erroneous, or misinterpreted, so it’s crucial that humans apply their intuition and experience before accepting AI suggestions at face value.

The U.S. military isn’t the only one experimenting with and deploying AI technologies in operational use. China, for example, has been showing off a 200-strong AI drone swarm that can be controlled by a single soldier, as well as ground-based drone wolfpacks armed with machine guns and grenade launchers for urban combat. We cannot stop these armed forces from deploying AI tools for intelligence-gathering, reconnaissance, and battlefield decision-making; we can only hope that they do not ignore safeguards and never give AI the trigger to any weapon.
