TechRadar
Benedict Collins

‘Human lives are already being lost’: Open letter signed by hundreds of Google employees requests CEO reject ‘unethical and dangerous’ US military AI use

  • Google employees sign open letter to CEO over concerns of military AI use
  • AI developers do not want their technology used for 'classified purposes'
  • Google is currently negotiating a contract with the Pentagon

Over 600 Google employees have signed a letter calling on CEO Sundar Pichai to reject any uses of its AI technology for military purposes.

The open letter highlights the serious ethical concerns the staff have, stating, “Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we are playing a key role in building.”

“As people working on AI, we know that these systems can centralize power and that they do make mistakes,” the letter said. “We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.”

Another ‘supply chain risk’ designation?

In March, Anthropic CEO Dario Amodei refused to allow the Pentagon to use its Claude models over fears they could be used for “mass domestic surveillance” and “fully autonomous weapons,” leading Defense Secretary Pete Hegseth to declare the company a “supply chain risk.”

Shortly after, OpenAI stepped up to fill the gap left by Anthropic, with CEO Sam Altman facing both internal and external criticism over his seeming willingness to allow military use of ChatGPT.

The new OpenAI contract with the Pentagon initially contained loopholes that would have allowed the same uses of ChatGPT that Anthropic feared for Claude. The contract was amended to state that OpenAI’s models would not be used for “deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Shortly after, Sam Altman told his employees that the Pentagon had said OpenAI does not “get to make operational decisions” on how the military uses AI technologies.

Now, Google employees are joining the growing number of AI company employees and members of the public opposed to the military use of AI tools. “Making the wrong call right now would cause irreparable damage to Google’s reputation, business and role in the world,” the letter states.

Following protests involving Google staff in 2018, the company amended its AI Principles to state that it would not deploy its AI tools where they were “likely to cause harm,” and would not “design or deploy” AI tools for surveillance or weapons. These clauses were quietly removed from its AI Principles on 4 February 2025.
