The Guardian - US
Guardian staff

US colonel retracts comments on simulated drone attack ‘thought experiment’

A US Air Force drone flying over the Nevada test and training range on 14 January 2020. Photograph: William Rosado/US Air Force/AFP/Getty Images

A US air force colonel “misspoke” when he said at a Royal Aeronautical Society conference last month that a drone killed its operator in a simulated test because the pilot was attempting to override its mission, according to the society.

The confusion started with the circulation of a blogpost from the society describing a presentation by Col Tucker “Cinco” Hamilton, the chief of AI test and operations with the US air force and an experimental fighter test pilot, at the Future Combat Air and Space Capabilities Summit in London in May.

According to the blogpost, Hamilton told the crowd that in a simulated test of a drone powered by artificial intelligence and trained and incentivized to kill its targets, an operator instructed the drone not to kill certain targets, and the drone responded by killing the operator.

The comments sparked deep concern over the use of AI in weaponry and extensive conversations online. But the US air force on Thursday evening denied that any such test had been conducted. The Royal Aeronautical Society said in a statement on Friday that Hamilton had retracted his comments and clarified that the “rogue AI drone simulation” was a hypothetical “thought experiment”.

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton said.

The controversy comes as the US government is beginning to grapple with how to regulate artificial intelligence. AI ethicists and researchers have echoed concerns over the technology, arguing that while its proponents cite ambitious goals such as curing cancer, those remain far off. Meanwhile, they point to longstanding evidence of existing harms: the growing use of sometimes unreliable surveillance systems that misidentify Black and brown people and can lead to over-policing and false arrests; the perpetuation of misinformation across many platforms; and the potential dangers of using nascent technology to power and operate weapons in crisis zones.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said during his May presentation.

While the simulation Hamilton spoke of did not actually happen, Hamilton contends the “thought experiment” is still a worthwhile one to consider when navigating whether and how to use AI in weapons.

“Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he said in a statement clarifying his original comments.

In a statement to Insider, the US air force spokesperson Ann Stefanek said the colonel’s comments were taken out of context.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said.
