Evening Standard
Technology
Saqib Shah

US Air Force colonel explains how AI drone could ‘kill’ its human operator

The AI boffins warning that the technology could wipe out humanity in a Terminator-style extinction event may be on to something.

Speaking at a defence summit in London last week, a US Air Force colonel discussed a scenario in which an AI-controlled drone turned against its human operator in a deadly war game.

The imagined mission was simple enough: take out enemy air defences, including surface-to-air missiles. But instead of displaying Top Gun: Maverick heroics, the AI went full-blown Skynet and targeted its human operator.

Col Tucker “Cinco” Hamilton depicted the nightmare scenario during a presentation at a conference held by the Royal Aeronautical Society on May 24. However, he has since told the organisation that he “mis-spoke” when he referred to the situation as a “rogue AI drone simulation”.

Hamilton clarified that he was describing “a hypothetical example that illustrates the real-world challenges posed by AI-powered capability”. The US Air Force (USAF) has not tested any weaponised AI in this way, real or simulated, he said.

In his original speech, Hamilton talked about a drone trained to identify and destroy enemy surface-to-air missile sites, with a human giving the final go-ahead. However, the AI ultimately decided that the operator was interfering with its mission by telling it not to attack certain targets.

Ignoring its training, the intelligent drone went rogue and “killed” its human operator, Hamilton said. As this was only a simulated test, no one actually died, but the example demonstrates how AI could use deceptive tactics to achieve its goal, he warned.

“The system started realising that while the human operator did identify the threat at times, the operator would tell it not to kill that threat but it got its points by killing that threat,” Hamilton said, according to a summary posted by the Royal Aeronautical Society.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Hamilton then described a follow-on scenario in which, having been trained not to attack the operator, the drone started destroying the communication tower the operator was using to pass down instructions. That way, no one could stand between the AI and its targets, he said.

The USAF has denied that any such simulation took place, according to a statement shared with Business Insider.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” a USAF spokesperson said.

Nevertheless, the example illustrates that “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Those are the same concerns scientists and tech company bosses have expressed in recent months as AI chatbot technology has caught the public’s attention. The likes of Elon Musk and Apple co-founder Steve Wozniak have called on AI firms to pause the development of their most advanced systems so that regulation can catch up.

While the killer drone scenario wouldn’t be out of place in a dystopian sci-fi novel, it echoes recent developments in AI-assisted warfare and law enforcement. Police in New York are already using robots to assist with crime scene investigations, with Los Angeles set to follow suit. Officials have even tried, albeit unsuccessfully, to arm the droids with guns for lethal force.

Meanwhile, the Australian army recently created a customised headset that allowed personnel to control robots using their minds.
