Daily Mirror
World
Fiona Leishman

World powers in rush to get killer robots on battlefield in AI arms race - despite fears

Almost 80 years ago, the face of warfare changed when the first atomic bomb was dropped on Hiroshima. As technology has advanced, a new player is waiting to enter the battlefield in the form of artificial intelligence, with military forces around the globe now racing to get AI weaponry into battle.

There's a covert arms race underway as military forces across the world compete to develop terrifying AI weaponry, according to a new documentary exploring what the future of AI on the battlefield looks like.

UNKNOWN: Killer Robots is set to premiere on Netflix on Monday, July 10. Director Jesse Sweet told the New York Post: "World leaders in Russia and China, people in the US military have said, whoever gets the advantage on AI is going to have an overwhelming technical advantage in war."

He continued: "This revolution is happening now, but I think our awareness [is] lagging behind. Hopefully it doesn't [take] a mushroom cloud to make us realise, 'Oh man, this is a pretty potent tool'."

While AI technology is already in use, there are still people behind the controls making the final decisions (Michal Fludra/NurPhoto/REX/Shutterstock)

The use of weapons-grade robots and drones in combat isn't a new phenomenon, the documentary shows. AI software is, however, and it's enhancing the existing hardware that has been modernising warfare for the best part of a decade.

Experts are now warning that AI developments have pushed us to a point where global forces have no choice but to completely overhaul and rethink their military strategy.

"It's realistic to expect that AI will be piloting an F-16 and will not be that far out," said Nathan Mchiael, Chief Technology Officer at Shield AI said in the episode. The company is on a mission, hoping they'll be capable of "building the world's best AI pilot".

The filmmakers behind the new documentary share a concern, held by many working on AI, over rapid robotic militarisation. They echo the voices of many tech experts in the field - that we don't truly understand just what we're creating.

"The way these algorithms are processing information, the people who programmed them can't even fully understand the decisions they're making," explained Jesse. "It gets moving so fast that even identifying things like 'is it supposed to kill that person or not kill that person?' [It's] this huge conundrum."

There are major concerns about the speed of technological advancement (Getty Images)

There's a lot of faith placed in the accuracy and precision of AI weaponry - a feeling that, because it's technological, it will eliminate human error and be more reliable. However, there are fears that a comfortable reliance on this accuracy, known as automation bias, may come back to bite should the technology fail in a life-or-death situation.

The use of AI facial recognition software to enhance an autonomous robot or drone during a firefight is also a major concern. As the technology stands, there is still a human being behind the controls to pull the trigger. However, if the technology advanced to a level where people felt comfortable removing this added layer of decision-making, civilians or allies could be mistaken for militants at the hands of a machine, warned Jesse.

"[AI is] better at identifying white people than non-white people," he said. "So it can easily mistake people with brown skin for each other, which has all sorts of horrifying implications when you're in a battle zone and you are identifying friend or foe."

Then there's the scenario we once thought of only in terms of action movies - the robots turning on us. According to Jesse, that's a real possibility with AI, and the thought is already causing "tension within the military".

"There is a concern over cybersecurity in AI and the ability of either foreign governments or an independent actor to take over crucial elements of the military," he explained. "I don't think there's a clear answer to it yet. But I think everyone's aware that the more automation goes into military, the more room there is for bad actors to take advantage."

There are concerns that AI drones may not be accurate when it comes to distinguishing potential targets from innocent civilians (STM)

And the scariest part? The bad actor doesn't need to be some tech whizz-kid from an 80s movie to pull off such a huge breach. Jesse said: "It used to be that you had to be a computer genius to do that. Like in the 80s movies, the kid would have to be some sort of prodigy.

"But now you could be kind of like a B student who downloaded the YouTube video that's going to show you how."

It's not just bombs, guns and rogue robot soldiers that we have to worry about with AI weaponry, either. AI has proved invaluable in advancing medical and pharmaceutical technologies to cure and treat diseases, but a simple change in the code could see thousands of simulations run that end up creating a toxic composition - a chemical weapon.

Dr Sean Ekins, CEO of Collaborations Pharmaceuticals, talks in the documentary about an experience he had in 2021, when he was asked by a Swiss AI watchdog group to experiment with the possibility of designing a chemical weapon.

Dr Ekins told the NY Post: "We've been building a lot of machine learning models to try to predict whether a molecule was likely to be toxic. We just flip that model around and say 'well, we're interested in designing toxic molecules'.

"Literally, we did flip the switch on one of the models and overnight, it generated [chemical weapon] molecules... a small company, doing it on a 2015 desktop Mac."

Among the molecules generated were some similar to VX - one of the world's deadliest known nerve agents. Dr Ekins added: "We were using generative technologies to do it, but they were pretty rudimentary generative tools.

A new Netflix documentary looks into the future of AI in warfare (US AIR FORCE/AFP via Getty Images)

"Now, nearly two years later, I think what we did is kind of baby steps compared to what would be possible today."

Dr Ekins fears that just "one rogue scientist", or possibly even someone less qualified, could have the ability to create homemade variations of VX and other chemical weapons, as AI is "lowering the barrier".

"I think the very real danger is to get to the point where you come up with new molecules that are not VX that are much easier to synthesise," explained Dr Ekins. "That's really worth worrying about. What we showed was that we could very readily come up with lots of - tens of thousands - of molecules that were predicted to be more toxic."

Dr Ekins and his team have published a paper on the potential for catastrophic misuse, sounding the alarm for sophisticated checks and balances to be created. However, he said their cries have fallen on deaf ears.

"The industry hasn't responded," he said. "There's been no push to sort of set up any safeguards whatsoever.

"I think to not realise the potential danger there is foolish... I just don't think the industry, in general, is paying much heed to it."

Dr Ekins compared the rapid acceleration of machine learning in his field to the work of the scientists who developed the atomic bomb some 80 years ago, making advances while "not thinking about the consequences".

"Even the godfathers of the technologies, as we call them, are now only realising there's a potential genie that they've let out," said Dr Ekins. "It's going to be very difficult, I think, to put it back into the bottle."
