Crikey
Comment
Alice Osborne

Israel is using AI in Gaza. Who (or what) is responsible for war crimes?

It came to light last year that the Israel Defence Forces (IDF) is using an artificial intelligence-based system called “Habsora” (Hebrew for “The Gospel”) to generate targets for its strikes in Gaza at an astonishing rate. The IDF says on its website that it uses “artificial intelligence systems” to produce targets “at a fast pace”.  

One of the most important rules of international humanitarian law (IHL, otherwise known as the law of armed conflict) is that “indiscriminate attacks”, meaning those that strike military objectives and civilians or civilian objects (like homes, schools and hospitals) without distinction, are absolutely prohibited. And although a civilian object can be transformed into a military objective, even then it can’t be targeted if the harm that would be caused is excessive in relation to the military advantage that would be gained. Breaking these rules can amount to a war crime. 

Sources from the Israeli intelligence community who spoke to the Israeli-Palestinian publication +972 Magazine (in partnership with Hebrew-language outlet Local Call) have alleged that, in some cases, the homes targeted on the basis of information provided by Habsora contain no military activity and no combatants. If that is true, the destruction of those homes and the deaths of the people who lived there may amount to war crimes. 

Another essential principle in IHL is the idea of command responsibility. This means a commander is criminally responsible for war crimes committed by their subordinates if the commander knew (or should have known) a war crime was imminent and didn’t put a stop to it. 

Applying the concept of command responsibility to actions taken, at least in part, on the basis of information provided by AI is tricky. The question is whether military commanders could hide behind AI-based decision-making systems to avoid command responsibility, and with it prosecution for potential war crimes.  

There’s a lot we don’t know about Habsora. We don’t know what data it is fed or the parameters it is given. We don’t know the underlying algorithm. We don’t know the true level of human involvement in the decision-making process. The IDF website says the system produces a “recommendation”, which is cross-checked against an “identification performed by a person”, with the goal of a “complete match” between the two. Ideally, this means that although the AI system suggests targets, no concrete action (such as an air strike) is taken without full human involvement and discretion. 

Although we can make educated guesses, it is very difficult to say how Habsora actually works in practice or whether it will throw up any issues of command responsibility. However, the existence of Habsora leads to a much larger discussion about the increasing use of AI in warfare. The technology behind AI systems, particularly those that use machine learning (where the AI system creates its own instructions based on the data it is “trained” with), is racing ahead of the laws that try to regulate it. 

Without effective regulation, we leave open the possibility that life-and-death decisions will be made by a machine, independently of human intervention and discretion. That, in turn, leaves open the possibility that commanders could say, “Well, I didn’t know that was going to happen, so it can’t be my fault”. Then you get into the snarly problem of asking who “fed” the AI system the commands, data and other prompts on which it based its decision. Is that person responsible? Or the person who told that person which commands, data and prompts to enter?

The closest international regulation we have at the moment is the 1980 Convention on Certain Conventional Weapons, which regulates weapons like anti-personnel mines, incendiary weapons and booby-traps (i.e. weapons that are at risk of striking military and civilian objects without distinction). It’s conceptually difficult to put AI and machine learning systems in the same basket as these types of weapons. 

We clearly need proper, specific regulation of weapons systems that use AI and machine learning, containing clear rules concerning how much decision-making we can outsource and explaining how people will be held responsible when their decision is based fully or partially on information produced by AI. Now, with the IDF’s public use of Habsora, we need these regulations sooner rather than later. 

At the end of the day, the rules of armed conflict only apply to humans. We can’t allow machines to get in the middle.
