Salon
Areeba Shah

Experts alarmed over AI "killer robots"

As the U.S. Department of Defense and military contractors move to integrate artificial intelligence into their technologies, the single greatest concern lies in incorporating AI into weapon systems, enabling them to operate autonomously and use lethal force without human intervention, a Public Citizen report warned last week.

The Pentagon’s policies fall short of barring the deployment of autonomous weapons, commonly known as “killer robots,” that are programmed to make their own decisions. Autonomous weapons “inherently dehumanize the people targeted and make it easier to tolerate widespread killing,” in violation of international human rights law, the report points out.

Yet American military contractors are developing autonomous weapons, and the introduction of AI into the Pentagon’s battlefield decision-making and weapons systems poses several risks.

It also raises questions about who bears accountability, pointed out Jessica Wolfendale, a professor of philosophy at Case Western Reserve University who studies the ethics of political violence with a focus on torture, terrorism, war, and punishment.

When autonomous weapons can make decisions or select targets without direct human input, there is a significant risk of mistaken target selection, Wolfendale said. In such a scenario, if an autonomous weapon mistakenly kills a civilian under the belief that they were a legitimate military target, the question of accountability arises. Depending on the nature of that mistake, it could be a war crime.

“Once you have some decision-making capacity located in the machine itself, it becomes much harder to say that it ought to be the humans at the top of the decision-making tree who are solely responsible,” Wolfendale said. “So there's an accountability gap that could arise that could lend itself to the situation where nobody is effectively held accountable.”

The Pentagon recognizes the risks and issued a DOD Directive in January 2023 outlining its policy on the development and use of autonomous and semi-autonomous functions in weapon systems. It states that the use of AI capabilities in autonomous or semi-autonomous weapon systems will be consistent with the DOD AI Ethical Principles.

The directive says that individuals who authorize or direct the use of, or operate, autonomous and semi-autonomous weapon systems will do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement. It also states that the DOD will take “deliberate steps to minimize unintended bias” in AI capabilities.

However, the policy has several shortcomings, including that the required senior review of autonomous weapon development and deployment can be waived “in cases of urgent military need,” according to a Human Rights Watch and Harvard Law School International Human Rights Clinic review of the policy.

The directive “constitutes an inadequate response to the serious ethical, legal, accountability, and security concerns and risks raised by autonomous weapons systems,” their review says.

It highlights that the DOD directive allows for international sales and transfers of autonomous weapons. The directive also solely applies to the DOD and does not include other U.S. government agencies such as the Central Intelligence Agency or U.S. Customs and Border Protection, which may also utilize autonomous weapons.

There isn’t a lot of guidance in the current legal framework that specifically addresses the issues related to autonomous weapons, Wolfendale said. But sometimes, the exhilarating aspects of technology “can blind us or mask the severity of the ethical issues” surrounding it.

“There’s a human tendency around technology to attribute moral values to technology that obviously just don't exist,” she said.

The focus on the ethics of deploying these systems “distracts” from the fact that humans remain in control of the “politics of dehumanization that legitimates war and killing, and the decision to wage war itself,” Jeremy Moses, an associate professor in the Department of Political Science and International Relations at the University of Canterbury whose research focuses on the ethics of war and intervention, told Salon.

“Autonomous weapons are no more dehumanizing or contrary to human dignity than any other weapons of war,” Moses said. “Dehumanization of the enemy will have taken place well before the deployment of any weapons in war. Whether they are precision-guided missiles, remote-controlled drone strikes, hand grenades, bayonets, or a robotic quadruped with a gun mounted on it, the justifications to use these things to kill others will already be in place.”

If political and military decision-makers are concerned about mass killing by AI systems, they can choose not to deploy them, he explained. Whether the use is killing in war, mass surveillance, profiling, policing, or crowd control, the AI systems don't do the work of dehumanization, and they are not responsible for mass killing.

“[This] is something that is always done by the humans that deploy them and it is with the decision-makers that responsibility always lies,” Moses said. “We shouldn't allow the technologies to distract us from that.”

The Public Citizen report suggests that the United States pledge not to deploy autonomous weapons and support international efforts to negotiate a global treaty to that effect. However, these weapons are already being developed around the world, and that development is progressing rapidly.

Within the U.S. alone, the push for autonomous weapons will be driven by geopolitical rivalries and further accelerated by both the military-industrial complex and corporate contractors. Some of these military contractors, including General Dynamics, Vigor Industrial and Anduril Industries, are already developing unmanned tanks, submarines, and drones, according to the report.

Some weapon systems, like drones, are already unmanned even though they don't make judgments without human intervention, Wolfendale pointed out.

“So we already have a situation where it's possible for a military to inflict lethal force on individuals thousands of miles away while incurring no risk at all to themselves,” she added.

While some may defend drones because their ability to target precisely makes them less likely to be used to commit war crimes, that argument misses the fact that decisions about targets are based on all kinds of data, algorithms and entrenched biases that might direct weapons against people who are not legitimate targets, Wolfendale said.

“U.S. drone strikes in the so-called war on terror have killed, at minimum, hundreds of civilians – a problem due to bad intelligence and circumstance, not drone misfiring,” the Public Citizen report highlighted, adding that the introduction of autonomous systems will likely contribute to worsening the problem.

Promoters of AI in warfare will say that their technologies will “enhance alignment with ethical norms and international legal standards,” Moses said. But this demonstrates that there is a problem with the ethics and laws of war in general, in that they have become a “touchstone for the legitimation of warfare,” or “war humanizing,” as some would describe it, rather than the prevention of war. 

Weapons like drones can “spread the scope of conflict far beyond traditional battlefields,” Wolfendale pointed out.

When there isn’t a “definitive concrete cost” to engaging in conflicts, because militaries can fight in a way that’s “risk-free” for their own forces while the technology expands the reach of military force, it becomes difficult to see when conflicts will end, she explained.

Similar dynamics are playing out in Gaza, where the IDF has been experimenting with robots and remote-controlled dogs, Haaretz reported. As the article points out, Gaza has become a “testing ground” for military robots, where unmanned, remote-controlled D9 bulldozers are also being used.

Israel is also using an AI intelligence-processing system called The Gospel, “which has significantly accelerated a lethal production line of targets that officials have compared to a ‘factory,’” The Guardian reported. Israeli sources say the system is producing “targets at a fast pace” compared to what the Israeli military was previously able to identify, enabling a far broader use of force.

AI technologies like The Gospel function more as tools for “post-hoc rationalization of mass killing and destruction rather than promoting ‘precision,’” Moses said. The destruction of 60% of the residential buildings in Gaza is a testament to that, he said.

The dog-shaped walking robot that the IDF is using in Gaza was made by Philadelphia-based Ghost Robotics. The robot's primary use is to surveil buildings, open spaces and tunnels without jeopardizing Oketz Unit soldiers and dogs, according to the Haaretz report.

In media discussions, the use of such tools is “simultaneously represented as ‘saving lives’ whilst also dehumanizing the Palestinian people,” Moses said. “In this way, the technology serves as an attempt to make the war appear clean and concerned with the preservation of life, even though we know very well that it isn't.”

Moses said he doesn’t see the ethical landscape of war evolving at all. Within the past few decades, claims about more precise, surgical, and humanitarian war have increased public belief in the possibility of “good wars.” New weapons technologies almost always serve that idea in some way. 

“We should also be aware that the promotion of these types of weapons systems serves an economic function, with the military industry seeking to show that their products are 'battle-tested…'” Moses said. “The ethical debate is, once again, a distraction from that.” 

A real advance in “ethical thinking” about war would require us to treat all claims to clean and precise war with skepticism, regardless of whether it is being waged by an authoritarian or liberal-democratic state, he added. 

“War is always horrific and always exceeds legal and ethical bounds,” Moses said. “Robots and other AI technologies won't of themselves make that any better or worse. If we haven't learned that after Gaza, then that just serves to illustrate the current weakness of ethical thought on war.”
