Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature

If autonomous weapons are capable of reducing casualties, there may exist a moral imperative for their use

Afghan children in the village of Chahar Darreh walk past armored vehicles in 2010. The author argues that autonomous robotic military systems might be capable of reducing civilian casualties when compared to the performance of human warfighters.
Photo: Miguel Villagran/iStockphoto

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. This article contains excerpts from “Lethal Autonomous Systems and the Plight of the Non-combatant,” published in AISB Quarterly, No. 137, July 2013.

I've been engaged in the debate over autonomous robotic military systems for almost 10 years. I am not averse to a ban, but I'm convinced we should continue researching this technology for the time being. One reason is that I believe such systems might be capable of reducing civilian casualties and property damage when compared to the performance of human warfighters. I therefore contend that calling for an outright ban on this technology, as some groups are already doing, is premature.

“Humanity has a rather dismal record of ethical behavior on the battlefield. (...) There is clear room for improvement, and autonomous systems may help address some of these problems.”

Past and present trends in human behavior in warfare, as regards adherence to legal and ethical requirements, are questionable at best. Unfortunately, humanity has a rather dismal record of ethical behavior on the battlefield. Potential explanations for the persistence of war crimes include:

  • high friendly losses, leading to a tendency to seek revenge;
  • high turnover in the chain of command, leading to weakened leadership;
  • dehumanization of the enemy through the use of derogatory names and epithets;
  • poorly trained or inexperienced troops;
  • no clearly defined enemy;
  • unclear orders, where the intent of the order may be interpreted incorrectly as unlawful;
  • youth and immaturity of troops;
  • external pressure, e.g., a perceived need to produce a high body count of the enemy;
  • and pleasure from the power of killing, or an overwhelming sense of frustration.

There is clear room for improvement, and autonomous systems may help address some of these problems.

Why technology can lead to a reduction in casualties on the battlefield

Is there any cause for optimism that autonomous robotic military systems can lead to a reduction in noncombatant casualties? I believe so, for the following reasons.

  • The ability to act conservatively: that is, they do not need to protect themselves in cases of low certainty of target identification. Autonomous armed robotic vehicles do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner if needed and appropriate, without reservation, by a commanding officer. There is no need for a 'shoot first, ask questions later' approach; a 'first, do no harm' strategy can be utilized instead. They can truly assume risk on behalf of the noncombatant, something soldiers are schooled in but that some have difficulty achieving in practice.
  • Unmanned robotic systems can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events.
  • Intelligent electronic systems can integrate information from far more sources, far faster, than a human possibly could in real time before responding with lethal force.
  • When working as an organic asset within a team of human soldiers, autonomous systems could independently and objectively monitor the ethical behavior of all parties on the battlefield, providing evidence of and reporting any infractions they observe. This presence alone might lead to a reduction in human ethical infractions.

There are profound technological challenges to be resolved, such as effective in situ target discrimination and recognition of the status of those otherwise hors de combat, among many others. But if a warfighting robot can eventually exceed human performance in adhering to international humanitarian law, that would equate to saving noncombatant lives, and would thus be a humanitarian effort. Indeed, if this is achievable, there may even exist a moral imperative for its use, owing to the resulting reduction in collateral damage, similar to the moral imperative Human Rights Watch has asserted with respect to precision-guided munitions in urban settings. This seems to contradict that organization's call for an outright ban on lethal autonomous robots before research determines whether they could indeed afford better protection to noncombatants.

“If a warfighting robot can eventually exceed human performance in adhering to international humanitarian law, that would equate to saving noncombatant lives, and would thus be a humanitarian effort. Indeed, if this is achievable, there may even exist a moral imperative for its use.”

Let us not stifle research in this area or accede to the fears that Hollywood and science fiction in general foist upon us. Merely stating that these systems cannot be created to perform properly and ethically does not make it true. If that were so, we would not have supersonic aircraft, space stations, submarines, self-driving cars, and the like. I see no fundamental scientific barriers to the creation of intelligent robotic systems that can outperform humans with respect to moral behavior. The use and deployment of ethical autonomous robotic systems is not a short-term goal for current conflicts, which are typically counterinsurgency operations; rather, it will take considerable time and effort to realize in the context of interstate warfare and situations involving bounded morality.

I am not opposed to the removal of lethal autonomous systems from the battlefield if international society deems it fit, but I think this technology, done correctly, can actually foster humanitarian treatment of noncombatants. I have argued to those who call for a ban that they would be better served by calling for a moratorium, but even that is hard to envision occurring unless these systems can be shown to be in clear violation of the laws of war. It is not clear how one can bring the necessary people to the table for discussion when starting from a position for a ban derived from pure fear and pathos.
