Ban or No Ban, Hard Questions Remain on Autonomous Weapons

Banning autonomous weapons would be much more complicated than advocates have suggested

The Phalanx is a computer-controlled, radar-guided gun system that can automatically detect, track, and fire at incoming missiles and enemy aircraft. While some may consider weapons like the Phalanx to be purely defensive, the authors argue that it is hard to differentiate clearly between offensive and defensive weapons, and that this is just one of the many challenges a recent proposal calling for a ban on offensive autonomous weapons would face.
Photo: U.S. Army

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

Last month, over 1,000 robotics and artificial intelligence researchers signed an open letter calling for a ban on offensive autonomous weapons, putting new energy into an already spirited debate about the role of autonomy in weapons of the future.

These researchers join an ongoing conversation among lawyers, ethicists, academics, activists, and defense professionals on potential future weapons that would select, engage, and destroy targets without a human in the loop. As AI experts, the authors of the letter can help militaries better understand the risks associated with increasingly intelligent and autonomous systems, and we welcome their contribution to the discussion.

By calling for a ban on autonomous weapons, the letter raises a host of complex issues, and it will take continued engagement by scientists to help address them. In this article, we discuss some historical precedents for weapons bans, as well as some of the specific challenges that an effective restriction on lethal autonomous weapons would face.

The open letter specifically seeks to ban “offensive autonomous weapons beyond meaningful human control.” All three of the concepts captured in that statement—“offensive,” “autonomous weapon,” and “meaningful human control”—are ambiguous and lack common definitions. While some weapons are purely defensive, like the automated defensive systems (the Phalanx Close-In Weapon System, for example) that at least 30 countries employ today, academic research in international security shows that it is hard to differentiate clearly between offensive and defensive weapons. The key, instead, is how actors ultimately decide to use the weapons at their disposal.

“Autonomy is already used for many functions in offensive weapons, and has been for decades. These include computers that track and identify targets and cue them to human operators, as well as ‘fire and forget’ missiles and torpedoes . . . Useful definitions must precisely distinguish between existing uses of autonomy and future weapons.”

While advocates suggest banning machines that make “decisions” to kill, this is much more complicated than it may appear. Definitions of “autonomous weapons” and “meaningful human control” must be informed by the reality that autonomy is already used for many functions in offensive weapons, and has been for decades. These include computers that track and identify targets and cue them to human operators, as well as “fire and forget” missiles (for example, the AMRAAM air-to-air missile) and torpedoes that autonomously home in on human-designated targets once launched. Useful definitions must precisely distinguish between these existing uses of autonomy (many of which date back to World War II) and future weapons that would search over wide areas for targets and then decide whether to destroy them completely on their own.

The debate must also take into account the circumstances in which militaries are most likely to use autonomous weapons, and why. Autonomous weapons may bring to mind visions of humanoid robots stalking through populated areas, coldly deciding who lives and dies, but future weapons that target radars, tanks, ships, submarines, or aircraft on their own are far more likely. More intelligent systems, used in the right way, could help reduce civilian casualties in war, much as precision-guided weapons today allow militaries to precisely target specific enemy positions, avoiding the kind of indiscriminate area bombing that leveled cities in World War II. But many tasks in war will still require human judgment—for legal, ethical, or safety reasons.

“Autonomous weapons may bring to mind visions of humanoid robots stalking through populated areas, coldly deciding who lives and dies, but future weapons that target tanks, ships, submarines, or aircraft on their own are far more likely.”

Advocates for banning autonomous weapons often point to recent successful bans on land mines, cluster munitions, and blinding lasers to show that a ban is plausible. Yet history offers enough examples of both successful and unsuccessful weapons bans that advocates on either side can cherry-pick cases to support their position. In the early 20th century, some tried and failed to effectively regulate submarines and air-delivered weapons on the grounds that they were unfair and indiscriminate. In fact, these technologies proved useful and ubiquitous enough that they eventually became part of militaries' standard arsenals. Bans on chemical and biological weapons initially struggled but have since had more success, although these weapons still persist in the hands of rogue states, such as Syria. Chemical and biological weapons proved less useful over time for powerful countries than initially anticipated, and generated continuing moral and ethical qualms.

The most relevant precedents for successful regulation may be the Cold War-era restrictions on weapons seen as destabilizing, such as the prohibitions on placing nuclear weapons in space or on the seabed. These restrictions arose largely not from humanitarian concerns but for strategic reasons. The United States and the Soviet Union, despite their mutual hostility, still had a shared interest in avoiding instability, where conflict could quickly escalate out of control and certain types of weapons or deployment postures might incentivize a surprise attack. Even in a world of nuclear weapons, satellites, and intercontinental ballistic missiles, some weapons were seen as more dangerous than others.

This mixed history of arms control efforts suggests several key lessons for today:

  • Weapons cannot be regulated, restricted, or banned without clear distinctions between what is “allowed” and what is not. If nations cannot agree on where the line is between a semi-autonomous and an autonomous weapon, then they will not be able to avoid crossing that line even if they want to. In such a situation, a ban or regulation would be less likely to succeed.
  • An agreement to ban weapons is no guarantee of success. What those who seek a ban really desire is restraint—countries choosing to restrain the development and use of autonomous weapons. Agreements, legally binding or otherwise, can be useful tools for coordinating state action, but countries can violate treaties, publicly or in secret, or can choose simply not to join them. The challenge of ensuring that agreements have sufficient verification provisions to build trust between states will be especially difficult in the case of autonomous weapons, because it involves verification of software, rather than hardware. There are also successful examples of states restraining certain weapons, such as anti-satellite weapons or neutron bombs, without formal agreements, because they think those weapons are destabilizing and therefore believe developing them will not improve their security.
“If autonomous weapons turn out to be useful, someone will build them. Even if all of the major military powers agree to a ban, rogue states like Syria or North Korea are hardly interested in international goodwill, to say nothing of terrorist organizations . . . A disarmament regime that resulted in the most unsavory states having the upper hand in a conflict would hardly be a satisfactory outcome.”
  • Countries choose restraint for a variety of reasons. Up to this point, the argument against autonomous weapons has been framed largely as a humanitarian issue by non-government organizations, many of which were involved in previous bans on land mines and cluster munitions. Yet Western militaries that follow the rule of law can argue that the laws of war already cover these issues sufficiently. Countries that don't care about the rule of law are certainly less interested in a ban (or might agree to a ban and then cheat). It is also clear that some kinds of automation can help reduce casualties, but the line between what would be helpful and what would be harmful is not obvious in advance, which is why some legal experts argue we should not arbitrarily restrain ourselves now, but wait to see how the technology unfolds. Bans on cluster munitions and land mines succeeded in large part because activists simply went around governments by appealing directly to the public. This is less likely to work in the case of autonomous weapons because, unlike cluster munitions and land mines, there are no victims of autonomous weapons yet: it is a hypothetical future problem.

    Moreover, if a weapon's utility is marginal, then the international goodwill gained from adopting a ban may be sufficient incentive to give it up. But when weapons are seen, correctly or not, to have significant military value, then mutual restraint is usually necessary. Countries will want to know that their competitors are also restraining themselves if they are to give up a seemingly valuable weapon. Major military powers are unlikely to agree to a preemptive, legally binding ban when the military utility of the technology they are giving up is unclear. However, it is conceivable that militaries might restrain the development or use of certain autonomous weapons, and communicate that restraint to others, if they saw those weapons as destabilizing. A major factor is whether militaries believe autonomy increases their control over events on the battlefield, like automation in factories, or decreases control, by letting loose dangerous and uncontrollable weapons.
  • If the technological hurdles are low enough, someone will always cheat. This is particularly the case for robotic systems, where much of the technology is driven by the commercial sector and is widely available to the public. If autonomous weapons turn out to be useful, someone will build them. Even if all of the major military powers agree to a ban, rogue states like Syria or North Korea are hardly interested in international goodwill, to say nothing of terrorist organizations. This means that whatever weapons are “allowed,” they need to be sufficiently capable to defeat the weapons of those who “cheat.” A disarmament regime that resulted in the most unsavory states having the upper hand in a conflict would hardly be a satisfactory outcome.

The argument against autonomous weapons made by the AI and robotics community comes from a position of deep knowledge of this technology, along with genuine concern about where it may lead. In the near term, AI experts are discovering that machine intelligence can outperform humans in many instances but can, on occasion, produce weirdly counterintuitive results. In the long term, some experts fear that sufficiently advanced AIs could slip out of human control. If autonomous weapons pose novel risks, responsible militaries will want to understand them.

One of the most important steps in moving the discussion forward is a dialogue to better understand exactly why scientists are concerned about lethal autonomous weapons, and what it is they fear. If they perceive these systems as uniquely dangerous, then last month's open letter will have to be the beginning, not the end, of the conversation between the AI and robotics communities and national security policymakers.
