Why Should We Ban Autonomous Weapons? To Survive

While debate has focused on moral and legal issues of autonomous weapons, too little attention has been paid to the material dangers of an AI-powered war

An artist's rendering of the Long Range Anti-Ship Missile (LRASM), designed by the U.S. Navy and DARPA to hunt targets over a wide expanse. LRASM uses its own sensors and computers to discriminate enemy ships from civilian vessels.
Image: Lockheed Martin

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Killer robots pose a threat to all of us. In the movies, this threat is usually personified as an evil machine bent on destroying humanity for reasons of its own. In reality, the threat comes from within us. It is the threat of war.

In today's drone warfare, people kill other people from the safety of cubicles far away. Many do see something horrific in this. Even more are horrified by the idea of replacing the operator with artificial intelligence, and dispatching autonomous weapons to hunt and kill without further human involvement.

Proponents of autonomous weapons say their use is inevitable and natural, a mere extension of human will and judgment through the agency of machines. They question whether artificial intelligence will always be incapable of distinguishing civilians from combatants, or even of making reasonable tradeoffs between military gains and risk or harm to civilians. After all, they argue, people are often cruel and stupid, and soldiers under extreme stress sometimes go berserk and commit atrocities. What if autonomous weapons, used judiciously, could actually save lives of soldiers and civilians?

I'll agree that we can imagine circumstances in which using an intelligent autonomous weapon could cause less harm than a more destructive, dumb weapon, if those were the only choices. But human-controlled robotic weapons could often be just as effective, or it might be possible to avoid violence altogether. Autonomous weapons could also malfunction and kill innocents, with nobody held responsible. Which kind of situation would occur most often, and whether fielding autonomous weapons would be more or less deadly than prohibiting them, assuming everything else stayed the same, is endlessly debatable.

“The major powers are developing autonomous missiles and drones that will hunt ships, subs, and tanks, and piecing together highly automated battle networks that will confront each other and have the capability of operating without human control"

But everything else won't be the same. Proponents claim that machine intelligence and autonomous weapons will revolutionize warfare, and that no nation can risk letting its enemies have a monopoly on them. Even if this is exaggerated, it shows the potential for a strong stimulus to the global arms race.

These technologies are being pursued most vigorously by the nuclear-armed nations. In the United States, they are touted as the answer to rising challenges from China and Russia, as well as from lesser powers armed with modern weaponry. The major powers are developing autonomous missiles and drones that will hunt ships, subs, and tanks, and piecing together highly automated battle networks that will confront each other and have the capability of operating without human control.

Autonomous weapons are a salient point of departure in a technology-fueled arms race that puts everyone in danger. That is why I believe we need to ban them as fast and as hard as we possibly can.

A BRIGHT RED LINE

It's a view I've held for almost three decades, and it wasn't inspired by The Terminator, but by the 1988 incident in which a U.S. Navy air defense system mistakenly shot down an Iranian airliner. Although human error appears to have played the deciding role in that incident, part of the problem was excessive reliance on complex automated systems under time pressure and uncertain warnings of imminent danger—the classic paradigm for “accidental war."

At the time, as an intern at the Federation of American Scientists in Washington, D.C., I was looking at nanotechnology and the rush of new capabilities that would come as we learned to build ever more complex systems with ever smaller parts. We see that today in billion-transistor chips and the computers, robots, and machine learning systems they are making possible. I worried about a runaway arms race.

I was asked to come up with proposals for nanotechnology arms control. I decided it wasn't about banning teeny-tiny Kalashnikovs, but about identifying the qualitatively new things that emerging technologies would enable. One of my first ideas was a ban on autonomous kill decisions by machines.

“I knew that most people would agree we should not have killer robots, but when I started talking about banning them, people would mostly stare"

I knew that most people would agree we should not have killer robots. This made lethal autonomy a bright red line at which it might be possible to erect a roadblock to the arms race. I also knew that unless we resolved not to cross that line, we would soon enter an era in which, once the fighting had started, the complexity and speed of automated combat, and the delegation of lethal autonomy as a military necessity, would put the war machines effectively beyond human control.

But when I started to talk about banning killer robots, people would mostly stare.

Military people angrily denied that anyone would even consider letting machines decide when to fire guns and at what or at whom. For many years the U.S. military resisted autonomous weapons, concerned about their legality, controllability and potential for friendly-fire accidents. Systems like the CAPTOR mine, designed to autonomously launch a homing torpedo at a passing submarine, and the LOCAAS mini-cruise missile, designed to loiter above a battlefield and search for tanks or people to kill, were canceled or phased out. As late as 2013, a poll conducted by Charli Carpenter, a political science professor at the University of Massachusetts Amherst, found Americans against using autonomous weapons by 2-to-1, and tellingly, military personnel were among those most opposed to killer robots.

Yet starting in 2001, the use of armed drones by the United States began to make the question of future autonomous weapons more urgent. In a 2004 article, Juergen Altmann and I declared that “Autonomous 'killer robots' should be prohibited" and added that “a human should be the decision maker when a target is to be attacked." In 2009, Altmann, a professor of physics at Technische Universität Dortmund, co-founded the International Committee for Robot Arms Control, and at its first conference a year later, I suggested human control as a fundamental principle. The unacceptability of machine decision in the use of violent force could be asserted, I argued, without need of scientific or legal justification.

In 2012, Human Rights Watch began to organize the Campaign to Stop Killer Robots, a global coalition that now includes more than 60 nongovernmental organizations. The issue rose to prominence with astonishing speed, and the United Nations Convention on Certain Conventional Weapons (CCW) held its first “Meeting of Experts on Lethal Autonomous Weapon Systems" in May 2014, and another the following year. This past April, the third such meeting concluded with a recommendation to form a “Group of Governmental Experts," the next step in the process of negotiating… something.

Many statements at the CCW have endorsed human control as a guiding principle, and Altmann and I have suggested cryptographic proof of accountable human control as a way to verify compliance with a ban on autonomous weapons. Yet the CCW has not set a definite goal for its deliberations. And in the meantime, the killer robot arms race has taken off.
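
To make that idea concrete, here is a minimal, purely illustrative sketch, not the scheme Altmann and I have proposed, with hypothetical order text and function names: an engagement order counts as human-authorized only if it carries a digital signature made with a key held by an accountable human operator, which an auditor can later verify.

```python
# Illustrative sketch only: a signature from a human operator's key serves as
# auditable proof that a person, not a machine, authorized a specific order.
# Uses the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The operator's key pair; in practice the private key would live in a
# hardware token held by the human, never inside the weapon system.
operator_key = Ed25519PrivateKey.generate()
operator_public = operator_key.public_key()

def authorize(order: bytes) -> bytes:
    """The human operator signs one specific engagement order."""
    return operator_key.sign(order)

def verify_human_control(order: bytes, signature: bytes) -> bool:
    """An auditor (or the weapon itself) checks the order was human-signed."""
    try:
        operator_public.verify(signature, order)
        return True
    except InvalidSignature:
        return False

order = b"engage target 42 in sector 7"   # hypothetical order text
sig = authorize(order)
print(verify_human_control(order, sig))          # True: human-authorized
print(verify_human_control(order + b"x", sig))   # False: order was altered
```

A real system would also need trusted hardware, secure key management, and tamper-evident logs, but the signature is what ties a specific human to a specific lethal decision and makes that accountability verifiable.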

FULL SPEED AHEAD

In 2012, the Obama administration, via then-undersecretary of defense Ashton Carter, directed the Pentagon to begin developing, acquiring, and using “autonomous and semi-autonomous weapon systems." Directive 3000.09 has been widely misperceived as a policy of caution; many accounts insist that it “requires a human in the loop." But instead of human control, the policy sets “appropriate levels of human judgment" as a guiding principle. It does not explain what that means, but senior officials are required to certify that autonomous weapon systems meet this standard if those systems will select and kill people without human intervention. The policy clearly does not forbid such systems. Rather, it permits the withdrawal of human judgment, control, and responsibility from points of lethal decision.

Video: Lockheed Martin
Long Range Anti-Ship Missiles are "semi-autonomous" systems that hunt for enemy ships and identify and attack them without further human intervention. In the video at 3:20-3:40 a missile uses its own sensors and computers to distinguish the targets from innocent ships within an initial AOU (area of uncertainty).

The policy has not stood in the way of programs such as the Long Range Anti-Ship Missile, slated for deployment in 2018, which will hunt its targets over a wide expanse, relying on its own computers to discriminate enemy ships from civilian vessels. Weapons like this are classified as merely “semi-autonomous" and get a green light without certification, even though they will be operating fully autonomously when they decide which pixels and signals correspond to valid targets, and attack them with lethal force. Every technology needed to acquire, track, and identify targets, to home in on them, or to control firing can be developed and used in “semi-autonomous weapon systems," which can even be sent on hunt-and-kill missions as long as the quarry has been “selected by a human operator." (In case you're wondering, “target selection" is defined as “The determination that an individual target or a specific group of targets is to be engaged.") It's unclear that the policy stands in the way of anything.

In reality, the directive signaled an upward inflection in the trend toward killer robots. Throughout the military there is now open discussion about autonomy in future weapon systems; ambitious junior officers are tying their careers to it. DARPA and the Navy are particularly active in efforts to develop autonomous systems, but the Air Force, Army, and Marines won't be left out. Carter, now the defense secretary, is heavily promoting AI and robotics programs, establishing an office in Silicon Valley and a board of advisors to be chaired by Eric Schmidt, the executive chairman of Google's parent company Alphabet.

The message has been received globally as well. Russia in 2013 moved to create its own versions of DARPA and of the U.S. Navy's Laboratory for Autonomous Systems Research, and deputy prime minister Dmitry Rogozin called on Russian industry to create weapons that “strike on their own," pointing explicitly to American developments. China, too, has been developing its own drones and robotic weapons, mirroring the United States (but with less noise than Russia). Britain, Israel, India, South Korea… in fact, every significant military power on Earth is looking in this direction.

“The United States has been leading the robot arms race, both with weapons development and with a policy that pretends to be cautious and responsible but actually clears the way for vigorous development and early use of autonomous weapons"

Both Russia and China have engaged in aggressive actions, arms buildups, and belligerent rhetoric in recent years, and it's unclear whether they could be persuaded to support a ban on autonomous weapons. But we aren't even trying. Instead, the United States has been leading the robot arms race, both with weapons development and with a policy that pretends to be cautious and responsible but actually clears the way for vigorous development and early use of autonomous weapons.

Deputy defense secretary Robert Work has championed the notion of a “Third Offset" in which the United States would leap to the next generation of military technologies ahead of its “adversaries," particularly Russia and China. To calm fears about robots taking over, he emphasizes “human-machine collaboration and combat teaming" and says the military will use artificial intelligence and robotics to augment, not replace, human warfighters. Yet he worries that adversaries may field fully autonomous weapon systems, and says the U.S. may need to “delegate authority to machines" because “humans simply cannot operate at the same speed."

Work admits that the United States has no monopoly on the basic enabler, information technology, which today is driven more by commercial markets than by military needs. Both China and Russia have strong software and cyber hacking capabilities. Their latest advanced fighters, tanks, and missiles are said to rival ours in sophistication. Work compares the present to the “inter-war period" and urges the U.S. to emulate Germany's invention of blitzkrieg. Has he forgotten how that ended?

DARPA and the U.S. Office of Naval Research recently unveiled the Sea Hunter, an unmanned vessel designed to track enemy submarines. The current prototype doesn't have weapons, but during a ceremony in April, deputy defense secretary Robert Work raised the possibility of arming the Sea Hunter in the future.
Photo: DARPA

A DISASTER WAITING TO HAPPEN

Nobody wants war. Yet, fearing enemy aggression, we position ourselves at the brink of it. Arms races militarize societies, inflate threat perceptions, and yield a proliferation of opportunities for accidents and mistakes. In numerous close calls during the Cold War, it came down to the judgment of one or a few people not to take the next step in a potentially fatal chain of events. But machines simply execute their programs as written, and they can also behave in ways we did not intend or expect.

“Networks of autonomous weapons could accidentally ignite a war and, once it has started, rapidly escalate it out of control. To set up such a disaster waiting to happen would be foolish"

Our experience with the unpredictable failures and unintended interactions of complex software systems, particularly competitive autonomous agents designed in secrecy by hostile teams, serves as a warning that networks of autonomous weapons could accidentally ignite a war and, once it has started, rapidly escalate it out of control. To set up such a disaster waiting to happen would be foolish, but not unprecedented. It's the type of risk we took during the Cold War, and it's similar to the military planning that drove the march to war in 1914. Arms races and confrontation push us to take this kind of risk.

Paul Scharre, one of the architects of Directive 3000.09, has suggested that the risk of autonomous systems acting on their own could be mitigated by negotiating “rules of the road" and including humans in battle networks as “fail-safes." But it's asking a lot of humans to remain calm when machines indicate an attack underway. By the time you sort out a false alarm, autonomous weapons may actually have started fighting. If nations can't agree to the simple idea of a verified ban to avoid this danger, it seems less likely that they will be able to negotiate some complicated system of rules and safeguards.

Direct authority to launch a nuclear strike may never be delegated to machines, and a war between the United States and China or Russia might not end in nuclear war, but do we want to take that risk? There is no reason to believe we can engineer safety into a tense confrontation between networks of autonomous weapons at the brink of war. The further we go down that road, the harder it will be to walk back. Banning autonomous weapons and asserting the primacy of human control isn't a complete solution, but it is probably an essential step toward ending the arms race and building true peace and security.

BACK TO BASICS

The fundamental problem is conflict itself, which pits human against human, reason against reason, and machine against machine. We struggle to contain our conflicts, but passing them on to machines risks finding ourselves nominally still in command yet unable to control events at superhuman speed.

We are horrified by killer robots, and we can ground their prohibition on strong a priori principles such as human control, responsibility, dignity—and survival. Instead of endlessly debating the validity of these human prejudices, we should take them as a saving grace, and use them to stop killer robots.
