

Autonomous Weapons "Could Be Developed for Use Within Years," Says Arms-Control Group

Critics warn that robotic weapons will soon be able to make killing decisions on their own


Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

Israel Aerospace Industries’ Harop drones are designed to home in on the radio emissions of enemy air-defense systems and destroy them by crashing into them. IAI says the weapon is an “autonomous platform” but offers a “man-in-loop attack” mode. Critics argue that systems like the Harop are precursors to fully autonomous weapons that will make targeting and attack decisions without human intervention.
Photo: Israel Aerospace Industries

A United Nations meeting this week in Geneva is debating the future of autonomous weapons—the controversial weapons systems capable of selecting and engaging targets without human intervention. Delegates from about 90 countries are discussing the far-reaching technical, legal, and ethical questions that these robotic weapons raise, and at the end of the week-long meeting they hope to agree on what to do next.

Mary Wareham, global coordinator of the Campaign to Stop Killer Robots.
Photo: HRW

For a group of arms control activists that has long been concerned about autonomous weapons, the next step countries should take is clear: they should ban these weapons. That’s the goal of the Campaign to Stop Killer Robots, a coalition of non-governmental organizations that was formed in late 2012 and whose steering committee includes groups like Human Rights Watch and the International Committee for Robot Arms Control.

On Monday Human Rights Watch released a memorandum to the delegates in Geneva calling for countries to “adopt an international, legally binding instrument that prohibits the development, production, and use of fully autonomous weapons.” The document also warned that such weapons “could be developed for use within years, not decades.”

To learn more about what’s at stake at the UN meeting and why autonomous weapons should be banned, IEEE Spectrum spoke recently with Mary Wareham, advocacy director of the arms division of Human Rights Watch, in Washington, D.C., and the global coordinator of the Campaign to Stop Killer Robots.

IEEE Spectrum: Some experts say autonomous weapons are potentially weapons of mass destruction, while others argue that they could actually reduce, not increase, casualties. What kinds of autonomous weapons are you concerned about?

Mary Wareham: When we look at autonomous weapons, our concern is about the degree of human control over their targeting and attack functions. Those are the functions that we think should always be under human control, and that’s what the debate is coming down to.

We have done our best to present what we call precursor systems, which do not represent fully autonomous weapons, but show us in real time and in real terms how weapons systems are becoming ever more autonomous to the degree that eventually the human is going to be taken out of the loop unless we do something about it.

Do we know what countries are developing autonomous weapons or are interested in pursuing them?

Israel and the United States have expressed interest in the benefits of those kinds of weapons systems, and they want to talk about that internationally. The United States is constrained by a policy [Directive 3000.09] that inhibits their use of lethal fully autonomous weapons systems but greenlights their development. And you see with this big offset strategy that’s been talked about a lot, that artificial intelligence is part of the next steps for the U.S. military.

We hear also from other countries, like Canada, Japan, U.K., and the Netherlands, that they are not currently developing these systems, or that they do not want these systems, but we also see developments that concern us in some of those countries and elsewhere. So there’s an urgency here. And the urgency is also driven by the fact that diplomacy moves incredibly slowly. We haven’t progressed in the last three years; the only thing that we’ve managed to do is to get the governments to start talking about this.

“Weapons systems are becoming ever more autonomous to the degree that eventually the human is going to be taken out of the loop unless we do something about it.”

That seems like progress given the complexity of this issue. What are the delegates going to discuss at this meeting in Geneva?

Delegates and other experts attending the meeting are going to revisit some of the same topics that they’ve discussed in the past, looking at legal and ethical issues, human rights and international law implications, as well as technical, operational, and security aspects.

They are also still trying to come up with a definition of “fully autonomous weapon,” and of what human control means in that context—should it be an “appropriate level,” should it be “effective,” should it be “meaningful”—and that’s going to be another big part of the deliberations. I guess our fear at the Campaign is that many of the presentations made by the experts are too long, and there’s not enough time for countries to weigh in and significantly advance the discussion.

Is this meeting more important than the previous ones in 2014 and 2015 because of the upcoming Convention on Conventional Weapons (CCW) review conference in December, when an actual ban proposal could be considered?

Yes, the countries are going to be considering recommendations for action. And those recommendations, if they can come up with them, which is not an easy feat, are what the CCW review conference at the end of the year is going to decide on. The CCW is a framework treaty, so it’s got five existing protocols dealing with different weapons systems and different concerns. Some of those are bans, some of those are restrictions. So for us in the Campaign, we’re excited to see this issue go into that forum, because it’s the place where you can add another protocol.

And at the CCW conference, what do you think could happen?

Three possible things can happen. The best case scenario, from our point of view, would be if these governments could agree to begin the process of negotiating a new protocol. And for us, that protocol should result in a ban, but it’s really up to the states concerned to decide if they want to restrict, regulate, or what they want to do and how they want to do that.

Another scenario is that they kind of agree to just keep discussing this topic. And for us that would be an entirely inadequate response, because they’ve been talking about it for the last three years and they haven’t actually done anything about it yet and we’re impatient for that to start happening.

The worst case scenario is that things fall apart at the review conference and nothing is agreed on. And that’s always a possibility, because any decision there requires the consensus of all of the countries that are participating. If things fall apart, it will be up to a group of countries that are committed to working with civil society to go outside of the CCW and negotiate a ban treaty for autonomous weapons. That’s what happened before as a result of failures at the CCW to deal with landmines in the 1990s and cluster bombs in the 2000s.

“Drone technology is starting to proliferate from the handful of countries that had it into more hands . . . We expect the same to happen with autonomous weapons if it’s not brought under control very quickly.”

Some have warned about the dangers of an AI arms race. But with all the interest that militaries seem to have in robotics and artificial intelligence, and the increasing presence of unmanned systems on the battlefield, isn’t such a race already happening?

That’s what we call proliferation of the technology, and you see that happening at the moment with armed drones. Technology is really starting to proliferate now from the handful of countries that had it into more hands, and eventually it will end up in the hands of non-state armed groups. We expect the same to happen with this kind of fully autonomous technology if it’s not brought under control very quickly.

But I guess we’re quite tightly focused on the actual weaponization of artificial intelligence, rather than sophisticated artificial intelligence more generally, and some of those concerns that are being expressed in the bigger debate. Another thing to emphasize here is that we’re not seeking to outlaw the technology, or autonomy, or artificial intelligence, at all. We’re just talking about the military applications and what happens when you weaponize the functions of targeting and attack. That’s what we want to concentrate on.

Why call for an outright ban, which some observers argue would not be effective, as opposed to more limited restrictions that would be easier to get countries to agree to, like guidelines that anyone pursuing these weapons would have to follow?

When we first looked at this topic at Human Rights Watch, we came to the determination pretty quickly that there were so many concerns that a complete prohibition on the development, production, and use of these weapons systems was necessary. That’s when our organization and other likeminded non-governmental groups came together to form the Campaign.

We’re more than happy to talk through regulations if people want to, but ultimately we don’t see that as satisfying the need that we see for a complete prohibition. And in our experience there’s a lot of reasons why a complete ban makes a lot more sense than partial regulation, especially in terms of enforcement and verification and other things. But ultimately this is the decision of governments to make. And they might go all the way or they might not go all the way.

This interview has been edited and condensed.
