Why the United Nations Must Move Forward With a Killer Robots Ban

If we don’t get a ban in place, there will be an AI arms race

3 min read
Russia's Uran-9 is an unmanned tank remotely controlled by human operators, who are "in the loop" to pull the trigger. Many observers fear that future AI-powered weapons will become fully autonomous, able to engage targets all on their own.
Image: Rosoboronexport via YouTube

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Killer robots are on the agenda of a major United Nations meeting in Geneva this week.

As part of a U.N. disarmament conference, participating countries will decide on Friday whether to begin formal discussions on a ban of lethal autonomous weapons, following three years of informal discussions.

Last July, thousands of researchers working in AI and robotics came together and issued an open letter calling for a preemptive ban on such weapons.

I was one of the organizers of the letter, and today I spoke at the U.N. for a third time calling once again for a ban.

The reason I have been motivated to do this is simple. If we don't get a ban in place, there will be an arms race. And the end point of this race will look much like the dystopian future painted by Hollywood movies like The Terminator.

Even before we reach that end point, such weapons will likely fall into the hands of terrorists and rogue nations, who will have no qualms about removing any safeguards or about turning these weapons against us.

And it won't simply be robots fighting robots. Conflicts today are asymmetric. It will mostly be robots against humans. So, contrary to what some robot experts might claim, many of those humans will be innocent civilians.

This is a terrible and terrifying prospect. But it does not have to end this way.

The world has decided collectively not to weaponize other technologies. We have bans on biological and chemical weapons. Most recently, we have banned several technologies including blinding lasers and antipersonnel mines.

And whilst these bans have not been 100 percent effective, the world is likely a better place with these bans than without.

These bans have not prevented related technologies from being developed. If you go into a hospital today, a "blinding" laser will actually be used to fix your eyes. But arms companies will not sell you one, and you will not find one on any battlefield.

The same should be true for autonomous weapons. We will not stop the development of the broad technology that has many other positive uses, like autonomous vehicles.

But if we get a U.N. ban in place, we will not have autonomous weapons on the battlefield. And this will be a good thing.

As with blinding lasers, there is unlikely to be a regulatory authority or inspection regime for autonomous weapons. Instead, the ban would be enforced by more subtle measures, like adverse publicity and, ultimately, moral stigma.

Professional organizations like the IEEE are starting to act in this space.

Earlier this week, the IEEE announced an initiative to develop ethical standards for the developers of autonomous systems. The initial report warns that autonomous weapons would destabilize international security, lead to unintended military escalation and even war, upset strategic balance, and encourage offensive actions.

The IEEE report contains a number of recommendations, including the need for meaningful human control over direct attacks employing such weapons. It also says that designing, developing, or engineering autonomous weapons beyond meaningful human control, to be used offensively or to kill humans, should be considered unethical.

Judging from the reactions I have received when speaking about this issue in public, many people around the world support the view that a ban would be a good idea.

Even nine members of the U.S. Congress wrote to the secretaries of state and defense last week supporting the call for a preemptive ban.

All technology can be used for good or bad. We need to make a conscious and effective decision soon to take the world down a good path. My fingers are crossed that the U.N. will take the first step on Friday.


How the U.S. Army Is Turning Robots Into Team Players

Engineers battle the limits of deep learning for battlefield bots

11 min read

RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center, in Maryland.

Evan Ackerman

This article is part of our special report on AI, “The Great AI Reckoning.”

"I should probably not be standing this close," I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to "go clear a path." It's then up to the robot to make all the decisions necessary to achieve that objective.
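
To make that distinction concrete, here is a minimal, hypothetical sketch in Python (it is not ARL's actual software, and every name in it is invented for illustration). It contrasts a scripted approach, where an operator specifies the object and the motion, with task-level autonomy, where the operator states only the objective and the robot fills in the steps.

from dataclasses import dataclass

@dataclass
class Obstacle:
    name: str
    position: tuple  # (x, y) in meters, illustrative values only

def perceive_obstacles():
    # Stand-in for the robot's camera and depth-sensor pipeline.
    return [Obstacle("tree_branch", (1.2, 0.4))]

def grasp_and_move(obstacle, drop_zone):
    # Stand-in for the manipulation subsystem.
    print(f"Grasping {obstacle.name} at {obstacle.position}; moving it to {drop_zone}")

def scripted_clearing():
    # Scripted style: the operator names the object and every motion.
    branch = Obstacle("tree_branch", (1.2, 0.4))
    grasp_and_move(branch, drop_zone=(5.0, 0.0))

def autonomous_clearing(drop_zone=(5.0, 0.0)):
    # Task-level autonomy: the operator only says "go clear a path";
    # the robot decides what to grasp and where to put it.
    for obstacle in perceive_obstacles():
        grasp_and_move(obstacle, drop_zone)

if __name__ == "__main__":
    autonomous_clearing()

The real system involves perception, world modeling, and grasp planning far beyond this toy example, but the division of decision-making it illustrates is the same.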
