
Human Rights Watch is Apparently Terrified of Military Robots, But You Shouldn't Be

Why it's not productive to preemptively ban autonomous military robots



Human Rights Watch, an international organization that advocates for human rights around the world, published a report about a week ago warning against the development of autonomous armed robots and suggesting that they be preemptively banned by international treaty. We're unashamedly pro-robot around here, but this is an issue that I do take very seriously, and there are some good reasons why I think that attempting to ban autonomous armed robots is a mistake.

Here's a short video that HRW put together outlining their position:

First, let me just say how much I HATE it when people talk about "killer robots" in this context, because it's an intentionally deceptive term that's deliberately designed to scare people who aren't familiar with how robots work. Calling a robot "killer" ascribes some sinister or dangerous motive to it, but robots don't have motives. They have programming, and humans program them. Even a robot that's "autonomous" is simply working through logic in a way that a human told it to: robots, including the autonomous ones, cannot really think for themselves.

Hrmph.

Anyway, HRW's misunderstanding about autonomous robots is summed up in the first few minutes of their video, with the following statement:

"Killer robots... would be able to be programmed, set free, and make the decisions about when, where, and who to attack."

Okay, see, step one is that you program the killer robots. When you set them free, all they're doing is executing the programs that you told them to execute. They're not making decisions about when or where or who to attack, they're just following your instructions. If you decide to program them with instructions like "attack random humans at a random time and random place," then okay, they'll do that, and we should probably be worried. But even in that absurdly extreme case, it's easy to identify the dangerous part of the equation, and it's not the robot, it's the human.

This next bit comes from Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, who I respect very much but have had substantial disagreements with in the past (and, as it turns out, the present):

"There's nothing in artificial intelligence or robotics that could discriminate between a combatant and civilian. It would be impossible to tell the difference between a little girl pointing an ice cream at a robot or someone pointing a rifle at it."

Call me optimistic, but I disagree with that assertion. I believe that robots have the capacity to quite easily surpass humans (in the very near future) when it comes to differentiating between a little girl with an ice cream cone and someone with a rifle, if for no other reason than that robots have a lot more information at their disposal than humans do. A robot can use high-resolution cameras, infrared imaging, ultraviolet imaging, radar, LIDAR, data feeds from other robots, and anything else you can think of, all at once, to determine very quickly how tall a person is, how much they weigh, and whether they're holding an ice cream cone made of ice cream or a gun made of metal. It can do this without much concern for whether it's dark, bright, loud, smoky, or rainy, whether the robot next to it has just had its head blown off, or some combination of all of those things.

And if for some reason the robot can't reliably determine whether someone is threatening it with an ice cream cone or a slightly more dangerous weapon? That's fine. The robot can be programmed to not take action. It's that simple.
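
To make that concrete, here's a minimal sketch of what "when in doubt, do nothing" looks like as code. Everything in it, the Detection structure, the decide_action function, the 0.99 confidence threshold, is a made-up illustration rather than anything from HRW's report or a real weapons system; the point is simply that the default behavior under uncertainty is a rule a human writes down ahead of time, not a judgment the robot conjures up on its own.

```python
from dataclasses import dataclass

# Hypothetical fused picture of one detected person. In a real system these
# fields would be filled in from cameras, infrared, radar, LIDAR, and data
# shared by other robots.
@dataclass
class Detection:
    height_m: float           # estimated height from stereo vision or LIDAR
    carried_object: str       # best guess from the vision classifier
    object_confidence: float  # classifier confidence, 0.0 to 1.0

# The rules are written by humans, in advance, and can be audited.
CONFIDENCE_THRESHOLD = 0.99
HOSTILE_OBJECTS = {"rifle", "rpg"}

def decide_action(detection: Detection) -> str:
    """Default to holding fire unless the sensors are highly confident the
    person is carrying a weapon; even then, just flag a human operator."""
    if (detection.carried_object in HOSTILE_OBJECTS
            and detection.object_confidence >= CONFIDENCE_THRESHOLD):
        return "alert_human_operator"
    return "hold_fire"

# A little girl holding an ice cream cone: wrong object class, so hold fire.
print(decide_action(Detection(height_m=1.1,
                              carried_object="ice_cream",
                              object_confidence=0.97)))  # -> hold_fire
```

This toy logic obviously isn't adequate for a real battlefield, but it shows where the responsibility sits: the threshold and the fallback are decisions made by the people doing the programming, which is exactly the point.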

The rest of the video is mostly just speculation about all the things that could go wrong if existing systems transition from human-in-the-loop control to full autonomy. It's worth pointing out that just last week, Deputy Defense Secretary Ashton Carter signed a series of directives stating that military robots will never, ever, ever be operated without a human in the loop, and mandating hardware and software safety systems to back that up. As Danger Room reports:

Before the Pentagon agrees to develop or buy new autonomous or somewhat autonomous weapons, a team of senior Pentagon officials and military officers will have to certify that the design itself “incorporates the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” The machines and their software need to provide reliability assurances and failsafes to make sure that’s how they work in practice, too. And anyone operating any such deadly robot needs sufficient certification in both the system they’re using and the rule of law.

Whether or not you trust roboticists to develop autonomous or semi-autonomous weaponized systems safely, HRW's solution of preemptively banning such robots is not practical. Robots are already a major part of the military, and their importance is only going to increase as technology improves and more and more dangerous tasks are handed over to robots that don't have families to go home to. You can't simply outlaw progress because you think something bad might happen, and attempting to do so seems, frankly, rather shortsighted; it ignores all of the contributions that military robotics has made, and continues to make, to the civilian sector.

Essentially, my disagreement with HRW's proposal comes down to the fact that they are pessimistic about robotics, while I am optimistic. They see autonomous armed robots as something bad, while I see more potential for good. I believe that it's possible to program robots to act in an ethical manner, and I also believe that robots can act as ethically as, or more ethically than, humans in combat situations. No program is bug-free, and I have no doubt that there will be accidents with autonomous weaponized systems. But what we should be asking ourselves is whether the deployment of autonomous armed robots will, on the whole, be detrimental or beneficial to humans in conflict.

Ultimately, robots may prove to be much, much better than humans at identifying enemy combatants. Robots don't get tired. They don't get stressed. They don't get emotional. Robots can be instructed not to use force to protect themselves. We have to remember that humans make mistakes all the time, terrible, horrible mistakes. And robots will make mistakes too. But with a robot, you can go in and fix the programming so that it never makes that mistake again, and then push that same fix to every other robot.

The future is a robotic one, and like it or not, we're going to experience all of the different facets of that future. We can't pick and choose where technology will take us, and attempting to come up with some "international legally binding instrument" because we're scared isn't productive. What is productive is to learn as much as we can about the technology and create a constructive dialogue, learning slowly and safely what works and what doesn't without resorting to buzzwords and hype. The robotics world is full of very smart people who are thinking about this problem (and many others) very carefully, and I believe that there is an answer (or many answers), although perhaps not one that's Three Laws compliant.

I've only just scratched the surface of all this (and this is a freakin' long post already), so it's certainly worth your time to read the entire Human Rights Watch report. It's online here, or available in PDF format here. I'd also recommend this article on Danger Room, as well as this additional analysis on the HRW report from Kenneth Anderson and Matthew Waxman. And then let us know what you think by posting a comment below.
