Commentary On Killer Robots Is Mostly Bunk

The ethics of using armed robots in combat is a huge issue that's rapidly becoming more relevant -- but some people are hyping it


Evan Ackerman is IEEE Spectrum’s robotics editor.

I can’t fault people for writing articles that make use of the term “killer robots.” It’s sexy, and it attracts attention. I mean, I kinda just did it myself, didn’t I? An article by Johann Hari for the opinions section of The Independent takes this several steps too far, however, by making false assertions about the motives and capabilities of unmanned combat robots:

Every time you hear about a “drone attack” against Afghanistan or Pakistan, that’s an unmanned robot dropping bombs on human beings. Push a button and it flies away, kills, and comes home. Its robot-cousin on the battlefields below is called SWORDS: a human-sized robot that can see 360 degrees around it and fire its machine-guns at any target it “chooses”.

Why is “chooses” in quotes? It’s in quotes because that’s not the way it works, the author knows that’s not the way it works, and he’s covering his ass. Here’s the next paragraph:

At the moment, most are controlled by a soldier – often 7,500 miles away – with a control panel. But insurgents are always inventing new ways to block the signal from the control centre, which causes the robot to shut down and “die”. So the military is building “autonomy” into the robots: if they lose contact, they start to make their own decisions, in line with a pre-determined code.

See those quotes again? If you’ve been reading this blog long enough, you should be able to figure out why they’re there. Obviously, the robots don’t “die.” And “autonomy” is in quotes because the previous paragraph talked about firing a machine gun at autonomously chosen targets, which is not at all the way it works. In fact, the way it works is the exact opposite of what the author is insinuating with his quotation marks: when a combat robot loses its signal, the only active actions it will take are to try to reacquire the signal or (in some cases) to try to get home, even if that’s an impossibility. It won’t just start shooting at people.
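The lost-link behavior described here can be sketched as a simple fail-safe routine. This is purely illustrative -- the function names, retry limit, and structure are all hypothetical, not taken from any real system -- but it captures the point: on signal loss the vehicle only retries the link or heads home, and the weapons path is never touched without an operator command.

```python
# Hypothetical sketch of a lost-link fail-safe -- NOT real control code.
# All names and the retry limit are assumptions for illustration only.

REACQUIRE_ATTEMPTS = 5  # assumed number of retries before falling back


def lost_link_behavior(try_reacquire_link, return_home):
    """Run the fail-safe after signal loss.

    Retries the link a fixed number of times, then falls back to
    returning home. Note what is absent: there is no code path that
    engages a target -- firing always requires an explicit operator
    command over a live link.
    """
    for _ in range(REACQUIRE_ATTEMPTS):
        if try_reacquire_link():
            return "link restored"
    return_home()
    return "returning home"
```

The design choice worth noticing is that autonomy here means falling back to a safe, pre-determined behavior, not making new engagement decisions.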

This, really, is what bothers me most about these articles: They’re basically full of lies of a sort, designed to scare people who don’t know the facts. No, the author isn’t actually publishing false statements (I guess), but that stuff in quotes isn’t exactly true, and it’s only in there so that people who don’t take the time to find out what is true (most people) will use it to jump to the obvious, and wrong, and inevitably terrifying conclusion.

More, including a pretty funny video of a robot totally NOT killing the Japanese prime minister, after the jump.

The point behind combat robots is, of course, that it’s better to have a robot in combat than a human, because if something goes wrong, it’s better to have a destroyed robot than a dead person. So that’s good, right?

But the evidence punctures this techno-optimism. We know the programming of robots will regularly go wrong – because all technological programming regularly goes wrong. Look at the place where robots are used most frequently today: factories. Some 4 per cent of US factories have “major robotics accidents” every year – a man having molten aluminum poured over him, or a woman picked up and placed on a conveyor belt to be smashed into the shape of a car. The former Japanese Prime Minister Junichiro Koizumi was nearly killed a few years ago after a robot attacked him on a tour of a factory. And remember: these are robots that aren’t designed to kill.

Think about how maddening it is to deal with a robot on the telephone when you want to pay your phone bill. Now imagine that robot had a machine-gun pointed at your chest.

Robots find it almost impossible to distinguish an apple from a tomato: how will they distinguish a combatant from a civilian?

Don’t mind me while I pound my head against my keyboard… Let’s start with the easy one, the thing about Japanese Prime Minister Junichiro Koizumi being “nearly killed” a few years ago by an industrial robot that “attacked” him. Here’s a video of what I’m pretty sure is the attack:

I guess maybe he has asthma or something, and that’s hysterical laughter because everybody is in shock from their brush with death.

As for the man who got aluminum poured over him, or the woman who got smashed into the shape of a car… I can’t find any mention of these events, or events similar to these, and I have to believe that such things would have made the news, since “killer robots” are, after all, so sexy. I don’t want to say that the author made these incidents up to shock people, but if anyone can find references to anything like this, I’d be much obliged.

It’s certainly true that programming is fallible, and hardware is fallible, no matter what a robot is designed to do. I won’t belabor the fact that humans are fallible too (and less easy to troubleshoot and reprogram), since I’ve done it before (a few times). But you can’t compare combat robots to automated telephone systems. That’s just stupid, and the only reason to tell someone to imagine that is to scare them. You might as well compare apples to tomatoes. Once again, I won’t belabor the fact that robots can be programmed with the same type of combat rules that humans follow, and if you see one red fruit that’s shooting at you and one red fruit that’s not, it isn’t too difficult to tell which one’s the tomato (because tomatoes are always the bad guys).

Johann Hari does raise a couple of relevant points toward the end of the article… It is important to consider whether it becomes easier to participate in an armed conflict when robots are put at risk instead of humans, and what reaction the use of robots engenders in others. I still maintain that robots can be used responsibly, and that getting humans out of combat is a good thing. But either way, these are human issues, not robot issues. Robots are what we make them, and what we make of them, nothing more, and nothing less. Even autonomous robots are simply carrying out a series of commands programmed into them by a human; they’re not (in the strictest sense) making decisions on their own.

I’ve gone on long enough, so let me just say this: the ethics of using armed robots in combat is a huge issue that’s rapidly becoming more relevant, and it’s important to have intelligent, well-informed debate on the subject.

This article, and articles like it, do not provide an intelligent and well-informed perspective. This article is designed to scare people who are unfamiliar with robotics. I mean, it’s not even opinion… Opinion takes facts and gives a perspective, but you have to start with facts, not hyperbole.

I probably shouldn’t waste my time and energy getting so upset at crap like this, but the fact is, a lot of people read this kind of thing, and it gives a horribly negative impression of robotics in general, not just military robotics. It sets back the industry, it sets back the hobby, and it makes it harder for things like household and medical robots to get accepted into daily life.

[ The Independent ]
