Kevin Kelly from Wired recently interviewed Jamie Hyneman and Adam Savage from Mythbusters for the Commonwealth Club of California. As part of the interview, Adam and Jamie were asked (somewhat jokingly) whether they're afraid that machines will take over in the future, particularly with regard to the present development of armed robots.
Now, if you’ve been reading this blog for a while, you’re probably aware that this is one of my favorite subjects to harp on, and as much as I
::cough:: like-in-a-strictly-professional-manner Adam Savage, I won’t let you down.
So, is it a bad idea to give a machine a gun? Of course it is. It's a terrible idea. But guns were a terrible idea to begin with (from a lofty ethical viewpoint, anyway). The terrible part about guns is that they can kill people, and not giving guns to robots isn't going to change that. Really, the question should be: is giving guns to machines a better or worse idea than not giving guns to machines? This is where I think armed robots have a use.
I guess fundamentally, the part that I don't understand is where Adam says that he knows how machines work and he wouldn't trust a machine with a gun. I wonder, though, whether we have a better idea of how most machines work than of how some humans work… Like, it's a fundamental right for humans to have guns, and there are plenty of humans out there who are far less predictable or reliable than a robot. It's certainly true that robots are more prone to things like mechanical failures, but we already entrust our lives to robots on a daily basis (often without realizing it). Giving a robot a gun is just an especially obvious way of making it dangerous.
I hate to keep coming back to this analogy, but it's like driving a modern car: between things like anti-lock brakes and cruise control and (now) parking and lane assist features, your car (if you have a fancy one) has the ability to control your brakes, your accelerator, and your steering. If you have power windows and door locks, it has control over those things, too. It's only designed to be autonomous in very specific situations, but what we're talking about here is mechanical (or software) failure. And generally, that just doesn't happen, because cars have been designed and tested with safety and reliability in mind. I don't see why it would be any different with armed semi-autonomous (or even autonomous) robots.
Whether or not it's ethical to arm robots is (I'd like to think) a separate issue. The short answer? No, it's not. As has been pointed out, arming a robot makes it easier to resort to violence, since the consequences for the side deploying it are much less severe. Some people might even argue that that reason alone should keep robots out of combat, but I doubt that those people are going into combat themselves. My guess (and this is really just a guess, since I'm in no way qualified to make any other sort of comment) is that if some kind of armed conflict is inevitable and there is substantial risk of injury or death, most people who'd be directly involved in that conflict would rather send a robot in their place if possible. And that's really what it's all about: robots doing the dangerous things so that humans don't have to.