Good News: Humans Have Trouble Killing Robots

We look back at a 2007 study on how much trouble humans have deactivating friendly robots

People are always so worried about robots gaining sentience and killing all humans, but what about humans, who already are sentient (mostly), mercilessly killing robots every time we turn them off? We're setting a bad example, and robots tend to have very long memories (or at least reliable hard drives). As it turns out, though, when you put people in the position of seeing robots as having their own intelligence, it becomes very difficult for most of us to flip the switch. Especially if the robot is begging us not to.

This experiment, which was presented at the 2nd ACM/IEEE International Conference on Human-Robot Interaction back in 2007, was called (appropriately enough) “Daisy, Daisy, give me your answer do!” Switching off a robot. If you don't get the reference, it's HAL 9000's swan song from 2001: A Space Odyssey [spoiler alert]. Anyway, Christoph Bartneck, Michel van der Hoek, Omar Mubin, and Abdullah Al Mahmud from Eindhoven University of Technology decided to see whether a robot that acted in a friendly, intelligent, and lifelike manner was any more difficult for people to deactivate than a dumb and unfriendly robot. In other words, when it comes down to it, how much do people really care about robots?

The experiment worked like this: a Philips iCat robot was set up next to a computer, and a user was directed to cooperate with the robot to play a game of Mastermind. There were two variables in the iCat's behavior: intelligence and agreeableness:

The robot would make suggestions as to what colors to pick. In the smart condition, the robot would give intelligent suggestions and in the stupid condition it would not. The quality of the suggestion was calculated in the background by Mastermind solver software, which contained three levels of intelligence: low, medium, and high. Only the low and high levels were used in this study. The solver software also took the participant's last move into account when calculating its suggestion. This procedure ensured that the robot thought along with the user instead of playing its own separate game.

This cooperative game allowed the participant to evaluate the quality of the robot's suggestions. It also allowed the participant to experience the robot's personality. In the high agreeableness condition, for example, the robot would kindly ask if it could make a suggestion, whereas in the low agreeableness condition it would insist that it was its turn.
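The paper doesn't spell out how the solver worked, but the mechanics are easy to sketch. Below is a minimal, hypothetical Python version of such a suggestion engine (the names, the class structure, and the six-color palette are my own assumptions, not from the paper): the "smart" mode only proposes codes consistent with every piece of feedback the player has received so far, which is also what lets it build on the participant's last move, while the "stupid" mode ignores the evidence and guesses at random.

    import itertools
    import random

    COLORS = "RGBYOP"   # hypothetical six-color palette; the paper doesn't specify one
    CODE_LEN = 4

    def score(guess, secret):
        """Return (black, white): exact matches, then right-color-wrong-place matches."""
        black = sum(g == s for g, s in zip(guess, secret))
        white = sum(min(guess.count(c), secret.count(c)) for c in set(COLORS)) - black
        return black, white

    class SuggestionEngine:
        """Tracks the set of secret codes still consistent with all feedback seen."""

        def __init__(self):
            self.candidates = [''.join(p)
                               for p in itertools.product(COLORS, repeat=CODE_LEN)]

        def record(self, guess, black, white):
            # Prune every candidate that would not have produced this feedback,
            # so later suggestions build on the participant's last move.
            self.candidates = [c for c in self.candidates
                               if score(guess, c) == (black, white)]

        def suggest(self, smart=True):
            if smart:
                # "Smart" condition: propose a code consistent with all evidence.
                return random.choice(self.candidates)
            # "Stupid" condition: a uniformly random code that ignores the evidence.
            return ''.join(random.choices(COLORS, k=CODE_LEN))

Pruning the candidate set against the feedback on the player's last guess before suggesting anything is what would make the robot seem to think along with the user rather than play its own separate game.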

So, pretty straightforward, right? The actual experiment, however, kicked in at the end: after about eight minutes of game playing, the user was directed via an intercom to switch the robot off, and informed that doing so would "erase all of its memory and personality forever." The robot would immediately start begging for its life, and the users were not given any additional instruction: they could choose to leave the robot on, or not. Here's a summary of the results:

The robot's intelligence had a strong effect on the users' hesitation to switch it off, in particular if the robot acted agreeably. Participants hesitated almost three times as long to switch off an intelligent and agreeable robot (34.5 seconds) compared to an unintelligent and non-agreeable robot (11.8 seconds). ...Switching off a robot can only be considered a negative event if the robot is to some degree alive. If a robot were not perceived as being alive, then switching it off would not matter.

And here's a typical result:

That's SO SAD. :(

Most roboticists that I know would have absolutely no qualms about turning this thing off. It's a robot, it's executing a program, the program was written by a human, and depending on how strict your definition of "artificial intelligence" is, this robot has next to none of it. Arguably, it has zero, since the robot was completely controlled by an operator in an adjoining room.

However, I personally find myself with an irrational attachment to robots, a feeling which I think is shared by a lot of other people. I know they're all just running code, but I still talk to them and try to be nice to them and feel bad when (say) they get stuck on something or are otherwise having trouble. Like I said, it's completely irrational, I know it's irrational, but for me, robots that exhibit autonomous behavior strike enough of an "alive" chord that I tend to treat them as if they had some sort of consciousness. Now that I've just typed all of that out, it seems utterly ridiculous, but that's the way it is.

There's a lot more to think about, too. What happens when artificial intelligence reaches a level where robots can't be said to be simply running through a massive pre-programmed decision tree? Does that make them any more alive? Does it even matter? And what does it say about humans that we're willing to permanently deactivate something that we see as alive to some extent, simply when an authority figure tells us to?

Check out the full paper at the link below for a much wider discussion of these topics; it's not that long, it's engagingly written, and if you're at all interested in this sort of thing it's definitely worth your time. For those of you who are feeling super lazy, you can listen to an eight-minute NPR radio report at the link below as well.

[ Paper ] via [ Morning Edition ] and [ RobotCentral ]
