Robotics

How to Build a Moral Robot

If robots are going to drive our cars and play with our kids, we’ll need to teach them right from wrong

Image: Kristen Clark

Whether it’s in our cars, our hospitals or our homes, we’ll soon depend upon robots to make judgment calls in which human lives are at stake.

That’s why a team of researchers is attempting to model moral reasoning in a robot. In order to pull it off, they’ll need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices? How can we equip robots with the communication skills to explain their choices in a way that we can understand? And would we even want robots to make the same decisions we’d expect humans to make?

TRANSCRIPT

NARRATOR: How do you teach a robot right from wrong?

It’s a question straight out of a sci-fi movie—but it’s also something we may have to grapple with a lot sooner than you might think.

Take a self-driving car that has to choose between hitting a child and slamming its own passenger into a barrier.

Or imagine a rescue robot that detects two injured people in the rubble of an earthquake, but knows it doesn’t have time to save both.

BERTRAM MALLE: How does that robot decide which of these people to try to save first? That’s something we as a community actually have to figure out.

NARRATOR: It’s a moral dilemma. Which is why a team of scientists is attempting to build moral robots.

If autonomous robots are going to hang with us, we’re going to have to teach them how to behave—which means finding a way to make them aware of the values that are most important to us.

Matthias Scheutz is a computer scientist at Tufts who studies human-robot interaction—and he’s trying to figure out how to model moral reasoning in a machine.

But with morals, things get messy pretty quickly. Even as humans, we don’t really have any concrete rules about what’s right and wrong—at least, not ones we’ve managed to agree upon. What we have instead are norms—basically thousands of fuzzy, contradictory guidelines. Norms help us predict the way the people around us will behave, and how they’ll want us to behave.

MATTHIAS SCHEUTZ: Right now the major challenge for even thinking about how robots might be able to understand moral norms is that we don’t understand, on the human side, how humans represent and reason, if possible, with moral norms.

NARRATOR: The big trick—especially if you’re a robot—is that none of these norms are absolute. In one situation, a particular norm or value will feel extremely important. But change the scenario, and you completely alter the rules of the game.

So how can we build a robot that can figure out which norms to follow, and when?

That’s where the social psychologists at Brown University come in. They’ve started by compiling a list of words, ideas and rules that people use to talk about morality—a basic moral vocabulary. The next step is figuring out how to quantify this vocabulary: How are those ideas related and organized in our minds?

One theory is that the human moral landscape might look a lot like a semantic network, with clusters of closely related concepts that we become more or less aware of depending on the situation.

MALLE: Our hypothesis is that in any particular context, a subset of norms is activated—a particular set of rules related to that situation. That subset of norms is then available to guide action, to recognize violations, and allow us to make decisions.
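To make the idea concrete, here is a rough sketch of what context-dependent norm activation could look like in code. The norms, contexts, weights, and one-step spreading rule are invented for illustration; they are not the Brown team’s actual model.

```python
# A minimal sketch (not the researchers' actual model) of a norm network in
# which a situation cues some norms directly and related norms become
# partially activated through weighted links. All values are invented.

from collections import defaultdict

# Weighted edges between related norm concepts (hypothetical).
NORM_EDGES = {
    ("protect children", "supervise swimmers"): 0.9,
    ("supervise swimmers", "no running on the pier"): 0.6,
    ("protect children", "share the space"): 0.4,
    ("share the space", "keep noise down"): 0.7,
}

# Norms directly cued by a situation (hypothetical).
CONTEXT_CUES = {
    "day at the beach": {"protect children": 1.0, "share the space": 0.8},
}

def activate_norms(context, threshold=0.3):
    """Spread activation one step outward from the context's cued norms."""
    activation = defaultdict(float, CONTEXT_CUES.get(context, {}))
    for (a, b), weight in NORM_EDGES.items():
        activation[b] = max(activation[b], activation[a] * weight)
        activation[a] = max(activation[a], activation[b] * weight)
    # Only norms above the threshold count as "available" to guide action.
    return {norm: act for norm, act in activation.items() if act >= threshold}

print(activate_norms("day at the beach"))
```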

NARRATOR: The key here is that the relationships between these subnetworks are actually something you can measure. Malle starts off by picking a scenario—say, a day at the beach—and asking a whole bunch of people how they think they’re supposed to behave. What are they supposed to do? And what are they absolutely not supposed to do?

The order in which the participants mention certain rules, the number of times they mention them, and the time it takes between mentioning one idea and another—those are all concrete values. By collecting data from enough different situations, Malle thinks he’ll be able to build a rough map of a human norm network. In the future, a robot might come equipped with a built-in version of that map. That way it could call up the correct moral framework for whatever situation is at hand.
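Here is a minimal sketch of how that kind of mention data could be turned into numbers. The measurements follow the description above—order, frequency, and time between mentions—but the data format and scoring are illustrative assumptions, not Malle’s actual analysis.

```python
# A sketch of turning participants' norm mentions into rough map statistics.
# The example responses and the scoring choices below are invented.

from collections import Counter, defaultdict

# Each participant's response: a list of (seconds_into_interview, norm) events.
responses = [
    [(2.0, "watch your kids"), (4.5, "don't litter"), (5.0, "share the space")],
    [(1.5, "watch your kids"), (3.0, "share the space"), (9.0, "don't litter")],
]

def norm_statistics(responses):
    frequency = Counter()                 # how often each norm comes up
    mention_ranks = defaultdict(list)     # how early in the answer it appears
    pair_gaps = defaultdict(list)         # seconds between consecutive mentions

    for response in responses:
        for rank, (_, norm) in enumerate(response):
            frequency[norm] += 1
            mention_ranks[norm].append(rank)
        for (t1, a), (t2, b) in zip(response, response[1:]):
            pair_gaps[(a, b)].append(t2 - t1)

    # Norms mentioned often, early, and close together would be treated as
    # central or tightly linked when building the rough norm map.
    avg_rank = {n: sum(r) / len(r) for n, r in mention_ranks.items()}
    avg_gap = {p: sum(g) / len(g) for p, g in pair_gaps.items()}
    return frequency, avg_rank, avg_gap

freq, rank, gap = norm_statistics(responses)
print(freq, rank, gap, sep="\n")
```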

But even if that robot could perfectly imitate a human’s decision-making process—is that something we’d really want? Malle suspects that we might actually want our robots to make different decisions than the ones we’d want other humans to make. To test this, he asks his research subjects to imagine a classic moral dilemma.

Picture a runaway trolley in a coal mine that has lost its brakes. The trolley has four people on board and is hurtling toward a massive brick wall. There’s an alternate safe track, but a repairman is standing on it—and he’s oblivious to what’s happening.

Another worker nearby sees the situation. He can pull a lever that would switch the train onto the second track, saving the passengers in the trolley but killing the repairman. He has to choose.

MALLE: So the fundamental dilemma is will you intervene and kill one person to save four? Or are you going to let fate take its course, and most likely four people will die?

NARRATOR: Malle presents this scenario a few different ways: some of the participants watch a human make the decision, some see a humanoid robot, and some see a machine-like robot. Then he asks participants to judge the decision the worker made.

Generally, participants blame the human worker more when he flips the switch—saving four lives but sacrificing one—than when he does nothing. Apparently, watching another person make a cold, calculated decision to sacrifice a human life makes us kind of queasy.  

But evidence suggests that we might actually expect a robot to flip the switch. The participants in Malle’s experiment blamed the robot more if it didn’t step in and intervene. And the more machine-like the robot looked, the more they blamed it for letting the four people die.

There’s one more interesting twist to this. If the robot or human in the story made an unpopular decision—but then gave a reason for that choice—participants blamed that worker less.

And this is really, really important, because it gets at a fundamental skill that robots are going to need: communication.

Back in Matthias Scheutz’s lab at Tufts, they’re working on that exact problem. They’ve programmed a little autonomous robot to follow some simple instructions: it can sit down, stand up, and walk forward.

But they’ve also given it an important rule to follow: Don’t do anything that would cause harm to yourself or others. If a researcher gives the robot an instruction that would violate that rule, the robot doesn’t have to follow that instruction. And it will tell you why it won’t.

The researcher can then give the robot new information. And the robot will update its understanding of its little world and decide on a different course of action.
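A toy version of that interaction might look like the sketch below. It is not the Tufts lab’s code; the harm rule, the robot’s beliefs, and the dialogue are invented to show the refuse-explain-update loop.

```python
# A minimal sketch of an instruction filter that rejects commands expected to
# cause harm, explains the rejection, and revises its decision when a human
# supplies new information. All names and facts here are hypothetical.

class SimpleMoralAgent:
    def __init__(self):
        # The robot's (possibly incomplete) beliefs about its surroundings.
        self.beliefs = {"edge_ahead": True, "surface_past_edge": None}

    def would_cause_harm(self, action):
        """Check the 'do no harm' rule against current beliefs."""
        if action == "walk forward" and self.beliefs["edge_ahead"]:
            # Unless it knows something safe lies past the edge, assume a fall.
            return self.beliefs["surface_past_edge"] not in ("ramp", "platform")
        return False

    def execute(self, action):
        if self.would_cause_harm(action):
            return f"I cannot {action}: I believe I would fall and be damaged."
        return f"Okay, I will {action}."

    def inform(self, fact, value):
        """The human adds information; the robot updates its world model."""
        self.beliefs[fact] = value

robot = SimpleMoralAgent()
print(robot.execute("walk forward"))        # refuses and explains why
robot.inform("surface_past_edge", "ramp")   # e.g. "There is a ramp ahead."
print(robot.execute("walk forward"))        # now complies
```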

This communication is essential because moral norms aren’t fixed. We argue and reason about morality—and often, we learn from each other and update our values as a group. And any moral robot will need to be part of that process.

We’re still a long way from building a truly moral robot. But this is what the very first steps might look like.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s video programming is the video version.
