      © Copyright 2022 IEEE — All rights reserved. A not-for-profit organization, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity.

      How to Build a Moral Robot

      If robots are going to drive our cars and play with our kids, we’ll need to teach them right from wrong

      Kristen Clark
      31 May 2016
      Image: Kristen Clark

Whether it’s in our cars, our hospitals, or our homes, we’ll soon depend upon robots to make judgment calls in which human lives are at stake.

That’s why a team of researchers is attempting to model moral reasoning in a robot. In order to pull it off, they’ll need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices? How can we equip robots with the communication skills to explain their choices in a way that we can understand? And would we even want robots to make the same decisions we’d expect humans to make?

      TRANSCRIPT

      NARRATOR: How do you teach a robot right from wrong?

      It’s a question straight out of a sci-fi movie—but it’s also something we may have to grapple with a lot sooner than you might think.

Take a self-driving car that has to choose between hitting a child and slamming its own passenger into a barrier.

      Or imagine a rescue robot that detects two injured people in the rubble of an earthquake, but knows it doesn’t have time to save both.

      BERTRAM MALLE: How does that robot decide which of these people to try to save first? That’s something we as a community actually have to figure out.

NARRATOR: It’s a moral dilemma, which is why a team of scientists is attempting to build moral robots.

      If autonomous robots are going to hang with us, we’re going to have to teach them how to behave—which means finding a way to make them aware of the values that are most important to us.

Matthias Scheutz is a computer scientist at Tufts who studies human-robot interaction—and he’s trying to figure out how to model moral reasoning in a machine.

      But with morals, things get messy pretty quickly. Even as humans, we don’t really have any concrete rules about what’s right and wrong—at least, not ones we’ve managed to agree upon. What we have instead are norms—basically thousands of fuzzy, contradictory guidelines. Norms help us predict the way the people around us will behave, and how they’ll want us to behave.

      MATTHIAS SCHEUTZ: Right now the major challenge for even thinking about how robots might be able to understand moral norms is that we don’t understand on the human side how humans represent and reason if possible with moral norms.

      NARRATOR: The big trick—especially if you’re a robot—is that none of these norms are absolute. In one situation, a particular norm or value will feel extremely important. But change the scenario, and you completely alter the rules of the game.

      So how can we build a robot that can figure out which norms to follow, and when?

That’s where the social psychologists at Brown University come in. They’ve started by compiling a list of words, ideas, and rules that people use to talk about morality—a basic moral vocabulary. The next step is figuring out how to quantify this vocabulary: How are those ideas related and organized in our minds?

      One theory is that the human moral landscape might look a lot like a semantic network, with clusters of closely related concepts that we become more or less aware of depending on the situation.

      MALLE: Our hypothesis is that in any particular context, a subset of norms is activated—a particular set of rules related to that situation. That subset of norms is then available to guide action, to recognize violations, and allow us to make decisions.
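The activated-subset idea above can be sketched as a small weighted graph. Everything here—the norms, the link strengths, and the contexts—is invented for illustration, not taken from the Brown team’s actual model:

```python
# Hypothetical norm network: each norm links to related norms with a
# strength in [0, 1]. Closely related norms cluster together.
NORM_NETWORK = {
    "don't litter": {"protect nature": 0.9, "respect others": 0.6},
    "protect nature": {"don't litter": 0.9},
    "respect others": {"keep noise down": 0.7, "don't litter": 0.6},
    "keep noise down": {"respect others": 0.7},
    "watch your children": {"respect others": 0.5},
}

# Hypothetical contexts, each seeding the norms most directly tied
# to that situation.
CONTEXT_SEEDS = {
    "beach": ["don't litter", "watch your children"],
    "library": ["keep noise down"],
}

def activate(context, threshold=0.5):
    """Return the subset of norms activated in a context: the seed
    norms plus any neighbor linked with strength >= threshold."""
    active = set(CONTEXT_SEEDS.get(context, []))
    for norm in list(active):  # spread activation one step outward
        for neighbor, strength in NORM_NETWORK.get(norm, {}).items():
            if strength >= threshold:
                active.add(neighbor)
    return active

print(sorted(activate("beach")))
# The beach context pulls in related norms like "respect others",
# while library-specific norms such as "keep noise down" stay dormant.
```

Switching the context argument changes which cluster lights up, which is the point of the hypothesis: the situation, not a fixed rulebook, selects the norms that guide action.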

NARRATOR: The key here is that the relationships between these subnetworks are actually something you can measure. Malle starts off by picking a scenario—say, a day at the beach—and asking a whole bunch of people how they think they’re supposed to behave. What are they supposed to do? And what are they absolutely not supposed to do?

      The order in which the participants mention certain rules, the number of times they mention them, and the time it takes between mentioning one idea and another—those are all concrete values. By collecting data from enough different situations, Malle thinks he’ll be able to build a rough map of a human norm network. In the future, a robot might come equipped with a built-in version of that map. That way it could call up the correct moral framework for whatever situation is at hand.

      But even if that robot could perfectly imitate a human’s decision making process—is that something we’d really want? Malle suspects that we might actually want our robots to make different decisions than the ones we’d want other humans to make. To test this, he asks his research subjects to imagine a classic moral dilemma.

      Picture a runaway trolley in a coal mine, that’s lost use of its brakes. The trolley has four people on board and is hurtling toward a massive brick wall. There’s an alternate safe track, but a repairman is standing on it—and he’s oblivious to what’s happening.

Another worker nearby sees the situation. He can pull a lever that would switch the train onto the second track, saving the passengers in the trolley but killing the repairman. He has to choose.

MALLE: So the fundamental dilemma is will you intervene and kill one person to save four? Or are you going to let fate take its course, and most likely four people will die?

      NARRATOR: Malle presents this scenario a few different ways: some of the participants watch a human make the decision, some see a humanoid robot, and some see a machine-like robot. Then he asks participants to judge the decision the worker made.

Generally, participants blame the human worker more when he flips the switch—saving four lives but sacrificing one—than when he does nothing. Apparently, watching another person make a cold, calculated decision to sacrifice a human life makes us kind of queasy.

But evidence suggests that we might actually expect a robot to flip the switch. The participants in Malle’s experiment blamed the robot more if it didn’t step in and intervene. And the more machine-like the robot looked, the more they blamed it for letting the four people die.

      There’s one more interesting twist to this. If the robot or human in the story made an unpopular decision—but then gave a reason for that choice—participants blamed that worker less.

      And this is really, really important, because it gets at a fundamental skill that robots are going to need: communication.

      Back in Matthias Scheutz’s lab at Tufts, they’re working on that exact problem. They’ve programmed a little autonomous robot to follow some simple instructions: it can sit down, stand up, and walk forward.

      But they’ve also given it an important rule to follow: Don’t do anything that would cause harm to yourself or others. If a researcher gives the robot an instruction that would violate that rule, the robot doesn’t have to follow that instruction. And it will tell you why it won’t.

      The researcher can then give the robot new information. And the robot will update its understanding of its little world and decide on a different course of action.

      This communication is essential because moral norms aren’t fixed. We argue and reason about morality—and often, we learn from each other and update our values as a group. And any moral robot will need to be part of that process.

We’re still a long way from building truly moral robots. But this is what the very first steps might look like.

      NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s video programming is the video version.
