Polite Robot Overlords Will Be More Persuasive

To be nice, robot rhetoric should be less precise


Me Talk Pretty One Day: Experiments with Snackbot and others show that how robots communicate makes a big difference in human attitudes toward them. Photo: Jiuguang Wang/Flickr

Baking cupcakes can be as much a matter of social interaction as of mechanical technique. Never is this more true than when your kitchen partner is a robot. A robot’s always-right, ego-deflating advice can be off-putting, report social psychologist Sara Kiesler and her colleagues at Carnegie Mellon University, in Pittsburgh. But having robots employ a different type of rhetoric could help soften the blow.

In one study, Kiesler’s former student Cristen Torrey, now at Adobe, observed how expert bakers shared advice with less-experienced volunteers. She recorded the interactions and extracted a few different approaches the experts used. For instance, “likable people equivocate when they are giving help,” Kiesler says. That is, they say things such as “Maybe you can try X” rather than simply “Do X.” They also soften their advice with extraneous words such as “Well, so, you can try X.”
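In software terms, that rewrite is almost trivially simple. Here’s a minimal sketch, ours rather than anything from the study, of a dialogue layer that softens a robot’s bare commands into hedged advice before they’re spoken; the phrase lists and the soften function are invented for illustration:

```python
import random

# A minimal sketch, not from the study: rewrite a robot's bare
# command as the kind of hedged, equivocal advice the experts used.
HEDGES = ["maybe you can", "perhaps you could", "you might want to"]
FILLERS = ["Well,", "So,", "Well, so,"]

def soften(command: str) -> str:
    """Turn 'Fold the batter.' into, e.g., 'Well, maybe you can fold the batter.'"""
    advice = command[0].lower() + command[1:].rstrip(".")
    return f"{random.choice(FILLERS)} {random.choice(HEDGES)} {advice}."

print(soften("Fold the batter gently."))
# e.g. "Well, so, you might want to fold the batter gently."
```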

So Torrey filmed scenarios of her own in which either robots or people shared advice with actors pretending to learn how to bake, using various combinations of the experts’ language. Then she asked a new group of volunteers to watch the videos and rate how likable, controlling, and competent the advisors were. The researchers found that equivocation, or hedging, made the advisors appear more competent, less controlling, and more likable. The effect was even stronger for the robots, suggesting that people find robots less threatening than humans when the robots use humanlike language. Kiesler presented some of these results on 4 March at the ACM/IEEE International Conference on Human-Robot Interaction, in Tokyo.

“I think this is quite important, and most people who come maybe from a more engineering perspective, or computer science or technical perspective, are not paying attention to that,” says computer scientist Kerstin Dautenhahn of the University of Hertfordshire, in England. Dautenhahn works with experimental companion robots designed to help older people with daily tasks. “People don’t normally ask the question, ‘What can we do to make elderly people want to be helped?’ ” she says.

Part of why the volunteers accepted the robots’ advice might be that they were less worried about incurring an obligation or harming their own reputations when interacting with nonhuman helpers. That’s not to say that people can’t build up a relationship characterized by increasingly sophisticated expectations with robots over time. In fact, another of Kiesler’s students, Min Kyung Lee, found that they do. She ran a study with a robot called Snackbot. The 1.4-meter-tall white-and-orange-trimmed robot rolled around an office with a tray of goodies, speaking with office workers and offering them snacks.

Her study sought to understand what factors help a robot build a rapport with humans. Just as in human relationships, she found that variety helped. With half the workers, Snackbot referred to previous encounters, building up a shared social history. Three-quarters of the human participants reported that they liked the pseudosocial interactions. Lee calls that result, which she presented at the same conference last year, her most exciting, since other human-robot interaction studies have found that most people grow bored with robots that repeat themselves.
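In code, Snackbot’s shared-history trick could be as simple as the sketch below. This is our own guess at the mechanism, not Lee’s implementation; the class, the greetings, and the names are all invented:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "shared history" condition: the robot
# logs each encounter and works a reference to the previous one into
# its next greeting, rather than repeating a stock line.
@dataclass
class SnackMemory:
    last_snack: dict = field(default_factory=dict)  # person -> last snack taken

    def greet(self, person: str, todays_snack: str) -> str:
        previous = self.last_snack.get(person)
        self.last_snack[person] = todays_snack
        if previous is None:
            return f"Hi {person}! Today I have {todays_snack}."
        return (f"Hi {person}! Last time you took the {previous}. "
                f"Today I have {todays_snack}.")

bot = SnackMemory()
print(bot.greet("Dana", "cookies"))  # first encounter: stock greeting
print(bot.greet("Dana", "apples"))   # later: references shared history
```

The design point is that the robot stores something specific about each person and surfaces it later, so the same greeting routine never plays back exactly the same way twice.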

That’s useful information for coaches, dieticians, and designers of future robotic advisors, such as Autom. Such robots will need to learn to listen well, for starters. At first, Snackbot’s real-world voice recognition was so bad that its handlers ended up piping its audio to a remote operator, who selected responses from a prerecorded menu of statements. But today’s artificial intelligence is good enough to let a robot interact with humans more independently, says Dautenhahn. Those robots will need to borrow humans’ tricks for cultivating relationships and staying polite. Kiesler says her next step is to personalize how a robot interacts with each individual, much as an intern might learn to behave differently around a cheerful mentor or a grumpy boss.
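That stopgap is a classic Wizard-of-Oz setup, and it’s simple enough to sketch. Everything below, from the recognizer and operator interfaces to the confidence threshold, is an assumption of ours rather than Snackbot’s actual design:

```python
# Illustrative Wizard-of-Oz fallback, loosely modeled on the Snackbot
# workaround described above; all names and thresholds are assumptions.
CANNED = [
    "Would you like a snack today?",
    "Good choice! Enjoy.",
    "See you next time!",
]

def reply(audio, recognizer, operator, threshold=0.8):
    """Answer autonomously when speech recognition is confident;
    otherwise defer to a remote human operator."""
    text, confidence = recognizer(audio)  # hypothetical ASR interface
    if confidence >= threshold:
        # Autonomous path: a trivial keyword match stands in for real dialogue logic.
        return CANNED[1] if "yes" in text.lower() else CANNED[0]
    # Wizard-of-Oz path: stream the audio to the human operator,
    # who picks one of the prerecorded statements by index.
    return CANNED[operator(audio, CANNED)]

# Stubbed demo: a low-confidence recognition falls through to the "wizard."
print(reply(b"...", recognizer=lambda a: ("", 0.2), operator=lambda a, menu: 0))
```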

Another important step will be to train people to have realistic expectations for their robots. Kiesler says, “It’s always been surprising to me how people assume when a robot speaks that it also thinks.”

About the Author

Based in Madrid, Lucas Laursen writes about a variety of technologies—especially technologies that turn animals or plants into fuel cells. Recently, he’s been covering human-robot interactions for IEEE Spectrum. We think maybe he had a bad run-in with BigDog.

 
