Are you creeped out by realistic, humanlike robots?
To pay homage to the vast assortment of anthropomorphic automatons, lifelike mannequins, and CGI humans out there, IEEE Spectrum prepared a, dare we say, beautiful slideshow. Watch our Ode to the Uncanny Valley below and then tell us about your reaction.
Many people say they find such imagery eerie, creepy, scary, freaky, frightening. One explanation for such a visceral reaction is that our sense of familiarity with robots increases as they become more humanlike—but only up to a point. If lifelike appearance is approached but not attained, our reaction shifts from empathy to revulsion.
This descent into creepiness is known as the uncanny valley. It was proposed by Japanese roboticist Masahiro Mori in a 1970 paper, and has since been the subject of several studies and has gained notoriety in popular culture, with mentions in countless YouTube videos and even on a popular TV show. The uncanny valley is said to have implications for video game design and is blamed for the failure of at least one major Hollywood animation movie.
Yet it remains a controversial notion in some robotics circles. Is it a valid scientific conjecture or just pseudoscience?
There is something appealing about a simple concept that can explain something profound about our humanity and our creations. It’s even more appealing when you see it as a graph (the one below is based on the Wikipedia version with some images added for fun; apparently the graph concocted by Mori was more elaborate, according to a note here).
You can see on both curves (solid line for still robots and dashed line for robots that move) how familiarity (vertical axis) increases as human likeness (horizontal axis) increases, until it plunges and then increases again—hence the valley in uncanny valley.
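The shape of those curves can be sketched numerically. Below is a minimal, purely illustrative Python model (the functional form and all constants are our own assumptions, not Mori's data): familiarity rises with human likeness, dips sharply near—but short of—full human likeness, then recovers, with the dip deepened for moving robots, echoing Mori's claim that movement amplifies the effect.

```python
import numpy as np

def familiarity(h, moving=False):
    """Illustrative familiarity curve for human likeness h in [0, 1].

    This is a made-up functional form, not empirical data: a linear
    rise with likeness, minus a Gaussian dip centered near h = 0.85
    (the 'valley'), which is deeper when the robot moves.
    """
    depth = 2.0 if moving else 1.2            # movement deepens the valley
    valley = depth * np.exp(-((h - 0.85) ** 2) / 0.002)
    return h - valley

h = np.linspace(0.0, 1.0, 201)
still = familiarity(h)                         # solid line in the graph
mobile = familiarity(h, moving=True)           # dashed line in the graph
```

Plotting `still` and `mobile` against `h` (e.g., with matplotlib) reproduces the familiar picture: both curves climb, plunge into a trough just before full human likeness, and climb again, with the moving-robot trough noticeably deeper.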
As a kind of benchmark, the uncanny valley could in principle help us understand why some robots are more likable than others. In that way roboticists would be able to create better designs and leap over the creepiness chasm. But what if there’s no chasm? What if you ask a lot of people in controlled experiments how they feel about a wide variety of robots and when you plot the data it doesn’t add up to the uncanny valley graph? What if you can’t even collect meaningful data because terms like “familiarity” and “human likeness” are too vague?
When Mori put forward the notion of the uncanny valley, he based it on his own ideas and assumptions about the topic. It was an interesting, prescient conjecture, given that there weren't that many humanoid robots around, let alone a CGI Tom Hanks. But as scientific hypotheses go, it was more speculation than a conclusion drawn from hard empirical data. This is what he wrote at the end of his 1970 paper:
Why do we humans have such a feeling of strangeness? Is this necessary? I have not yet considered it deeply, but it may be important to our self-preservation.
We must complete the map of the uncanny valley to know what is human or to establish the design methodology for creating familiar devices through robotics research.
In a recent Popular Mechanics article, writer Erik Sofge discusses some of the problems with the theory:
Despite its fame, or because of it, the uncanny valley is one of the most misunderstood and untested theories in robotics. While researching this month’s cover story (“Can Robots Be Trusted?” on stands now) about the challenges facing those who design social robots, we expected to spend weeks sifting through an exhaustive supply of data related to the uncanny valley—data that anchors the pervasive, but only loosely quantified sense of dread associated with robots. Instead, we found a theory in disarray. The uncanny valley is both surprisingly complex and, as a shorthand for anything related to robots, nearly useless.
Sofge talked to some top roboticists about their views of the uncanny. Cynthia Breazeal, director of the Personal Robots Group at MIT, told him that the uncanny valley is “not a fact, it’s a conjecture,” and that there’s “no detailed scientific evidence” to support it. David Hanson, founder of Hanson Robotics and creator of realistic robotic heads, said: “In my experience, people get used to the robots very quickly. ... As in, within minutes.”
Sofge also talked to Karl MacDorman, director of the Android Science Center at Indiana University, in Indianapolis, who has long been investigating the uncanny valley. MacDorman’s own view is that there’s something to the idea, but it’s clearly not capturing all the complexity and nuances of human-robot interaction. In fact, MacDorman believes there might be more than one uncanny valley, because many different factors—in particular, odd combinations like a face with realistic skin and cartoonish eyes, for example—can be disconcerting.
Hiroshi Ishiguro, a Japanese roboticist who’s created some of the most striking androids, and a collaborator, Christoph Bartneck, now a professor at Eindhoven University of Technology, conducted a study a few years ago using Ishiguro’s robotic copy, concluding that the uncanny valley theory is “too simplistic.” Here’s part of their conclusions:
The results of this study cannot confirm Mori’s hypothesis of the Uncanny Valley. The robots’ movements and their level of anthropomorphism may be complex phenomena that cannot be reduced to two factors. Movement contains social meanings that may have direct influence on the likeability of a robot. The robot’s level of anthropomorphism does not only depend on its appearance but also on its behavior. A mechanical-looking robot with appropriate social behavior can be anthropomorphized for different reasons than a highly human-like android. Again, Mori’s hypothesis appears to be too simplistic.
Simple models are in general desirable, as long as they have a high explanatory power. This does not appear to be the case for Mori’s hypothesis. Instead, its popularity may be based on the explanatory escape route it offers. The Uncanny Valley can be used in attributing the users’ negative impressions to the users themselves instead of to the shortcomings of the agent or robot. If, for example, a highly realistic screen-based agent received negative ratings, then the developers could claim that their agent fell into the Uncanny Valley. That is, instead of attributing the users’ negative impressions to the agent’s possibly inappropriate social behavior, these impressions are attributed to the users. Creating highly realistic robots and agents is a very difficult task, and the negative user impressions may actually mark the frontiers of engineering. We should use them as valuable feedback to further improve the robots.
It’s a good thing that researchers are trying to get to the bottom of the uncanny valley (no pun intended). Advancing the theory by finding evidence to support it, or disprove it, would be important to robotics because human-robot interaction and social robots are becoming ever more important. If we want to have robots around us, we need to find out how to make them more likable, engaging, and easier to interact with, and naturally their looks play a key role in that regard. Moreover, human-looking robots could be valuable tools in psychology and neuroscience, helping researchers study human behavior and even disorders like autism.
Ishiguro recently told me that the possibility that his creations might result in revulsion won’t stop him from “trying to build the robots of the future as I imagine them.” I for one admire his conviction.
What do you think? Should we continue building robots in our image?