In the recent superhero film Iron Man, there’s a scene where Robert Downey Jr.’s character struggles to reach a device to power his failing heart. He stretches an arm up to the device, but collapses before he can grab it. Lucky for him, his trusty robot is nearby—it manages to anticipate what he wants and hand him the device just in time.
In the real world, we’ve yet to create artificial intelligences that can respond so intuitively to our needs. The quest to do so has pushed two groups of researchers in nearly opposite directions. One group, at Rensselaer Polytechnic Institute (RPI), in Troy, N.Y., has built Eddie, an AI that resides in the virtual world of Second Life and harnesses the power of a supercomputer to analyze a library of rules about human thinking. The other, MIT Media Lab’s Personal Robots Group, has built Leonardo, a furry, animatronic robot that learns as a child does, by interacting with people in the physical world. Within the last two years both Eddie and Leonardo have demonstrated a basic social ability that is the first step toward AI that understands how humans think.
“We’re not there yet, but a major turning point for AI is working out logic that can do justice to your views of another person’s mind,” says Selmer Bringsjord, an AI expert who heads the cognitive science department at RPI. For an artificial intelligence to fully interact and cooperate with people, it has to understand the concept of a mind separate from its own, he explains. Bringsjord and his team created Eddie with this goal in mind, and in March 2008, showed off some of its social skills in Second Life.
Eddie’s avatar met two other avatars, CrispyNoodle and BinDistrib, both controlled by humans. A red briefcase and a green briefcase lay open on a table, with the red briefcase containing a gun. While Eddie watched, CrispyNoodle asked BinDistrib to leave, then moved the gun from the red briefcase to the green one, and closed them both. When BinDistrib returned, CrispyNoodle asked Eddie to predict where BinDistrib would look for the gun. Eddie was able to correctly predict that BinDistrib would look for the gun in the red briefcase, even though it was no longer there.
The correct answer may seem obvious, but most children under 5 years old get it wrong, because they don’t understand how the other person can believe something that is untrue. Cognitive scientists use such false-belief tests to determine if a child can understand another person’s point of view—the beginning of social awareness.
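The logic behind the false-belief test can be sketched in a few lines of Python. This is only an illustrative model of the scenario described above—the agent names are from the demonstration, but the code is not Eddie’s actual implementation:

```python
# Minimal model of the false-belief test described above. Each agent's
# belief about the gun's location is updated only for events that agent
# actually witnesses. Illustrative sketch only -- not Eddie's real code.

def run_false_belief_test():
    # Everyone sees the gun placed in the red briefcase.
    beliefs = {"Eddie": "red", "CrispyNoodle": "red", "BinDistrib": "red"}

    # BinDistrib leaves; CrispyNoodle moves the gun to the green briefcase.
    # Only the agents still present update their beliefs.
    present = {"Eddie", "CrispyNoodle"}
    for agent in present:
        beliefs[agent] = "green"

    # Eddie is asked: where will BinDistrib look for the gun?
    # A correct answer reports BinDistrib's belief, not the true location.
    return beliefs["BinDistrib"]

print(run_false_belief_test())  # prints "red" -- the now-empty briefcase
```

The key move—the one most children under 5 fail to make—is answering from the absent agent’s stale belief rather than from the true state of the world.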
Bringsjord’s team helps Eddie understand other people by translating human mental states into logic-based rules and theorems—“If Bob appears happy at a particular time, and nothing happens to change that, then he will still be happy at a later time”—in what researchers refer to as a top-down approach. This type of AI can only reason about human mental states insofar as Bringsjord’s team has included them in its knowledge database.
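The rule quoted above is a persistence axiom: a mental state carries forward in time until some event changes it. A toy version, with hypothetical names and no relation to the RPI team’s actual logic system, might look like this:

```python
# Toy version of a top-down "persistence" rule for mental states: if Bob
# is happy at time t and nothing changes that, he is still happy at t+1.
# Hypothetical sketch; not the RPI knowledge base.

def project_mental_state(initial_state, change_events, horizon):
    """Propagate a mental state forward in time unless a change event
    intervenes at some step."""
    state = initial_state
    timeline = {0: state}
    for t in range(1, horizon + 1):
        if t in change_events:      # something changed Bob's mood at t
            state = change_events[t]
        timeline[t] = state         # otherwise the state persists
    return timeline

# Bob appears happy at t=0; at t=3 something makes him sad.
timeline = project_mental_state("happy", {3: "sad"}, horizon=5)
print(timeline[2], timeline[4])  # prints "happy sad"
```

The appeal of the top-down approach is that every inference is explicit and auditable; the cost, as the article notes, is that the system can only reason about states its designers have encoded.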
At MIT, Cynthia Breazeal’s Personal Robots Group has created social AI through the opposite approach—nurturing robotic intelligence through bottom-up learning, where simple imitation behaviors lead to social interaction. In a 2007 demonstration, Leonardo—which can’t walk but has 32 degrees of freedom in its expressive face alone—watched Matt Berlin, an MIT researcher, struggling to open a box that he mistakenly believed contained potato chips (it was really full of cookies). The robot responded by pulling a lever that opened the box that actually held chips.
Unlike Eddie, Leonardo has no preprogrammed knowledge of human thoughts. The robot started with a set of basic learning abilities and built-in social skills, and gradually, through imitation, learned to map certain human facial expressions or gestures to rudimentary intentions and goals. “The core systems and learning algorithms are very well known and nothing fancy, but then we get more bang for the buck,” says Berlin. Compared with Eddie, Leonardo requires relatively simple programming and little computing power to perform similar tasks.
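The bottom-up mapping from observed cues to intentions can be caricatured as a simple co-occurrence learner: see a cue paired with a goal often enough, and guess that goal the next time the cue appears. Leonardo’s actual learning systems are far richer; this sketch, with invented names and data, only illustrates the general idea:

```python
# Caricature of bottom-up learning: map observed human cues (facial
# expressions, gestures) to likely goals by counting how often each
# cue and goal appear together in demonstrations. Illustrative only;
# not Leonardo's actual learning algorithms.
from collections import Counter, defaultdict

class CueToGoalLearner:
    def __init__(self):
        # For each cue, count the goals it has been paired with.
        self.counts = defaultdict(Counter)

    def observe(self, cue, goal):
        """Record one demonstration: a cue seen with an eventual goal."""
        self.counts[cue][goal] += 1

    def guess_goal(self, cue):
        """Return the goal most often paired with this cue, or None."""
        if not self.counts[cue]:
            return None
        return self.counts[cue].most_common(1)[0][0]

learner = CueToGoalLearner()
for _ in range(3):
    learner.observe("reaching toward box", "open box")
learner.observe("reaching toward box", "push box away")
print(learner.guess_goal("reaching toward box"))  # prints "open box"
```

The point of the contrast with Eddie is that nothing about boxes or reaching is hand-coded here: the mapping emerges entirely from observed examples.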
But the RPI group has ambitious plans—in the fall they want to attempt the holy grail of AI research: a Turing test. Right now, Bringsjord believes that his team has created AI capable of second- and third-order beliefs about other minds—in other words, the AI can consider what a second intelligence believes about a third mind’s beliefs. Bringsjord’s team plans to combine Eddie’s AI program with the biographical background of a grad student and the power of IBM’s Blue Gene supercomputer to carry on a conversation with a human avatar. If a human judge can’t tell the difference between another human and the AI through online conversation, then the system will have passed the test.
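Higher-order beliefs of the kind Bringsjord describes can be written as nested propositions: “A believes that B believes that P.” A minimal sketch of that nesting, using invented structures rather than anything from Eddie’s internals:

```python
# Toy representation of higher-order beliefs: "A believes that B believes
# that P". One level of nesting is a first-order belief, two is second-
# order, three is third-order. Hypothetical structure, not Eddie's code.

def believes(agent, content):
    """Build a belief proposition: agent believes content."""
    return ("believes", agent, content)

def order(proposition):
    """Count how many minds deep a nested belief proposition goes."""
    depth = 0
    while isinstance(proposition, tuple) and proposition[0] == "believes":
        depth += 1
        proposition = proposition[2]
    return depth

# Eddie considers what CrispyNoodle believes about BinDistrib's belief:
prop = believes("Eddie",
         believes("CrispyNoodle",
           believes("BinDistrib", "gun in red briefcase")))
print(order(prop))  # prints 3
```

Each extra level of nesting multiplies the reasoning the system must do, which is one reason such inference “takes a lot of computational horsepower,” as Bringsjord notes.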
Even such a feat will be a limited victory, however. The conversation won’t stray from certain topics, and such reasoning ”takes a lot of computational horsepower,” enough to challenge even the world’s fastest supercomputer, says Bringsjord.