MIT introduced Kombusto, its dragon robot designed to teach preschoolers, back in 2011. Since then, the Personal Robots Group has been doing a substantial amount of research and experimentation to figure out how best to use the robot to productively interact with children. We have some updates on how it’s been going, along with a look at the brand new robot that MIT is developing to work with kids for months at a time.
The overall goal for DragonBot (which, as far as I can tell, is a common platform used for many different projects) is to develop “personalized learning companions” for children. In other words, MIT is finding ways in which robots like DragonBot can effectively help kids learn.
DragonBot isn’t intended to work like that IBM Watson-based dinosaur robot; it’s not a primary source of knowledge, and it’s not actively teaching a whole bunch of new facts to kids who use it. Rather, DragonBot is intended to help with the process of learning itself, encouraging kids to be interactively engaged in whatever they happen to be learning about.
DragonBot is powered entirely by the Android phone that forms its face. The phone contains all the sensors the robot uses and performs all of the processing, which helps keep the cost of everything very low: in 2011, the DragonBot platform was estimated to cost about $1,000, which is perhaps expensive for individual consumers but affordable for schools. Being phone-based also gives DragonBot the ability to leverage the cloud, such that one DragonBot interacting with one child can learn from the experiences of all the other DragonBots interacting with other children, everywhere.
So, what does DragonBot do? Here are some examples:
We look at whether a sociable robotic learning/teaching companion could supplement children’s early language education.
The [teleoperated] robot was designed as a social character, engaging children as a peer, not as a teacher, within a relational, dialogic context. The robot targeted the social, interactive nature of language learning through a storytelling game that the robot and child played together. The game was on a tablet – the tablet showed a couple characters that the robot or child could move around while telling their story. During the game, the robot introduced new vocabulary words and modeled good story narration skills.
In a microgenetic study, 17 children played the storytelling game with the robot eight times each over a two month period. With half the children, the robot adapted its level of language to the child’s level – so that, as children improved their storytelling skills, so did the robot. The other half played with a robot that did not adapt.
We found that all children learned new vocabulary words, created new stories during the game, and enjoyed playing with the robot. In addition, children in the adaptive condition maintained or increased the amount and diversity of the language they used during interactions with the robot more than children who played with the non-adaptive robot.
This project aimed to test whether children can “catch” curiosity from a social robot. In other words, does playing with a robot that behaves like a curious child make children more curious?
Parle the DragonBot exhibited three types of curiosity behavior: it was excited about learning new things, it wondered about new exploration possibilities, and it challenged the child. Children’s curiosity was assessed using three measures: uncertainty seeking, free exploration, and question generation.
Children playing with the curious robot showed more uncertainty seeking and free exploration than children playing with a non-curious robot. However, question generation was not affected by the interaction. We hypothesize that this is because Parle exhibited free-exploration and uncertainty-seeking behaviors but did not ask any questions, and thus did not prompt children to ask questions.
Both of these projects involved teleoperation of the robots. Or, to be more specific, humans were controlling the robots from behind the scenes, without the kids’ knowledge, in what’s known as “Wizard-of-Oz” (WoZ) control. This is fine for research, but in order for DragonBots to actually work with a significant number of kids outside of the lab, the bots are going to have to learn to be autonomous. This next project shows how demonstration-based learning is making that happen.
This project extends the widely used Learning from Demonstration (LfD) paradigm to the domain of freeform social interaction. We developed a computational and experimental framework that records these WoZ demonstrations, and uses a hierarchical logistic regression model that allows the robot to choose actions autonomously, in the style of the human demonstrator. Because the demonstrations are sourced from Wizard-of-Oz interactions, we refer to our method as Learning from the Wizard (LfW).
Eighty-five participants took part in a randomized experiment with three conditions that differed by what the child played with: Tablet-Only, without the robot; WoZ, with the teleoperated robot; and Autonomous, with the robot acting according to its learned model.
The randomized evaluation experiment showed that the autonomous robot was able to successfully learn pedagogically important social behaviors. Overall, the WoZ robots (WoZ-train and WoZ-Experiment) were very similar to the autonomous robot, and all of them were markedly different from interaction with a tablet alone. We also found that children interacted similarly with the autonomous and human-controlled robots, treated the autonomous robot more like a peer, and were more likely to want to play with the autonomous robot again.
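To make the Learning from the Wizard idea concrete, here’s a minimal sketch of a two-level hierarchical policy trained on logged Wizard-of-Oz demonstrations: a top-level classifier picks an action category from the interaction state, and a per-category classifier picks the concrete action, mimicking the human demonstrator. The feature names (`child_engagement`, `child_just_spoke`), the action vocabulary, and the plain gradient-descent trainer are all illustrative assumptions of mine, not MIT’s actual feature set or implementation.

```python
# Sketch of "Learning from the Wizard" (LfW): fit classifiers on logged
# (state, category, action) triples from teleoperated WoZ sessions, then
# choose actions autonomously in the style of the human demonstrator.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class SoftmaxClassifier:
    """Multinomial logistic regression trained with batch gradient descent."""
    def __init__(self, n_features, labels, lr=0.5, epochs=500):
        self.labels = list(labels)
        self.w = [[0.0] * (n_features + 1) for _ in self.labels]  # last weight = bias
        self.lr, self.epochs = lr, epochs

    def _scores(self, x):
        xb = list(x) + [1.0]
        return [sum(wi * xi for wi, xi in zip(row, xb)) for row in self.w]

    def fit(self, X, y):
        for _ in range(self.epochs):
            grad = [[0.0] * len(self.w[0]) for _ in self.labels]
            for x, label in zip(X, y):
                probs = softmax(self._scores(x))
                xb = list(x) + [1.0]
                for k, lab in enumerate(self.labels):
                    err = probs[k] - (1.0 if lab == label else 0.0)
                    for j, xj in enumerate(xb):
                        grad[k][j] += err * xj
            for k in range(len(self.labels)):
                for j in range(len(self.w[k])):
                    self.w[k][j] -= self.lr * grad[k][j] / len(X)

    def predict(self, x):
        probs = softmax(self._scores(x))
        return self.labels[max(range(len(probs)), key=probs.__getitem__)]

class WizardPolicy:
    """Hierarchy: a top classifier picks an action category from the
    interaction state; a per-category classifier picks the action."""
    def __init__(self, demos):  # demos: [(features, category, action), ...]
        n = len(demos[0][0])
        cats = sorted({c for _, c, _ in demos})
        self.top = SoftmaxClassifier(n, cats)
        self.top.fit([f for f, _, _ in demos], [c for _, c, _ in demos])
        self.sub = {}
        for c in cats:
            rows = [(f, a) for f, cc, a in demos if cc == c]
            clf = SoftmaxClassifier(n, sorted({a for _, a in rows}))
            clf.fit([f for f, _ in rows], [a for _, a in rows])
            self.sub[c] = clf

    def act(self, features):
        return self.sub[self.top.predict(features)].predict(features)

# Hypothetical WoZ log; features are [child_engagement, child_just_spoke].
DEMOS = [
    ([0.9, 1.0], "speech", "ask_followup"),
    ([0.8, 1.0], "speech", "ask_followup"),
    ([0.9, 0.0], "speech", "introduce_word"),
    ([0.7, 0.0], "speech", "introduce_word"),
    ([0.2, 0.0], "expression", "excited_bounce"),
    ([0.1, 1.0], "expression", "excited_bounce"),
    ([0.3, 0.0], "expression", "excited_bounce"),
]
policy = WizardPolicy(DEMOS)
```

The point of the hierarchy is that the wizard’s choices factor naturally: the context mostly determines *what kind* of response is appropriate, and a finer-grained model then picks the specific behavior within that kind.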
Now that DragonBot has some skills, it’s time to unleash it on kids. To do that, it’s going to need some optimization and robustification, and MIT has opted to go with a completely new design called Tega, a “real-world ready social robot.” Here's the concept:
Tega has 5 degrees of freedom, and uses the same sort of phone-based brain that was successful with DragonBot. Here’s what the alpha prototype looks like, at least as of mid-January:
When completed, Tega will be sent out on a three month deployment for interaction with children. I’m not sure if that means three months of unsupervised interaction between kids and this robot, but if so, I’m not sure how much robustness could possibly be enough.
Also, the MIT Personal Robots Group is led by Cynthia Breazeal, who’s now busy developing Jibo, another social and educational robot, so there’s a lot of overlap between Jibo and Tega. Will the projects help each other, or compete?
Whatever happens, we’re definitely looking forward to seeing a completed version of Tega, and also seeing how much of a difference it can make to kids who need it.
Evan Ackerman is the senior writer for IEEE Spectrum's award-winning robotics blog, Automaton. Since 2007, he has written over 6,000 articles on robotics and emerging technology, covering conferences and events on every single continent except Antarctica (although he remains optimistic). In addition to Spectrum, Evan's work has appeared in a variety of other online publications including Gizmodo and Slate, and you may have heard him on NPR's Science Friday or the BBC World Service if you were listening at just the right time. Evan has an undergraduate degree in Martian geology, which he almost never gets to use, and still wants to be an astronaut when he grows up. In his spare time, he enjoys scuba diving, rehabilitating injured raptors, and playing bagpipes excellently.