MIT’s Personal Robotics Group has been one of the driving forces behind social robotics since… well, since they pretty much invented social robotics. Led by Professor Cynthia Breazeal, who is also founder of social robot startup Jibo, the MIT group has built an amazing collection of smart, cute, and squishy creatures, and now they have a new one. The latest, smartest, cutest, and squishiest social robot that MIT has been testing out is named Tega, and it’s already gotten to work, adorably teaching Spanish to preschoolers.
We spoke with Jackie Kory Westlund, a Ph.D. student in the MIT Media Lab who’s been doing research with Tega, about why it’s such a useful social assistive robotics platform and how to keep preschoolers from utterly destroying it with hugs.
To provide some context for Tega, have a look at a couple of the other robots developed by MIT’s Personal Robotics Group, which is really just an excuse to post one of my favorite robot videos of all time:
You can sort of imagine that Dragonbot and Tofu maybe got extra cuddly one lonely night at the Media Lab, and Tega was the result. Or at least, I can imagine that. Don’t judge me. But seriously, MIT started working on Tega a year or two ago as an optimized and more robust version of Dragonbot that would be more practical to use unsupervised with kids. Tega retains Dragonbot’s Android phone brain, paired with a more Tofu-like body that can squash and stretch. The robot has 5 degrees of freedom: head up/down, waist tilt left/right, waist lean forward/back, full-body up/down, and full-body left/right. Through skilled and creative animations (programmed by Fardad Faridi, who’s now working on Jibo), Tega can display a wide variety of social emotions, as demonstrated by this half-naked beta version:
In a paper just presented at the 30th AAAI Conference on Artificial Intelligence, Goren Gordon, Samuel Spaulding, Jacqueline Kory Westlund, Jin Joo Lee, Luke Plummer, Marayna Martinez, Madhurima Dasa, and Cynthia Breazeal, from MIT and Tel-Aviv University in Israel, describe how they sent Tega out into the wild, allowing 34 preschoolers (ages 3-5) to interact with the robot over a period of two months:
Tega is specifically designed to work with kids for extended periods. It’s much more robust than MIT’s earlier social robots, and it’s being aggressively play-tested by any children who can get their hands on it, although Kory Westlund tells us that it’s not quite ready to be left alone without adult supervision. Tega can run autonomously for hours at a time, using an accompanying tablet app to help it interact more directly with users. The kids mostly interact with a toucan character on the tablet, while Tega provides expressive feedback, helping them learn new words in Spanish.
Having two characters like this was a deliberate choice, because it allowed the toucan to become the teacher, while Tega was more of a peer and teammate for the kids. During the study, Tega gave pre-recorded verbal instructions and hints along with general encouragement, and would adapt its gaze to provide social cues to help the user know where and how they should be interacting with the system. Meanwhile, software was tracking the facial expressions of the kids and estimating their general emotional states, which was fed back into Tega’s behaviors.
Using social robots fully autonomously in social situations is very difficult, as Kory Westlund explains: “You can’t create a robot that can do everything in a free-flowing play scenario. You have to constrain the situation to what the robot can support.” As the researchers get more experience with how kids interact with Tega, the amount of stuff that it’ll be able to do by itself will increase.
Results suggested that Tega was effective, in that the preschoolers learned new Spanish words, and the words they were most likely to remember were the ones used most frequently in their interactions with Tega. What’s much more interesting, however, is that Tega’s physical behaviors had significant effects on the kids’ general positive feelings towards it (“valence,” in social science terms). Tega was able to increase valence by nodding, leaning over as if interested, or making happy noises, or decrease it through an expressive sad behavior.
Long term, the researchers are hoping that Tega can learn from each of the kids it interacts with, and autonomously tailor its physical expressions to what makes them most comfortable in order to optimize their learning experience. Kory Westlund sees this as a valuable way to augment how students are already learning, rather than a replacement for teachers:
“The goal of a social assistive robot such as Tega is not to compete with tablet apps or human teachers, but to complement and supplement in areas where robots can do the most good. For example, learning language skills is an inherently social task that is not as well suited to an app on its own. The interactive, social way in which Tega and similar robots can engage children makes them fundamentally different from apps. We have been exploring how a social robot such as Tega can moderate and supplement human-human learning as a tutor or peer learning companion. Robots and humans are good at different things. For example, robots can be easily personalized for individual children’s needs and skill levels, and can provide additional support or practice for children who may need it. Robots can never replace human teachers, nor would we want them to!”
Human teachers, your jobs are safe.
[ Tega ]
Special thanks to Jackie Kory Westlund for speaking with us.