Robots Learning to Cook by Watching YouTube Videos

Robot, watch this YouTube video and then make me a sandwich

Photo: University of Maryland

In the hierarchy of things that I want robots to do for me, cooking dinner is right up there with doing the laundry and driving my car. And writing all my articles. For now, the best we can do is just watch progress being made toward getting all of these things to work reliably (and affordably). We’ve seen plenty of examples of robots that can cook, but generally, they’re all following some level of pre-programmed instructions. Telling robots what to do and how to do it is one of the trickiest things about robotics, especially for end users, so it’s a good thing we can all just sit back and let them learn things by watching videos on YouTube.

This project is taking place at the University of Maryland, and this video does a very good job of not really saying all that much over the course of 2 minutes, but here it is anyway:

The research we’re talking about here is from a paper titled “Robot Learning Manipulation Action Plans by ‘Watching’ Unconstrained Videos from the World Wide Web.” The paper is really about visual processing: watching a human interacting with objects in a video, and then figuring out what that human is doing and how they’re doing it, with a final step of replicating those actions using the manipulation capabilities of a robot (Baxter, in this case).

The University of Michigan has a dataset called YouCook, which consists of 88 open-source third-person YouTube cooking videos. Each video was given a set of unconstrained natural language descriptions by humans, and each video also has frame-by-frame object and action annotations. Using these data, the UMD researchers developed two convolutional neural networks: one to recognize and classify the objects in the videos, and the other to recognize and classify the grasps that the human is using.

While object recognition is a familiar thing, recognizing grasps is important because the robot may have different end effectors that it uses for different grasping purposes, and different grasps can also provide hints about what actions might happen next. From the paper:

The grasp contains information about the action itself, and it can be used for prediction or as a feature for recognition. It also contains information about the beginning and end of action segments, thus it can be used to segment videos in time. If we are to perform the action with a robot, knowledge about how to grasp the object is necessary so the robot can arrange its effectors. For example, consider a humanoid with one parallel gripper and one vacuum gripper. When a power grasp is desired, the robot should select the vacuum gripper for a stable grasp, but when a precision grasp is desired, the parallel gripper is a better choice.
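The gripper-selection rule in that example is simple enough to sketch out. Here’s a minimal, hypothetical version in Python (the function and names are ours, not the paper’s): given a predicted grasp type, pick the end effector the paper’s example recommends.

```python
from enum import Enum

class GraspType(Enum):
    POWER = "power"
    PRECISION = "precision"

def select_gripper(grasp: GraspType) -> str:
    """Pick an end effector from the predicted grasp type.

    Follows the rule of thumb in the paper's humanoid example:
    a power grasp favors the vacuum gripper for stability, while
    a precision grasp favors the parallel gripper.
    """
    if grasp is GraspType.POWER:
        return "vacuum"
    return "parallel"

print(select_gripper(GraspType.POWER))      # vacuum
print(select_gripper(GraspType.PRECISION))  # parallel
```

The point is that the grasp classifier’s output isn’t just a label; it directly drives a hardware decision before the robot ever touches the object.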

For this particular case, grasps were divided into six types: power grasps and precision grasps, each for small, large, or spherical objects. Objects, meanwhile, were divided into 48 classes, ranging from “apple” to “whisk.” Based on the YouCook data set, the overall recognition accuracy that the system demonstrated was 83 percent, with a 68 percent success rate at translating the grasp and object combinations into commands that a robot could then execute.
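To make the taxonomy concrete, here’s an illustrative sketch of those six grasp classes (power vs. precision, crossed with the three object sizes) and a toy step that pairs one classifier’s grasp prediction with the other’s object prediction to form a grasp command. The names and command format are our assumptions for illustration, not the paper’s actual representation.

```python
from itertools import product

# The two grasp families and three object-size categories from the paper,
# crossed to give the six grasp classes.
GRASP_TYPES = ["power", "precision"]
OBJECT_SHAPES = ["small", "large", "spherical"]
GRASP_CLASSES = [f"{g}-{s}" for g, s in product(GRASP_TYPES, OBJECT_SHAPES)]
assert len(GRASP_CLASSES) == 6

def to_command(grasp_class: str, obj: str) -> str:
    """Combine the two classifiers' outputs into one grasp command
    (hypothetical format)."""
    return f"grasp({obj}, {grasp_class})"

print(to_command("power-large", "whisk"))  # grasp(whisk, power-large)
```

That pairing step is where the reported 68 percent figure lives: even when each classifier does well on its own, the combination has to be right for the robot to get an executable command.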

In future work, the researchers would like to develop finer grasp categorizations (more than just the six based on object size and whether power or precision is required), and then use those categorizations to better predict what action is happening in the video, or (ideally) what action is probably going to come next. By which we think the researchers are saying that they’re scouring YouTube for a meal that they can sit back and watch their robots cook for them.

[ Paper ] via [ DARPA ] and [ Engadget ]

