Robots that can learn from demonstrations are capable of watching a human do something and then copying (or even improving on) the human's motions in order to learn new tasks. This works fine if you're good at the task you're trying to teach the robot, but if you're bad at it, you and your robot student are going to run into some problems.
Daniel H. Grollman and Aude Billard from the Learning Algorithms and Systems Laboratory at EPFL, in Lausanne, Switzerland, are working on ways for robots to learn from demonstrations, even if those demonstrations are failures. In the following video, a human shows a robot how to prop up a block and toss a ball into a basket without actually succeeding at either task:
The researchers developed learning algorithms that allow the robot to analyze your behavior and mathematically determine which parts of the task you're getting right (or think you're getting right) and where you're screwing up, eventually teaching itself to perform the task better than you can. At the moment, the robot isn't using an adaptive learning approach; it just tries different things until it accomplishes the objective. But part of the appeal of this system is that it uses failed human examples to determine the extent of what it should try. I can almost hear a robotic voice saying, "Human, it's okay to fail."
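To get a feel for the idea, here's a toy sketch (not the authors' actual algorithm, and all the numbers are made up): failed human demonstrations of a single control parameter, say throw strength, tell the robot roughly where to explore, and it keeps sampling in that neighborhood until an attempt succeeds.

```python
import random

# Toy illustration only: failed human demonstrations of a 1-D
# control parameter (e.g., throw strength) bound the region the
# robot explores on its own.

failed_demos = [0.35, 0.42, 0.50]  # parameter values the human tried (all failed)

def succeeded(strength):
    # Hypothetical success test: the true "sweet spot" is unknown to the robot.
    return 0.55 <= strength <= 0.60

# Explore around (and slightly beyond) the range the human covered.
lo, hi = min(failed_demos) - 0.1, max(failed_demos) + 0.2

random.seed(0)
attempt = None
for tries in range(1, 10001):
    cand = random.uniform(lo, hi)
    if succeeded(cand):
        attempt = cand  # first successful parameter found
        break
```

The failed demos never show the robot the right answer; they just tell it which neighborhood is worth searching, which is far cheaper than exploring blindly.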
Grollman and Billard describe their work in a paper, "Donut As I Do: Learning From Failed Demonstrations," presented last week at the IEEE International Conference on Robotics and Automation (ICRA) in Shanghai, where they were honored with the Best Cognitive Robotics Paper award. Congrats!
[ Post updated to correct for the fact that the robot can't yet infer what your overall goal is... But they're working on it! ]