Damage Recovery Algorithm Could Make All Robots Unstoppable

Robots break all the time, and if they could just deal with it themselves, we'd all be a lot happier

Photo: Antoine Cully/UPMC

For the last three years, we’ve been watching as the hexapods created by Antoine Cully and Jean-Baptiste Mouret have been getting increasingly difficult to put out of action. Using an exceptionally clever algorithm, the robots have demonstrated that they can shrug off absurd amounts of damage, adapting within minutes to recover their mobility even if you chop a third of their legs off. 

Today, this research has made the cover of Nature, which is a Very Big Deal (at least if you’re a scholar), and it brings along with it some updates and even more potential for the future.

We’ve covered a lot of the theory and practice behind this research over the last several years (Nature being chronically and woefully behind the times relative to IEEE Spectrum). Now Cully and Mouret, from Université Pierre et Marie Curie (UPMC), in Paris, report that their findings can be applied not only to legged robots but also to a new form factor: a planar robotic arm. This illustrates how it’s possible to endow just about any robot with resiliency via this algorithm, as long as it’s got enough degrees of freedom to enable adaptive movement.

What’s unique about this approach isn’t just its versatility, but also its speed of execution. Consider a hexapod equipped with such adaptive skills. If you’re a jerk and step on one of the robot’s legs, snapping it off, it’s going to impact how quickly and efficiently the robot moves. At this point, the robot is like, “uh oh, something bad happened, I can’t walk anymore.” But instead of having to figure out which leg is broken and how, or doing any sort of self-analysis at all, the robot simply starts trying a whole bunch of different gait behaviors through “intelligent trial and error,” converging on something that works by exploring an enormous pregenerated set of potentially effective motions in about two minutes. From the press release:

“Once damaged, the robot becomes like a scientist,” explains lead author Antoine Cully. “It has prior expectations about different behaviors that might work, and begins testing them. However, these predictions come from the simulated, undamaged robot. It has to find out which of them work, not only in reality, but given the damage. Each behavior it tries is like an experiment, and if one behavior doesn’t work the robot is smart enough to rule out that entire type of behavior and try a new type. For example, if walking mostly on its hind legs does not work well, it will next try walking mostly on its front legs. What’s surprising is how quickly it can learn a new way to walk. It’s amazing to watch a robot go from crippled and flailing around to efficiently limping away in about two minutes.”

Recovering from damage is just one application for this algorithm: it can also be used to adapt to different terrain, or to develop new behaviors for unforeseen situations. Since all of this happens autonomously, and the robot can learn new behaviors as often as is necessary, it would be ideal for (say) planetary exploration, where no matter what happens, the robot is going to have to find some way of continuing to operate as best it can without any direct human assistance. And really, you don’t even need an example as extreme as planetary exploration, because robots break all the time here on Earth, and if they could just deal with it themselves (up to a point, anyway), we’d all be a lot happier.

Damage conditions the hexapod robot learned to overcome. Image: Antoine Cully/UPMC

As part of the Nature release, Cully and Mouret (Danesh Tarapore, from UPMC, and Jeff Clune, from the University of Wyoming, are co-authors on the paper) put together some additional information, including anecdotes about a few things that surprised them as they were working on their project, which are definitely worth reading:

Two years ago, we had a scheduled visit from high-profile scientists. Because our university wanted everything to look great for these important visitors, the university applied wax to the floor the day before. The floor was clean and shiny, which looked nice, but it was also much more slippery! We tried the gait our algorithm had previously learned, which we had tested many times to make sure it would work for these visitors, but it did not work at all. Fortunately, however, our robot can adapt! We launched our adaptation algorithm, and a few minutes later, our robot was walking again on the newly waxed floor. As you can imagine, our visitors loved our work.

Another surprise was the following: To create a diversity of behaviors, we used evolution to produce a variety of different ways to walk. We did that by selecting for many different types of walking, measured as robots that have their feet touching the ground different percentages of the time (100%, 75%, ..., 25%, 0%). We thought that evolution would, of course, not be able to solve the 0% case, but it surprised us! The robot flipped over on its back and crawled on its elbows with its feet in the air.

Fun anecdotes should be mandatory for all publications.
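The diversity-generation step in that second anecdote (keeping the best gait found for each foot-contact percentage) can be sketched as a tiny MAP-Elites-style loop. This is a toy stand-in under invented assumptions: the real work evolves gait controllers in a physics simulator, whereas here a "gait" is just a list of numbers, and `simulate` and its performance landscape are made up.

```python
import random

random.seed(0)  # deterministic toy run

# Duty-factor bins: fraction of time feet touch the ground.
BINS = [0.0, 0.25, 0.5, 0.75, 1.0]

def simulate(gait):
    """Stand-in for the physics simulator: returns (duty_factor, speed).
    Both the duty-factor definition and the speed landscape are invented."""
    duty = min(max(sum(gait) / len(gait), 0.0), 1.0)
    speed = 1.0 - abs(duty - 0.5)
    return duty, speed

def nearest_bin(duty):
    return min(BINS, key=lambda b: abs(b - duty))

def map_elites(iterations=2000):
    """Keep an archive of the best gait found so far in each bin;
    mutate existing elites to explore new behaviors."""
    archive = {}  # bin -> (speed, gait)
    for _ in range(iterations):
        if archive:
            _, parent = random.choice(list(archive.values()))
            gait = [g + random.gauss(0, 0.1) for g in parent]
        else:
            gait = [random.random() for _ in range(6)]
        duty, speed = simulate(gait)
        b = nearest_bin(duty)
        if b not in archive or speed > archive[b][0]:
            archive[b] = (speed, gait)
    return archive
```

Because the archive is keyed by behavior (duty factor) rather than by raw performance, the loop is forced to find *something* for every bin, including weird corner cases like 0% foot contact, which is exactly how the elbow-crawling gait emerged.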

Image: Antoine Cully/UPMC

While it’s hard to top getting your research paper on the cover of Nature, this project is certainly not over: at the very least, we know that they have funding through 2016, and that in addition to hexapods and arms, they’re also working with wheeled robots, and looking towards implementing this technique on whatever else they can get their hands on. It should work on just about any robotic system, but specifically, they’re aiming for “robots for disaster-response operations like those of the DARPA Robotics Challenge.” Sounds awesome! Can you get it up and running by next Thursday? We’re pretty sure that those robots are going to need it.

[ Nature ]
