HyQ Steps Across Gaps Despite Getting Yanked Around

IIT's quadruped has a new footstep planner that is robust against shoves and gaps

HyQ
Photo: IIT

If your robotics lab has a quadruped, it’s become almost a requirement that you post a video of the robot not falling over when walking across some kind of particularly challenging surface. And quadrupeds are getting quite good at keeping their feet, even while negotiating uneven terrain like steps or rubble. One way to do this is without any visual perception at all, simply reacting to obstacles “blindly” by positioning legs and feet to keep the body of the robot upright and moving in the right direction. This can work for terrain that’s continuous, but when you start looking at more dangerous situations like gaps that a robot’s leg could get stuck in, being able to use vision to plan a safe path becomes necessary.

Vision, though, is a real bag of worms, kettle of fish, bushel of geese, or whatever your own favorite tricky metaphor is. Adapting foot placement based on visual feedback takes both reliable sensing and the processing power to back it up, but even under the best of circumstances, there's only so much that an onboard system can handle. At the Italian Institute of Technology (IIT), roboticists have used a convolutional neural network to cut the time it takes for the HyQ quadruped to plan its foot placement by roughly two orders of magnitude, and the robot can now adapt its footsteps dynamically, allowing it to withstand an extra helping of abuse from its human programmers.

When HyQ is being yanked around in the video above, what it’s showing is that the robot is able to adjust where it’s placing its feet, even after starting to take a step. Most robots plan their steps by saying, “I’m going to put my foot in that spot over there, ready, go!” This works just fine, except when something happens between the time that the robot lifts its foot up in one place and puts it down in another. HyQ’s new controller allows it to replan almost continuously, enabling adjustments on the fly whether it’s in the middle of a step or not, making it much more robust to external disturbances, whether caused by slippery surfaces, mistakes in foot placement, or shoves from human meanies.
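
To make the idea concrete, here's a minimal Python sketch of that kind of continuous replanning. This is an illustration of the concept, not IIT's controller: the foothold target is re-evaluated at every control tick rather than frozen at liftoff, so a disturbance that arrives mid-swing still changes where the foot comes down. The gains, the disturbance model, and the helper names are all assumptions made for the example.

```python
import numpy as np

def replan_foothold(nominal_target, velocity_error, gain=0.1):
    """Shift the nominal landing point to compensate for the body's current
    velocity error (a simple stand-in for HyQ's vision-based foothold selection)."""
    return nominal_target + gain * velocity_error

def swing_phase(duration=0.5, dt=0.01):
    """Toy swing phase: the landing target is recomputed every tick, so a
    shove halfway through the swing still moves where the foot touches down."""
    target = np.array([0.30, 0.00])   # nominal step, meters (illustrative)
    foot = np.array([0.00, 0.00])     # current swing-foot position
    for k in range(int(duration / dt)):
        t = k * dt
        # Pretend a lateral shove hits the body at the midpoint of the swing.
        velocity_error = np.array([0.0, 0.5]) if abs(t - duration / 2) < dt else np.zeros(2)
        target = replan_foothold(target, velocity_error)   # replan every tick
        foot += 5.0 * (target - foot) * dt                 # simple target tracking
    return foot, target

if __name__ == "__main__":
    foot, target = swing_phase()
    print("final foothold target:", target, "foot at touchdown:", foot)
```

The point of the toy example is only the structure of the loop: because the target is recomputed inside the swing, a step planned before a shove does not have to land where it was originally aimed.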

The rest of the video shows an example of a situation in which visual adaptation is critical to the health and happiness of the robot: gap crossing. Without visual feedback, gaps are potentially lethal to those skinny little robot legs. Rather than churn through an entire software stack devoted to interpreting sensor data and calculating optimal foot placement, HyQ instead uses a convolutional neural network, trained on a set of terrain templates including gaps, bars, rocks, and other nasty things, to interpret the 3D map of the terrain ahead built from its onboard sensors. The neural network selects footholds up to 200 times faster than traditional planning systems, which both enables the continuous replanning and opens up the option of more complex planning in the future, like choosing different gaits or body orientations to make the robot even more adaptable. And while it's not in the video, the researchers tell us that HyQ can walk across those gaps even while it's being yanked around.
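
For a sense of where that kind of speedup comes from, here's a toy PyTorch sketch of the general approach. The architecture, patch size, and output are assumptions for illustration, not the network IIT describes: a small CNN takes a heightmap patch around the nominal landing spot and produces a foothold correction in a single forward pass, instead of searching over many candidate footholds with hand-crafted terrain heuristics.

```python
import torch
import torch.nn as nn

class FootholdCNN(nn.Module):
    """Toy stand-in for a learned foothold evaluator: given a small heightmap
    patch around the nominal landing point, predict a (dx, dy) correction
    toward a safer foothold. Architecture and sizes are illustrative only."""
    def __init__(self, patch=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * patch * patch, 64), nn.ReLU(),
            nn.Linear(64, 2),  # foothold correction in meters
        )

    def forward(self, heightmap_patch):
        return self.net(heightmap_patch)

if __name__ == "__main__":
    model = FootholdCNN()
    # One fake 15x15 heightmap patch with a deep "gap" running down the middle.
    patch = torch.zeros(1, 1, 15, 15)
    patch[0, 0, :, 7] = -1.0
    correction = model(patch)  # a single fixed-cost forward pass, no search
    print("predicted foothold correction (untrained):", correction.detach().numpy())
```

The speed advantage of this style of approach is that an iterative search and evaluation over candidate footholds is replaced by one fixed-cost forward pass through a small network, which is what makes replanning at every control tick affordable.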

Octavio Villarreal and Victor Barasuol, from the Dynamic Legged Systems lab at IIT, led by Claudio Semini, will be presenting this work at two IROS workshops on Friday: Development of Agile Robots, and Machine Learning in Robot Motion Planning. If you're in Madrid, stop by and check it out, and if you're not, ask yourself whether your commitment to robotics couldn't stand to be just a bit more serious.

[ IIT ]
