Simple Robots Perform Complex Tasks With Environmental Modifications

Robots learn to modify their environment to make the impossible possible

Modular robot completing tasks by deploying and crossing bridges and ramps made of building blocks
Photo: University of Pennsylvania/ModLab

There’s an expectation in robotics that in order to be useful, robots have to be able to adapt to unstructured environments. Unstructured environments generally refer to places outside of a robotics lab or other controlled or semi-controlled situation, and could include anything from your living room to a tropical rainforest. An enormous amount of effort and creativity goes into designing robots that can reliably operate in places like these, with a focus on developing methods of sensing, locomotion, and manipulation that handle all kinds of different situations. It’s a very hard problem, even for humans, so we do a very human thing: we cheat.

Cheating, in this context, means instead of adapting to an environment, you instead adapt the environment itself, modifying it so you can complete different tasks. Humans do this all the time, by using stepping stools to reach high places, adding stairs and ramps to overcome obstacles, attaching handles to objects to make manipulation easier, and so on. A robot that could do similar sorts of things has the potential to be far more capable than a robot that is simply passively adaptable, and at the IEEE International Conference on Robotics and Automation last week, we saw some new research that’s making it happen.

SMORES-EP is a robot from the University of Pennsylvania’s ModLab. It’s made up of an arbitrary number of independent wheeled cubes that can attach to one another magnetically in multiple configurations to form a larger cooperative robot that’s more capable than any single module alone. While robots like these are generally composed of identical modules, it’s not much of a stretch to consider incorporating other objects into the system, objects that could be actively or passively useful.
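
To make the idea of a “configuration” concrete, here is a minimal sketch, in plain Python, of how a modular robot can be represented as a connectivity graph of wheeled cubes joined face to face. The class and field names are hypothetical and illustrative; this is not ModLab’s SMORES-EP software.

```python
# Illustrative sketch only: a modular robot as a graph of magnetically
# connected modules. Names are hypothetical, not from the SMORES-EP codebase.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Module:
    module_id: int          # each cube is an independent wheeled robot
    has_wheels: bool = True

@dataclass
class Configuration:
    """A set of modules plus the magnetic connections between their faces."""
    modules: dict = field(default_factory=dict)    # module_id -> Module
    connections: set = field(default_factory=set)  # {(id_a, face_a, id_b, face_b)}

    def attach(self, id_a, face_a, id_b, face_b):
        """Record a magnetic connection between two module faces."""
        self.connections.add((id_a, face_a, id_b, face_b))

    def is_single_body(self):
        """True if the connections join every module into one larger robot."""
        if not self.modules:
            return False
        seen, stack = set(), [next(iter(self.modules))]
        while stack:
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            for id_a, _, id_b, _ in self.connections:
                if id_a == current and id_b not in seen:
                    stack.append(id_b)
                if id_b == current and id_a not in seen:
                    stack.append(id_a)
        return len(seen) == len(self.modules)
```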

For example, modular robots can have trouble crossing gaps or climbing stairs and ledges, because they don’t scale well beyond a handful of individual modules. Rather than trying to find a way for the robot to handle obstacles like these, researchers from UPenn and Cornell decided to teach the robot to modify its environment by giving it access to blocks and ramps that it could (autonomously) use to make obstacles less obstacle-y.

This behavior is completely autonomous: The system is given a high-level task to accomplish, and the ramps and blocks are placed in the environment for it to use if it decides that they’d come in handy, but it doesn’t have explicit instructions about what to do every time. The video above shows some example tasks, but the system has no problem generalizing to other tasks in other environments that may require different environment augmentations.
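
The core decision the robot makes can be sketched very roughly as follows. The thresholds, obstacle descriptions, and augmentation names here are made up for illustration; the actual planner in the paper reasons over real perception data and robot capabilities, and is considerably more sophisticated.

```python
# Hedged sketch of augmentation-aware decision making; values are assumptions.
MAX_CLIMB_CM = 4   # assumed ledge height the bare robot can climb
MAX_GAP_CM = 2     # assumed gap width the bare robot can cross

def plan_step(obstacle, augmentations):
    """Decide how to handle one obstacle along the route.

    obstacle:      e.g. ("ledge", 12) or ("gap", 20), sizes in centimeters
    augmentations: e.g. {"ramp": 15, "block": 10}, what each object can span
    """
    kind, size_cm = obstacle
    if kind == "ledge":
        if size_cm <= MAX_CLIMB_CM:
            return "drive over"
        if augmentations.get("ramp", 0) >= size_cm:
            return "deploy ramp, then drive over"
    if kind == "gap":
        if size_cm <= MAX_GAP_CM:
            return "drive across"
        if augmentations.get("block", 0) >= size_cm:
            return "place block as a bridge, then drive across"
    return "replan route"   # no available augmentation makes this traversable

print(plan_step(("ledge", 12), {"ramp": 15, "block": 10}))
# -> "deploy ramp, then drive over"
```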

Ramps and blocks are just two examples of objects that robots could use to augment their environment. It’s easy to imagine how a robotic system could carry augmentations with it (or the materials to construct them), or perhaps even scavenge materials locally, building things like ramps out of dirt or rocks. Heterogeneous teams of robots could include construction robots that modify obstacles so that scout robots can traverse them. And mobility is just one example of environmental augmentation: Perception is a challenge for robots, but what if you had a robot with lots of fancy sensors scout out an environment, and then place fiducials or RFID markers all over the place so that other robots with far cheaper sensors could easily navigate around and recognize objects? Of course, doing things like this may have an impact on any humans in the environment as well, which is something that the robots will likely have to consider. There’s a lot of potential here, and we’re excited to see what the researchers make of it.
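
As a concrete illustration of that fiducial idea, here is a minimal localization sketch using plain numpy: the well-equipped scout records each marker’s pose in the world map, and a cheaply sensed robot that later detects one marker can recover its own pose from that record. The map format and function names are hypothetical, and any real fiducial detector would supply the tag-in-camera transform.

```python
import numpy as np

# Hypothetical marker map produced by the scout robot:
# tag id -> 4x4 homogeneous pose of the tag in the world frame.
MARKER_MAP = {
    7: np.array([[1.0, 0.0, 0.0, 2.0],
                 [0.0, 1.0, 0.0, 0.5],
                 [0.0, 0.0, 1.0, 1.2],
                 [0.0, 0.0, 0.0, 1.0]]),
}

def robot_pose_from_marker(tag_id, T_cam_tag, T_robot_cam):
    """World pose of the robot from a single fiducial detection.

    T_cam_tag:   tag pose in the camera frame (from any fiducial detector)
    T_robot_cam: fixed transform of the camera mounted on the robot
    """
    T_world_tag = MARKER_MAP[tag_id]
    T_world_cam = T_world_tag @ np.linalg.inv(T_cam_tag)
    return T_world_cam @ np.linalg.inv(T_robot_cam)
```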

“Perception-Informed Autonomous Environment Augmentation With Modular Robots,” by Tarik Tosun, Jonathan Daudelin, Gangyuan Jing, Hadas Kress-Gazit, Mark Campbell, and Mark Yim from the University of Pennsylvania and Cornell University, was presented at ICRA 2018 in Brisbane, Australia.


How the U.S. Army Is Turning Robots Into Team Players

Engineers battle the limits of deep learning for battlefield bots

Robot with treads near a fallen branch

RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center, in Maryland.

Evan Ackerman

This article is part of our special report on AI, “The Great AI Reckoning.”

"I should probably not be standing this close," I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to "go clear a path." It's then up to the robot to make all the decisions necessary to achieve that objective.
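
To give a sense of what “make all the decisions necessary” can look like in software, here is a hedged sketch of decomposing a high-level command like “go clear a path” into perception, manipulation, and navigation subtasks. The function names and data fields are placeholders, not ARL’s actual autonomy stack.

```python
# Illustrative decomposition of a high-level task; the real RoMan stack is
# far more involved and is not shown here.

def clear_path(perceive, grasp, drag, drive_forward):
    """Repeatedly find and remove obstacles until the path is clear.

    Each argument is a callable supplied by the robot's own subsystems
    (perception, manipulation, navigation); they are placeholders here.
    """
    while True:
        obstacles = perceive()              # e.g. segmented objects blocking the lane
        if not obstacles:
            return drive_forward()          # path is clear, proceed
        target = max(obstacles, key=lambda o: o["blocking_score"])
        if not grasp(target):               # grasp planning may fail on odd shapes
            continue                        # re-perceive and try a different object
        drag(target, clearance_m=1.5)       # move the object off the roadway
```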
