DARPA Robotics Challenge: Interview With Gill Pratt

The U.S. Defense Advanced Research Projects Agency announced yesterday an ambitious robotics program aiming to revolutionize disaster response robots. The DARPA Robotics Challenge is the brainchild of DARPA program manager Dr. Gill Pratt, a researcher and educator with numerous inventions to his credit.* We spoke with Dr. Pratt about the goals of the new effort and how it could change robotics in a big way.

Q: DARPA funds lots of robotics programs. What’s the goal and focus of this new effort?

A: The program is really aimed at developing human-robot teams to be able to help in disaster response. Here the human is at a distance from the robot and will supervise the robot to do a number of tasks that are quite challenging. And we think it will be very exciting. … It’s important to note that this isn’t just for a nuclear power plant situation. The next disaster may not be a nuclear plant. For that reason, we want to leverage the human tools that are likely to be out there. It’s all about adaptability: what’s the most adaptable system that can be used during that first day or two of a disaster, when you have a chance to reduce its scope by taking action? That’s what the challenge is about.

Q: Is the program designed to advance humanoid robot technology? Do robots entering the challenge have to be humanlike machines?

A: The DARPA Robotics Challenge is decidedly not exclusive to humanoid systems. The three big ideas here are, first, we need robots that are compatible with shared environments, even when those environments are degraded, and second, we need robots that are compatible with human tools. The reason for that is that typically we don’t know where the disaster is going to be, and right now the stock of tools, all the way from vehicles to hand tools, is really made for people to operate, for maintenance or construction, and so we want the robot to be able to use all those tools. The third thing is compatibility with human operators in two ways: one is that the robot is easy to operate without particular training, and the other is that the human operator can easily imagine what the robot might do. For that to be true, the robot needs to have a form that is not too different from the human form. But I think that some variation actually might work. For instance, if it had more arms than we have, or if it had more legs than we have, or if it had a mobility platform that was different from legs and could get around in the same environment and use the same tools that we use, that would be fine to do those types of tasks. We are not pushing a particular robot architecture or type; rather we’re saying what the interface needs to be like, both for the operator and for the tools and environment.

Illustration of a disaster response scenario, part of the DARPA Robotics Challenge: the robot on the right uses a power tool to break through a wall, and the one on the left turns a valve to close a leaking pipe. Image: DARPA

Q: The disaster response scenario you came up with looks really hard. Is it realistic to expect teams will succeed?

A: Some people have said, incorrectly, that we expected that teams would not be able to complete the first challenge [during Phase 1 of the program], but that’s actually not true. The challenge will be adjusted as we gain experience with the teams over this first phase, before the first live challenge in December 2013. What we’re going to make sure is that the live challenge is difficult but not impossible. And then we expect that in the second live challenge we’ll be doing the same thing, and that in fact we’ll show off skills and performance that are better than what we had before.

Q: How would you adjust the difficulty if there are just two actual competitions, one for each phase of the program?

A: Basically we have several parameters, several "knobs" that we can use to adjust the difficulty. One is that we can change how hard the tasks and environment are—for instance, the material that the robot needs to break through. We can do anything from drywall to concrete, and we’ll have to see what’s easy and what’s hard. We also have a "knob" on the communications link: we can modulate what the communication is like between the operators and the robots. We’ll turn down the bandwidth, increase latency and jitter, and modulate the availability of the communications link, which will make the challenge more difficult as we throttle the link, of course, and easier as we open the channel up. Another thing we’re considering is having the first live challenge set up in a way where each of the events is separate, so that if teams do well on some but not on others, they’d still score. We’ll be developing that, and whether we run one team from start to finish, or whether we run all the teams in the first event, and then all the teams in the second and so forth, we aren’t sure yet. But since the challenge involves a set of tasks, if you miss on one, you could move on to the next one.
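The link-degradation "knobs" Pratt describes (bandwidth, latency, jitter, and availability) can be pictured with a minimal model. The sketch below is purely illustrative: the function name, its parameters, and the numbers are assumptions for the sake of the example, not DARPA's actual link emulation.

```python
import random

def degrade_link(messages, bandwidth_bps, base_latency_s, jitter_s,
                 availability, seed=0):
    """Toy model of a degraded operator-robot link (illustrative only).

    `messages` is a list of message sizes in bytes. Returns one
    (delivered, arrival_time_s) tuple per message: when the link is
    unavailable the message is lost; otherwise arrival time is
    serialization delay + base latency + random jitter.
    """
    rng = random.Random(seed)
    results = []
    for size_bytes in messages:
        if rng.random() > availability:
            results.append((False, None))  # link down: message lost
            continue
        transmit = (size_bytes * 8) / bandwidth_bps       # serialization delay
        arrival = transmit + base_latency_s + rng.uniform(0, jitter_s)
        results.append((True, arrival))
    return results

# A 1000-byte command over an 8 kbps link with 0.5 s base latency
# takes at least 1.5 s to arrive; throttling bandwidth or raising
# latency makes supervisory control correspondingly harder.
trace = degrade_link([1000], bandwidth_bps=8000,
                     base_latency_s=0.5, jitter_s=0.1, availability=1.0)
```

Turning the same "knobs" the other way (more bandwidth, less latency and jitter, full availability) recovers an easy channel, which mirrors how the organizers say they will tune difficulty between events.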

Q: And at what point are you getting feedback from teams saying, "This is just too hard"?

A: In the simulation phase, which is the first nine months, we’re going to be getting lots of feedback from the teams saying how hard things are. We’re devoting program resources to the development of the simulation engine. … We want to ask for participation not just from traditional robotics performers but also from people that are in the graphics world, video games, and areas like that, because we think they have a lot to add there. We’re going to be paying NIST and some other places to do validation of the results from the simulation and to be a hardware "shakeout site" for all teams to make sure that we actually develop a tool that will have the impact on the robotics world that [the electronics circuit simulator] SPICE had on the EE world. … I really want the broader community, not just the usual robotics, legged-locomotion guys, to think about this. And to think of the opportunity that they have here to team up with people in response to the BAA [Broad Agency Announcement] and also the upcoming [solicitation] from the simulation provider to help really develop a transformational tool.

Q: How would these people participate in the competition? The simulation engine will come from one single provider, right?

A: It will come from a provider, but it will be open-source, and we plan to offer financial incentives for a wide range of contributions. So it’s important to distinguish between the BAA [released yesterday] and the solicitation that we’ll have from that one provider to the entire world to contribute to the simulation tool. It’s a critical component of the challenge.

Q: One of the tasks requires a robot to drive a vehicle. Why is that part of the challenge, and is that a way of imposing human size and features on the robot?

A: That wasn’t the intention. It’s really a question of available tools. As an example, in the Fukushima disaster, they had fire trucks on hand, but after the first reactor explosion, they could not have people drive those fire trucks because the contamination was high. We want these robots to really be compatible with all tools, from earth-moving machinery all the way down to a screwdriver. We know that in a disaster, presently, most of those tools are not going to be set up to be driven by a robot or teleoperated—they’re mostly going to be the tools that are at hand, and that includes vehicles. If you look at [a typical human] environment, the number of vehicles is very large, so that is a form of tool that we think is a resource in a disaster that tends to get used. Now, it is true that some have been converted for teleop work; an example is the QinetiQ teleop interface for a Bobcat. And that’s great, if you can get it to the site in time. But in most cases, you have to use the tools that are at hand, and so the construction equipment, including vehicles, is what is at hand. Notice this is not a repeat of the other DARPA challenges that developed cars that drove by themselves. The robot will be under supervisory control from a person, and so it’s not that we expect the robot to drive on a path all on its own. This is to show its dexterity in using various tools, and vehicles are within that set of tools. Now there’s a second benefit that we get from it, which has to do with the power supply for the robot itself. We’re not going to disallow tethers to power the robot, but the tether will not be able to go all the way back to the operator. So what the utility vehicle can help with is, you can put the power supply for the robot on the flatbed of the vehicle and then the tether can go to the robot, and the vehicle is now a movable base that the robot can operate a certain distance from.

Q: Can I also have a wired communications link between the robot and the mobile base vehicle?

A: In the BAA we say no, but we also require a wired hookup to the robot, and so we may end up using the mobile base as an RF point; we are not sure yet. As an example, in Fukushima, the RF environment inside the buildings was very bad, because of all the shielding that was there. This is a way of getting around that issue. But mostly it’s the power issue that we get the advantage from.

Q: Can you clarify the point about a robot being allowed to divide its body into more than one unit to complete tasks? Can a robot leave parts of itself behind?

A: Robots competing in the challenge can’t leave parts behind. We want to give maximum freedom to the performers to choose the topology of the robot that they think is best. So if during an event it’s best to have a small part move off, perform a task, and then come back, that’s perfectly fine. But we want to exclude teams from developing a separate robot specialized for each task and then using them one at a time. So it’s a one-robot challenge, but the robot can split apart and come back together if desired.

Q: DARPA will fund a company to build a robot to be used by the software teams—but this doesn’t seem like something you can just take existing technology and integrate into a platform. How can you be sure you’ll be able to build a robust, capable robot that the software teams can use? It looks like you’re solving the challenge that you’re proposing…

A: We are not solving the challenge by developing a robot for the software teams. The robot itself is only part of the challenge. As an example, a robot that has two legs that walks on its own and does the balancing task, that has been done; it’s been done by the Japanese and by several U.S. groups also. And so the GFE [Government Furnished Equipment] robot could be one of those platforms. But that’s not what the challenge is. The challenge is going to be, given these particular tasks, where does the robot place its feet, where does it place its hands, how does it turn the valve, how does it open the door. The locomotion part of it, which is to balance and decide where the feet are going to be placed to walk forward, that’s actually not the hard part. In particular, how do you make best use of the human [operator] in this team, to give the supervisory commands to the machine, to say, grab that handhold, turn that valve, pick up this bolt… those are the more difficult parts.

Q: Still, from the scenario, it looks like the technology required is too far off, and though you might be able to have a robot do one or two tasks, performing them all with a single robot seems really far-fetched.

A: We think that it's actually “DARPA hard,” but not an impossible thing to do. And the reason that we’re spending the funds on this is actually to push the field forward and make this capability a reality. We’re also trying to widen the supplier base for the capability that would help here. So we picked a pretty hard goal, that’s absolutely true. It’s a goal that has a lot of risk, but a lot of reward as well, and that’s really the theme of what DARPA tries to do. If we look at the driverless-car world before the DARPA challenges occurred, there were a lot of research efforts that showed the cars moving a small distance down the road, on a curve, and maybe recognizing some fraction of the time where they were with respect to the road. And I think that the previous challenges really pushed the field forward to the state where now other firms have picked this up and are making those cars. Some day, not too far from now, we’ll just get in our car and sit and talk to the person who’s next to us and not worry about how to drive. And that would be an amazingly great thing. I expect the same sort of thing will happen with the new challenge we're launching.

* Gill Pratt and Matt Williamson invented the series elastic actuator at MIT and patented it in 1995.

This interview has been edited and condensed.

Automaton

IEEE Spectrum's award-winning robotics blog, featuring news, articles, and videos on robots, humanoids, automation, artificial intelligence, and more.