In an earlier post today, we distilled half a dozen DARPA-dense docs into an easy-to-follow overview of the DARPA Subterranean Challenge (SubT), a new competition that will task teams of humans and robots to explore complex underground environments.
In this post, we have an interview with SubT program manager Timothy Chung, whom we met late last year at DARPA’s D60 Conference. “I think for many of the technologies we’re seeking to advance—it’s one of those, aim for the moon, even if you miss you hit the stars type of an approach,” he told us about the new challenge. “So we envision some component technologies being immediately operationally of value, but we’ve set the bar ambitiously high enough for it to be DARPA-worthy and also provide a vision for how that kind of impact could be magnified if and when we’re successful.”
Here’s our full interview with him:
IEEE Spectrum: What are the SubT courses going to be like?
Dr. Timothy Chung: We’re really interested in finding brand-new technologies for exploring and searching underground environments—and not just one type of underground environment, but several, including human-made tunnel systems, the urban underground, and naturally occurring cave environments. All three of those are superficially similar, but each brings its own unique challenges as well, and we’re interested in developing technology that can span all of them. And so what we’ll do after putting each team through the wickets individually is get that epic grand-finale course with challenge elements from all three of those subdomains.
Is DARPA mostly expecting robotic technologies to drive this challenge?
The fun part about this is, as with any DARPA challenge, that you tend to leave the aperture wide open. It’s one of those cases where the DARPA program manager can exhibit some humility in saying that “I don’t know what the right answer is, and I’d love for the world to help me discover it.” We’re interested in cases where it’s dangerous to send in humans because not only could it be hazardous, but we don’t even know if it’s hazardous—it’s an unknown environment. So what are our options? It could be robots, but it could also be other kinds of critters that you might send in there, like biological systems that maybe are more robust to the types of conditions you might be interested in. Or it could be advances in see-through-the-earth type signal processing technologies or other things of that nature.
So quite frankly, we imagine that there are robotic approaches that might be interesting here, but honestly, most of those other ideas are still fair game. And that’s both a blessing and a curse from a challenge perspective, because you want to be wide aperture so you that can capture a lot of those potentially out-there ideas, while still providing really good infrastructure against which we can have a fair competition. That’s DARPA’s mission, DARPA’s flavor, and that’s what we’re working towards.
We’re mostly interested in the robots, but is it too early to ask whether any teams have suggested specific biological systems?
It is early, but what I can say is that there have been different ideas for what that would take that could leverage previous DARPA programs. DARPA was interested in sensors on dragonflies, as one example. I think that there are opportunities for folks; it’s a question of systems integration, and the ability to demonstrate the technologies integrated into systems that have the range or endurance or mobility to potentially get there. It turns out insects might be really resilient, but they’ll take a long time to cover the more than kilometer ranges that we’re thinking of for SubT. So now we have an expediency trade-off that we want to explore and that might help bound the problem, and we’ll see what other solutions rise to the top.
Do you think that the competition itself will take place on underground environment analogs rather than an actual underground environment, in the same way that the DRC used a location analog to facilitate the competition?
I’m quite deeply committed to representing the environment accurately, and it will likely require us to physically go underground. We want test sites where it’s not just the first order effects of that environment, but also the second and third order impacts—for example, the moisture content in the soil and the geology of the mineral content, or the curvature of the tunnels; all of that makes an impact on the viability of the technology. I want that realism to exist, because I want to see a path that would really provide an impact that folks could really use. Maybe not all of the technology, but some of the technology.
In my engagements with the fire departments of major metropolitan areas, it’s been great to hear what their problems have been, and they’re not necessarily interested in waiting years upon years for technologies to mature. They would love the capability, for example, to dart in, explore the space, scan where some dangers or hazards might be, and report back on what kind of gear they should take in, so they don’t have to send somebody in to figure it out, come back, and then all get rebreathers or other hazmat type stuff on. And the same goes for mine rescue communities. I think for many of the technologies we’re seeking to advance—it’s one of those, aim for the moon, even if you miss you hit the stars type of an approach. So we envision some component technologies being immediately operationally of value, but we’ve set the bar ambitiously high enough for it to be DARPA-worthy and also provide a vision for how that kind of impact could be magnified if and when we’re successful.
Is the competition likely to be similar to the DRC in that at some point, you will say, “Here are the tasks, here is the layout, you can build this at home and practice?” Or is it going to be more of a “Here are the general tasks you need to complete, show up and surprise! Here’s what the course looks like!”
My intent is for the latter. We’re offering this virtual competition that, unlike the DRC, is a competition all the way throughout. It’s intended to recognize that the robotics community greatly leverages simulation coupled with experimentation, and that if there’s a way that we can learn lessons in simulation, then we should do that. The virtual test bed and the whole infrastructure associated with it will allow teams (both systems and virtual) to gain hours for testing purposes and build up confidence that will allow them to show up to a site with very little or zero prior knowledge and be able to conquer the course. I’m an autonomy researcher, and I recognize that many of these challenges are ambitious, but we’re pushing on some of the things that we aspire to, and we want to push across that reality gap, which we’re really excited about.
Calibrating this challenge must be really difficult—how are you going to make sure that it’s achievable, at least a little bit, such that you’ll have winning teams and result in people developing practical technologies?
I think that’s a fundamental question for the challenge, and part of what makes it fun for me as a PM from a design point of view. It’s incumbent upon my team to get out there and understand where the state-of-the-art is and calibrate ourselves, so that we can identify the next benchmark that is, let’s say, an order of magnitude better in a number of different ways.
We can adjust the scale of the environments that one might want to tackle, or the speeds at which the mission is conducted—I’m not giving you a definitive answer because there’s no formula that allows us to calibrate exactly, but I think a large part of it has been identifying where things are, identifying where trends are, drawing a line out to the future, and then dialing it a little farther because we do want to make it DARPA worthy and DARPA impactful.
What are the restrictions that the teams will have to compete under? Are you expecting one platform, or multiple platforms working together to tackle such diverse underground environments? Do teams need to have a solution that fits in some sort of real or hypothetical box, or could I throw a thousand robots at the problem if I had them?
In trying to stay true to the vision of wide aperture, I’m intentionally not trying to prescribe one robot, or multiple robots, or any particular number of robots. I’m interested in covering, exploring, and searching rather large areas and not giving you a whole lot of time to do it, and so one might argue that that helps incentivize distributing and dividing-and-conquering the mission, so that’s one approach. The second idea here of course is that this is the real world, nominally, and things will go wrong. I say outright to all teams that attrition is a possibility, and your robot may not necessarily come out exactly the same way it went in. And that’s also a testament to what we’re interested in: understanding autonomy in the context of basically zero human-manual interventions. We don’t want graduate students or technicians or engineers running in there to right a fallen robot, or swapping out batteries, or reconnecting a plug that’s come undone, or what-have-you.
The third way we might incentivize things is, it’s not about finding one object and doing that multiple times—a one-trick pony in terms of an object detector or a feature classifier may not get you full marks. So now we’re thinking about how you compose different capabilities, and you could potentially load it all up onto one exquisite platform, but in the face of all the other considerations, maybe you don’t want to put all your eggs in that one basket. You may want to have some assets that carry certain sensors and other assets that carry different sensors, and of course pay the overhead penalty of having now potentially teams of robots.
The caveat we throw out there is, there will be points at which your system will have to fit through a manhole-sized hole or a human-crawlable space. That’s somewhat of a volumetric constraint, albeit a loose one, because you can imagine all sorts of ways to navigate through there. In terms of number, it’s not necessarily a question of one or a thousand, but whether it’s worth it to you, as a team, either from a cost-investment point of view or the potential overhead of having to maintain a thousand robots, that your approach merits trying it that way.
Because of the nature of the environment, is the expectation that these systems will be fully autonomous during the competition?
The concept here is to be able to provide situational awareness in a timely fashion back to home base, but we anticipate that it’s going to be very difficult to communicate, because that’s part of the capability we’re seeking. But one could imagine that there’s a high level type of interaction that you could have if you choose to pay the cost of sending data and maintaining a way to pass data or commands back into the system—there’s potential opportunity where that might help benefit the performance of the team.
Are you at all concerned that teams will design their systems to complete the challenge successfully, as opposed to designing systems that are generally useful for all different kinds of underground environments?
That’s often the case for any kind of competition: You design the competition, you define the rules, and then folks will try to find the path that meets the rules. I think we’ve made a careful attempt to craft the guidelines and the rules in such a way that winning the competition and meeting many of these objectives are well aligned. In places where there is misalignment that we discover, I think the community is eager to show that there is the opportunity to align them again, so I think they’d be forgiving to some extent if we were to do some course corrections as necessary. But again, we’ve tried to do a very good job of going out and understanding not only of what the capabilities are currently, but what the needs are, and trying to match them in how we’ve designed the competition overall.
[ DARPA SubT ]