DARPA Subterranean Challenge: The Scoring Rules

How autonomous robots in the underground scavenger hunt pick up points for their teams

4 min read

Image of a whistle paired with red and yellow cards, all on top of a laptop keyboard.
Photo: iStock

We’re in the middle of the DARPA Subterranean Challenge (today is the third of four days of scored runs). We're also literally in the middle of things because today’s the one day that media are allowed on site. For the past few days, we've just been following along with DARPA's live stream of the event on YouTube as well as #SubTChallenge on Twitter. But all they’ve shown is robots going into the mine and (sometimes) coming out again. Or falling over.

We do get to see the scores at the end of the day, but most of the time, it’s not clear what's going on inside the mine that results in a robot scoring a point for its team. So, let's take a look at what the robots are actually doing in there, and what it takes to score during SubT.

Here are the basics on SubT Tunnel Circuit scoring:

The NIOSH mine will be separated into two different competition courses: “Safety Research” and “Experimental.” Teams will have up to four (4) scored runs through the Tunnel Circuit event, two (2) on each course.

For the Systems Teams, the final ranking in the Tunnel Circuit will be determined based on the sum of a team’s top score on each competition course. The highest scoring run on the Safety Research course will be added to the highest scoring run on the Experimental course.

Each of those runs is completely independent, in that teams aren't allowed to use, say, mapping data that they collected from a previous run on the same course. And DARPA is making things even more difficult by altering the courses between runs—which is what the organizers spent all day yesterday doing while the teams took a day to work on their robots.
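That top-run-per-course arithmetic is simple enough to sketch in a few lines of Python. This is just an illustration of the rule quoted above; the course names and the way run scores are stored here are ours, not DARPA's.

```python
# Illustrative only: sum a team's best scored run on each Tunnel Circuit course.
def tunnel_circuit_score(run_scores: dict[str, list[int]]) -> int:
    """Take the top score on each course and add them together."""
    return sum(max(scores) for scores in run_scores.values())

# Hypothetical team with two scored runs on each course:
team_runs = {
    "Safety Research": [7, 11],
    "Experimental": [5, 9],
}
print(tunnel_circuit_score(team_runs))  # 11 + 9 = 20
```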

Artifacts

The scoring itself is actually pretty simple. You can think of it like an underground scavenger hunt:

The main scoring objective is the need to search for, detect, and provide spatially referenced locations of artifacts relevant to each of the three subdomains. These artifacts could vary in their size, quantity, and detection signatures (e.g., visual, thermal, chemical). Teams will know what to look for, but the locations and distribution of the artifacts within the course will not be known. It is expected that the number of artifacts will be in the range of 10-30 and multiple copies of each artifact type are possible. The total number of artifacts, but not the number of each type, will be disclosed to the competitors.

For the Tunnel Circuit, teams are looking for a total of 20 artifacts, which will be some combination of the following: 

The five types of artifacts hidden throughout the SubT Tunnel Circuit: cell phone, backpack, fire extinguisher, drill, and survivor. Photos: DARPA

Teams have 60 minutes to find as many artifacts as they can and to transmit their types and locations, as three-dimensional coordinates, back to base (and to DARPA). A reported location has to be within five meters of the artifact’s actual position, which is measured from the Localization Point you can see in the image above. All of these locations are relative, with the origin defined by markers posted outside the mine entrances. If a robot reports the correct artifact type in the correct location, the team earns a point. Incorrect reports don’t cost anything, although teams are limited to 40 reports per run in total, so they can’t simply blanket the course with guesses.
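Put in code, the check that each artifact report has to pass looks roughly like the sketch below. The report and ground-truth formats are made up for illustration, and details we don’t know (like whether the same artifact can be claimed more than once) are left out; DARPA’s actual scoring software is its own thing.

```python
import math

MAX_REPORTS_PER_RUN = 40  # each team gets at most 40 reports per run
MAX_ERROR_METERS = 5.0    # reported position must be within 5 m of the artifact

def score_report(report, artifacts, reports_used):
    """Judge one report, (artifact_type, (x, y, z)), against ground-truth artifacts.

    Returns (point_awarded, reports_used). Data formats here are hypothetical.
    """
    if reports_used >= MAX_REPORTS_PER_RUN:
        return False, reports_used            # out of report budget; report is ignored
    reports_used += 1
    rtype, rpos = report
    for atype, apos in artifacts:
        if atype == rtype and math.dist(rpos, apos) <= MAX_ERROR_METERS:
            return True, reports_used         # correct type, close enough: one point
    return False, reports_used                # wrong type or too far: no penalty, one report spent
```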

Time

Time is not a primary scoring criterion, although DARPA points out that “the course layout is designed such that finding a majority of the artifacts requires significant speed in covering the course.” If teams are tied on points, though, time does come into play, in the following sequence of tiebreakers (sketched in code below):

  • Earliest time that the last artifact was successfully reported, averaged across the team’s best runs on each course
  • Earliest time that the first artifact was successfully reported, averaged across the team’s best runs on each course
  • Lowest average time across all valid artifact reports, averaged across the team’s best runs on each course
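In practice, that’s just a multi-key sort: points descending, then the three time measures ascending. A minimal sketch, with invented team numbers (each time is already averaged across a team’s best run on each course):

```python
# (name, points, avg_last_report_s, avg_first_report_s, avg_all_reports_s) -- invented values
teams = [
    ("Team A", 12, 3100.0, 450.0, 1800.0),
    ("Team B", 12, 2900.0, 600.0, 1750.0),
    ("Team C", 10, 2500.0, 300.0, 1200.0),
]

# Higher points first, then earlier last-report time, earlier first-report time, lower average time.
ranked = sorted(teams, key=lambda t: (-t[1], t[2], t[3], t[4]))
for place, (name, *_rest) in enumerate(ranked, start=1):
    print(place, name)  # Team B edges out Team A on the last-artifact tiebreaker
```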

Mapping

While mapping isn’t part of the score, it is a requirement: “Systems Teams must provide real-time 3D volumetric map updates to the DARPA Command Post at a minimum frequency of one update per 10 seconds.” And DARPA reminds teams that they should be thinking beyond this particular challenge:

Each team’s provided map will be used to visualize and validate the team’s progress and evaluate the team’s perception and mapping capability. Both stakeholders and spectators will likely form opinions about the quality of a team’s solution based on its map representation. As rapid and remote mapping is an important capability to DARPA, the quality of a team’s mapping capability may be used, in part, to inform continued funding decisions.
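On the robot side, meeting the once-per-10-seconds requirement is essentially a timer loop around whatever mapping stack and transport a team uses. A minimal sketch, with both of those left as placeholders we’ve made up:

```python
import time

UPDATE_PERIOD_S = 10.0  # DARPA requires at least one map update every 10 seconds

def send_map_update(volumetric_map):
    """Placeholder: push the latest 3D volumetric map to the DARPA Command Post."""
    ...

def map_update_loop(get_current_map, run_length_s=3600.0):
    """Publish the current map at the required cadence for a 60-minute run."""
    start = time.monotonic()
    while time.monotonic() - start < run_length_s:
        send_map_update(get_current_map())
        time.sleep(UPDATE_PERIOD_S)
```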

Since the competition is still ongoing, we don’t expect much more scoring detail until Friday at the earliest. What we’d really like to see at some point is a walkthrough or map of each course showing the locations of the artifacts, combined with team-generated mapping data and first-person video feeds from the robots themselves. We’ll be pestering both DARPA and the teams for all of this as soon as we can, but until then, keep following the live stream and #SubTChallenge on Twitter.

[ SubT ]
