We’ve been collecting DARPA Robotics Challenge-related videos for the last several months, and this post is an attempt to put a bunch of them together in a way that showcases the state of the DRC Finals robots just before the competition starts. Looking through these will show you how capable many of the teams are right now (or were within the last few weeks), which should calibrate your expectations for the competition itself. Of course, past performance is no guarantee of future results. But as you watch, these videos will give you an idea of what’s fast, what’s slow, which robots seem to be doing well, and which seem to be doing amazingly well.
Note that these videos are at least a week or two out of date, and they’re totally biased towards teams that have been, you know, actually posting videos on YouTube, so there might be robots doing equally well that you won’t see here. Nonetheless, this is the best cross-section of pre-event capabilities we’ve got, and it should give you a pretty good sense of what to expect when the Finals kick off on Friday.
On Monday, the DRC Teams all got a course walkthrough, and we’ll get ours on Thursday, so look for a post tomorrow afternoon or evening with lots of details. Meantime, here’s a video from a DRC “testing event” on a mockup course that some teams participated in back in April:
And here’s Team IHMC Robotics running through a mockup of the whole damn thing:
That run included having their ATLAS robot get out of the vehicle. That’s impressive.
Also impressive is that Team MIT also has a complete runthrough:
And Team WPI-CMU is almost there too:
Wondering which are the teams to beat? You just saw three of ’em.
We won’t know about the surprise task until Friday, of course, but we do know that it will be a stationary manipulation task, and here’s a reasonable guess from Team KAIST:
Team IHMC’s video (above) also suggests that it might be a plug task, while MIT is guessing a switch and a button.
Bipedal robots walk. It’s what they do. They may have to deal with different kinds of surfaces (even on the flat bits of the course), and Team VALOR’s ESCHER has been practicing:
Meanwhile, Team MIT has demonstrated a very fast (and stable) walking gait:
As has Team WPI-CMU:
Walking is hard, and falling is a huge risk. Some robots are doing their best to avoid walking, including Team Tartan Rescue’s CHIMP (which has leg treads), Team RoboSimian’s quadruped robot (which has butt wheels), and Team DRC-HUBO at UNLV’s HUBO humanoid, which has knee wheels:
We’ve also got a pair of wheeled quadrupeds, from Team Grit and Team NimbRo:
And if you look closely, you’ll see Team Aero’s impressive set of leg treads:
Not Falling Over
Stability is inherently difficult for bipeds, especially while moving, but not falling over is both difficult and critical to completing the course in time (and in one piece). Both IHMC and MIT have developed algorithms that are at least somewhat resilient to disturbances, and MIT’s even works while the robot is walking:
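Disturbance-rejection controllers for bipeds are often built around the “capture point” idea from the linear inverted pendulum model: the spot on the ground where the robot would have to step to bring itself to rest. As an illustration of the concept only (not any team’s actual code), here’s a minimal sketch, assuming a simple point-mass model at a fixed center-of-mass height:

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Instantaneous capture point under the linear inverted pendulum model.

    com_pos, com_vel: horizontal center-of-mass position/velocity (x, y),
    in meters and m/s. com_height: CoM height above the ground in meters.
    Returns the (x, y) ground point where stepping would stop the robot.
    """
    # Time constant of the linear inverted pendulum.
    tau = math.sqrt(com_height / g)
    return tuple(p + v * tau for p, v in zip(com_pos, com_vel))

# A push adds forward CoM velocity: the capture point jumps out ahead of
# the feet, telling the controller where a recovery step must land.
print(capture_point((0.0, 0.0), (0.5, 0.0), 0.9))
```

If the capture point stays inside the support polygon of the feet, the robot can recover without stepping at all; once it leaves, a step (or a fall) is coming.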
Falling Over and Getting Up
We have not seen any videos of live DRC robots falling over and then getting themselves up again. That should tell you something. If you’ve got one, send it to us and we’ll be SUPER IMPRESSED. That said, Team ROBOTIS is at least able to get up from the ground:
Team HKU’s ATLAS can (almost) do the same, but the team figures that if they fall, it’ll probably be on the terrain task, so they won’t be totally flat on the ground anyway:
The driving portion of the course will be a lot like the DRC Trials, except longer. Team WALK-MAN has a bunch of good driving footage:
Egress from Vehicle
Egress (getting out of the vehicle) will probably be the most difficult task in the DRC Finals. Team IHMC showed that they could do it (in the course video above), and Team KAIST also makes it work in their compilation video (about :40 in):
MIT, meanwhile, can get out of the vehicle while it’s being shaken (!):
Good for disaster operations during aftershocks!
Doing things involves seeing things. Most teams aren’t posting a lot of this stuff, but MIT is, and it’s important context for everything the robots will be doing in the Finals, so let’s have a look at how their system lets humans interact with the robot.
Here’s a sensor visualization (what a human operator would see) of MIT’s ATLAS running a course:
And this is the same run from the robot’s perspective, integrating point clouds from lidar and stereo cameras and geometric models of the robot and scene objects into one view (played back at about 6x speed). Note that during the DRC, this is probably too much data to be streamed back to a remote user over DARPA’s intentionally intermittent communications link:
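One standard way to cope with a bandwidth-limited link (we’re not claiming this is MIT’s pipeline) is to voxel-downsample the point cloud before transmission, keeping one representative point per occupied cell of a 3D grid. A minimal numpy sketch, with the voxel size as an assumed parameter:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Collapse a point cloud to one centroid per occupied voxel.

    points: (N, 3) array of xyz samples; voxel_size in meters
    (5 cm here is an assumed, illustrative value).
    """
    # Integer voxel index for every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel and average them.
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# 100k random points collapse to far fewer voxel centroids -- a much
# smaller payload to push over an intermittent comms link.
cloud = np.random.rand(100_000, 3)
small = voxel_downsample(cloud, voxel_size=0.1)
print(cloud.shape, small.shape)
```

The trade-off is resolution for bandwidth: the operator sees a coarser scene, but gets it through the degraded link DARPA imposes.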
For a little bit of additional cleverness, MIT has realized that turning point clouds into objects is a potentially difficult perception problem for robots, but it’s easier for people to do quickly:
We’d expect that techniques like these could make a significant difference in how efficiently teams complete tasks through guided autonomy.
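The human-in-the-loop idea above can be made concrete with a toy example. Suppose the operator clicks a point on a flat surface in the cloud (a door panel, say); the robot then fits a plane to the neighborhood of the click, doing the easy geometric part itself. This is a hedged sketch of the general technique, not MIT’s interface; the neighborhood radius is an assumed parameter:

```python
import numpy as np

def fit_plane_near_click(points, click, radius=0.15):
    """Fit a plane to the neighborhood of a human-clicked 3D point.

    points: (N, 3) point cloud; click: (3,) operator-selected seed point;
    radius: neighborhood size in meters (assumed value).
    Returns (centroid, unit normal) of the fitted plane.
    """
    # Keep only points near the operator's click.
    nearby = points[np.linalg.norm(points - click, axis=1) < radius]
    centroid = nearby.mean(axis=0)
    # The plane normal is the direction of least variance: the last
    # right singular vector of the centered neighborhood.
    _, _, vt = np.linalg.svd(nearby - centroid)
    return centroid, vt[-1]

# Synthetic flat "panel" with sensor noise; the operator clicks its center.
rng = np.random.default_rng(0)
panel = np.column_stack([rng.uniform(-0.5, 0.5, 2000),
                         rng.uniform(-0.5, 0.5, 2000),
                         rng.normal(0.0, 0.002, 2000)])
c, n = fit_plane_near_click(panel, click=np.array([0.0, 0.0, 0.0]))
print(np.abs(n[2]))  # normal should point nearly along the z-axis
```

One click from a human replaces a segmentation problem that's genuinely hard for the robot to solve alone, which is exactly the division of labor guided autonomy is after.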
Just remember, the DRC Finals are very important because they will help us develop robots that will mean that no human ever has to do this ever again: