DARPA Urban Challenge Robots Pass Driver's Test

The previous DARPA Grand Challenge competition -- a trip through the Nevada desert taken by autonomous vehicles -- took two tries to get right; the first year, not a single vehicle made it across the finish line. The second year was a much better showing -- four vehicles finished -- and winner Stanford University took away the $2 million prize.

This year's DARPA Urban Challenge took the robots out of the desert and into a (simulated) city. Teams had to build vehicles capable of "executing simulated military supply missions while merging into moving traffic, navigating traffic circles, negotiating busy intersections, and avoiding obstacles." Since this was the first year of this style of competition, many people wondered if it would have the same problems as the first year in the desert -- lots of failures and no one completing the course.

We needn't have worried. Of the 11 vehicles that made it into the final round of the competition, six finished the course -- though only three teams, Carnegie Mellon, Stanford, and Virginia Tech, finished under the 6-hour time limit.

So what drives these vehicles (since it's not humans)? The short answer: lots of sensors and lots of computing power. Nearly all the vehicles had an array of laser range scanners mounted on the front -- MIT used more than 10, while the UPenn entry got away with just two. A key player in that technology was Velodyne, whose high-definition LIDAR unit grew out of the company's work in the first two DARPA Challenges -- they sat out this year's event in order to keep developing their LIDAR technology. LIDAR units from IBEO and SICK (an old favorite of DARPA teams) were also popular additions to the sensor suite. Stereo vision complemented the laser sensors, and of course, differential GPS receivers and inertial measurement units (IMUs) were must-haves.
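To make the sensor suite a little more concrete, here's a minimal sketch (not any team's actual code) of the basic geometry all of these systems depend on: projecting a raw LIDAR range-and-bearing return into world coordinates using the vehicle pose estimated from GPS and IMU data, so obstacles can be placed on a map. The function name, the pose interface, and the 2-D simplification are all assumptions for illustration.

```python
import math

def lidar_hit_to_world(pose, rng, bearing):
    """Project a single 2-D LIDAR return into world coordinates.

    pose    -- (x, y, heading) of the vehicle from the GPS/IMU estimate,
               heading in radians (hypothetical interface)
    rng     -- measured range to the obstacle, in meters
    bearing -- beam angle relative to the vehicle's forward axis, in radians
    """
    x, y, heading = pose
    # Rotate the sensor-frame hit into the world frame, then translate
    # by the vehicle's position.
    world_x = x + rng * math.cos(heading + bearing)
    world_y = y + rng * math.sin(heading + bearing)
    return world_x, world_y

# Example: vehicle at (10 m, 5 m) heading due east, obstacle 20 m away,
# 30 degrees to the left of the bumper.
print(lidar_hit_to_world((10.0, 5.0, 0.0), 20.0, math.radians(30)))
```

Repeat that transform for every beam from every scanner, dozens of times per second, and you have the raw material the rest of the software has to reason about.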

While hardware integration is no easy task, the software was just as daunting. A layer of hardware interface ("What does the LIDAR say?") under a layer of navigation and control ("Where am I, where do I have to go, how far do I turn the steering wheel, and how fast do I have to go?") under a layer of behavior ("Hm, a stopped car. Wait behind it, or drive around it?") makes for some intense coding. Carnegie Mellon's 2007 vehicle, for example, required over 300,000 lines of code. Some COTS tools made this easier for teams such as Virginia Tech, which used LabVIEW to "provide the major functions of the vehicle including image acquisition and processing, systems communication, vehicle health monitoring, and vehicle control. A NI Compact RIO system [provided] steering, throttle, and braking control, as well as reading CAN-bus sensors," said NI representative Trisha McDonell.
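That three-layer split is easier to see in code. Below is a heavily simplified sketch of the idea, with invented class and method names standing in for what were, on the real vehicles, hundreds of thousands of lines; it is an illustration of the layering, not anyone's actual architecture.

```python
class LidarInterface:
    """Hardware-interface layer: 'What does the LIDAR say?'"""
    def read_obstacles(self):
        # A real system would parse packets from the sensor driver;
        # here we return a canned list of (x, y) obstacle positions.
        return [(12.0, 0.5)]

class Navigator:
    """Navigation/control layer: where am I, where do I go,
    how far do I turn the wheel, how fast do I drive."""
    def __init__(self, route):
        self.route = route  # list of waypoints to visit

    def steering_and_speed(self, pose, obstacles):
        # Placeholder logic: head for the next waypoint,
        # slow down when something is in the way.
        speed = 5.0 if obstacles else 10.0
        steering_angle = 0.0
        return steering_angle, speed

class BehaviorLayer:
    """Behavior layer: 'A stopped car -- wait behind it, or pass it?'"""
    def decide(self, obstacles):
        # Placeholder rule: if the lane is blocked, plan a pass.
        return "pass" if obstacles else "follow_route"

def control_loop():
    lidar = LidarInterface()
    nav = Navigator(route=[(100.0, 0.0)])
    behavior = BehaviorLayer()
    pose = (0.0, 0.0, 0.0)  # x, y, heading

    obstacles = lidar.read_obstacles()                        # hardware interface
    decision = behavior.decide(obstacles)                     # behavior
    steer, speed = nav.steering_and_speed(pose, obstacles)    # navigation/control
    print(decision, steer, speed)

control_loop()
```

The hard part, of course, is everything the placeholders wave away: each layer has to keep running, at speed, when the layer below it reports something noisy, late, or flat-out wrong.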

With the impressive success of the vehicles on Saturday, is my human-driven car suddenly old-fashioned? Not so, say the experts. Forbes had a nice article on the competition, and specifically quoted Stanford team leader Sebastian Thrun:

In the eyes of Stanford's team leader, Sebastian Thrun ... the world is still years away from driverless autos. "I'm positively enthused that this race has a winner," he said. "But we're witnessing the painful birth of a new technology, and this is the first of many hours of labor."


Fair enough, Dr. Thrun. I'll settle for a car that can park itself for the time being.

Special thanks to John Voelcker for insight and photos from the field.
