Self-driving Cars Get Good Navigation on the Cheap

Univ. of Michigan researchers use videogame tech to cut the cost of pinpointing a car’s location

Researchers are making rapid progress in developing systems designed to give cars ever more autonomy. The benefits are myriad, and some, like dramatically reducing the tens of thousands of traffic fatalities that occur each year, are priceless. Still, automakers are loath to add new, costly components without a clear sense of how the additions will justify a higher sales price. Researchers, keenly aware that every penny counts, are looking for ways to cut the cost of making cars smarter.

One example is a new software system from a team at the University of Michigan in Ann Arbor that allows a car to see its surroundings and determine its location using a single video camera instead of several laser scanners.

Lidar, the three-dimensional laser scanning technology used to create real-time maps of a car’s environment for comparison with pre-drawn maps, does the job quite effectively. But Ryan Wolcott, a U-M doctoral candidate in computer science and engineering, has come up with an alternative that he says pinpoints a car’s location with the same level of accuracy as laser ranging but at a fraction of the cost. Wolcott described his approach in a paper titled “Visual Localization within LIDAR Maps for Automated Urban Driving,” which was named best student paper at the IEEE/RSJ International Conference on Intelligent Robots and Systems in Chicago in September. Wolcott noted:

"The laser scanners used by most self-driving cars in development today cost tens of thousands of dollars, and I thought there must be a cheaper sensor that could do the same job…Cameras only cost a few dollars each and they're already in a lot of cars. So they were an obvious choice."

Like other navigation systems, including the one Google is fine-tuning for its vehicles, the system dreamed up by Wolcott and his collaborator, Ryan Eustice, a U-M associate professor of naval architecture and marine engineering, stores thousands of maps of a given area. But instead of comparing the real-time data with a flood of two-dimensional maps, their system renders the prerecorded data as a series of three-dimensional pictures, the way a video game would. This, say Wolcott and Eustice, lets the navigation system compare those 3-D images with the images captured by a conventional video camera as the car cruises along city streets.
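According to the paper, the system scores each candidate pose by maximizing the normalized mutual information between the live camera image and a synthetic view rendered from the lidar map. Below is a minimal sketch of that matching step in Python; the render_view helper, which would rasterize the prerecorded 3-D map for a given pose, is hypothetical, and the whole thing is a simplification of the idea rather than the authors’ implementation.

```python
# Minimal sketch of pose scoring by normalized mutual information (NMI).
# render_view(pose) is a hypothetical helper that rasterizes the prerecorded
# 3-D lidar map into a synthetic grayscale image for a candidate pose.
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI between two grayscale images; higher means more similar."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy

def localize(camera_frame, candidate_poses, render_view):
    """Return the candidate pose whose synthetic view best matches the frame."""
    scores = [normalized_mutual_information(camera_frame, render_view(pose))
              for pose in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```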

They immediately ran up against a problem: how to process the staggering amount of video data the system would generate, all in real time. Video gaming again inspired the solution: the pair offloaded the work to the kind of graphics processors common in video game consoles.

“One of the challenges was to build a system that could do that heavy lifting and still deliver an accurate location in real time,” Eustice said in a press release. “When you’re able to push the processing work to a graphics processing unit, you’re using technology that’s mass-produced and widely available. It’s very powerful, but it’s also very cost-effective.”
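To give a feel for why a graphics processor suits this workload, here is a toy sketch, not the team’s code: it scores many candidate views against one camera frame in a single batched operation on the GPU. PyTorch serves purely as a convenient GPU array library here, and the simple sum-of-squared-differences score stands in for the similarity measure the paper actually uses.

```python
# Toy GPU batching demo: score 1,000 synthetic views against one camera
# frame in a single batched operation. Random data stands in for real images.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

camera = torch.rand(480, 640, device=device)             # live camera frame
candidates = torch.rand(1000, 480, 640, device=device)   # synthetic views

# Broadcasting lets one kernel launch score every candidate at once.
scores = ((candidates - camera) ** 2).mean(dim=(1, 2))
print("best candidate pose index:", int(torch.argmin(scores)))
```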

The pair tested the software on the streets of downtown Ann Arbor. A human was always in control of the vehicle, but in the background, the navigation system made the same set of comparisons it would perform if it were driving autonomously. According to Wolcott and Eustice, it was accurate to within centimeters. They plan to do more testing when Michigan’s Mobility Transformation Facility testing center—which features a private, enclosed city grid—opens this summer.
