A week ago, Google held a press briefing at its expansive campus in Mountain View, California, aimed at showing off the progress it has made in its drive toward producing a self-driving car. The results, according to all who witnessed the company’s fleet of Lexus SUVs make their way around the streets of Mountain View without so much as a hiccup, were, in a word, impressive. (Well, The New York Times’ John Markoff called it “boring,” which is about as high a compliment as you can pay an engineering project whose ultimate intention is to remove the “excitement”—meaning the tens of thousands of deaths each year on U.S. roads—from automobile travel.)
Christopher Urmson, a former Carnegie Mellon University computer scientist who heads the project, gave a brief overview of the project’s evolution from its original goal of driving 100 000 miles safely in highway traffic conditions to driving on city streets. (The fleet has now passed the 700 000-mile mark.) Urmson said that driving local routes was “100 times more difficult than freeway driving.” But despite the constant barrage of stimuli local roads offer, the cars apparently do a (not) bang-up job.
Before Google’s self-driving car can do for all of us what it did for reporters last week, the Internet search giant has another massive project to undertake. As we reported in the Automaton blog back in 2011, it's all about the maps:
First, it relies on very detailed maps of the roads and terrain, something that Urmson said is essential to determine accurately where the car is. Using GPS-based techniques alone, he said, the location could be off by several meters.
Andrew Chatham, who heads the team’s mapping effort, described it last week, as reported by The Atlantic:
“We tell it how high the traffic signals are off the ground, the exact position of the curbs, so the car knows where not to drive,” he said. “We’d also include information that you can’t even see, like implied speed limits.” This keeps the burden on the car’s software to a minimum. “We tell it what the world is expected to look like when it is empty,” said Chatham. “And then the job of the software is to figure out how the world is different from that expectation.”
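The idea Chatham describes—record what the scene looks like when empty, then flag deviations—can be sketched in a few lines. This is a hypothetical illustration only: the grid representation, the function name `find_deviations`, and the occupancy encoding are my assumptions, not anything Google has disclosed about its software.

```python
from typing import List, Tuple

def find_deviations(prior: List[List[int]],
                    observed: List[List[int]]) -> List[Tuple[int, int]]:
    """Return grid cells occupied in the live scan but empty in the prior map."""
    deviations = []
    for r, (prior_row, obs_row) in enumerate(zip(prior, observed)):
        for c, (p, o) in enumerate(zip(prior_row, obs_row)):
            if o == 1 and p == 0:  # something is here that the map says shouldn't be
                deviations.append((r, c))
    return deviations

# Prior map: curbs (1s) at the edges; interior cells are drivable and empty.
prior_map = [
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
]
# Live scan: same curbs, plus one unexpected occupied cell (a pedestrian, say).
live_scan = [
    [1, 0, 0, 1],
    [1, 1, 0, 1],
    [1, 0, 0, 1],
]

print(find_deviations(prior_map, live_scan))  # [(1, 1)]
```

The point of the design is exactly the one Chatham makes: the static world is precomputed offline, so the onboard software only has to reason about what has changed.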
Therein lies a big problem—and the next, rather massive, challenge. The vehicles in Google’s fleet definitely qualify as smart cars. But if you plunked one down in a random city that the Google team has yet to exhaustively map, it would be akin to Superman being unable to switch out of his guise as mild-mannered reporter Clark Kent. When the digitized female voice says “Autodriving,” indicating the switch to automated-driving mode, it would be no more capable than the car you drive to work every day.
Google has so far done the work of having human drivers traverse the roads in the vicinity of its campus. But that’s only 3200 kilometers of road out of the roughly 6.4 million kilometers that make up the U.S. road network.
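A quick back-of-the-envelope check puts that coverage in perspective, using only the two figures above:

```python
mapped_km = 3_200        # roads mapped in detail around the Mountain View campus
total_km = 6_400_000     # approximate size of the U.S. road network

print(f"{mapped_km / total_km:.2%}")  # 0.05%
```

In other words, the detailed mapping done so far covers about one two-thousandth of the country’s roads.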
And although Google has sent drivers down all these roads for its Google Maps project (I can’t remember the last time I even saw a paper map), the level of detail required for automated driving is far higher than what you need to ensure that you don’t get lost on the way to the restaurant your friends raved about during their last vacation.
So, how long will this mapping take? Though Google cofounder Sergey Brin has said publicly that the suite of sensors and the onboard systems that process the captured data would be commercially available by 2017, Urmson suggested a date closer to 2020. By then, perhaps, Google will have spread its mapping, and the “boredom” of automated driving, over a much greater swath of the United States.
Willie Jones is an associate editor at IEEE Spectrum. In addition to editing and planning daily coverage, he manages several of Spectrum's newsletters and contributes regularly to the monthly Big Picture section that appears in the print edition.