Google's Autonomous Cars Are Smarter Than Ever at 700,000 Miles

Photo: Google

Google's self-driving car.

Google has just posted an update on its self-driving car program, which we've been watching closely for the past several years. The cars have surpassed 700,000 autonomous accident-free miles (around 1.13 million kilometers), and they're learning how to safely navigate through the complex urban jungle of city streets. Soon enough, they'll be better at it than we are. Much better.

From Google's update:

"We've improved our software so it can detect hundreds of distinct objects simultaneously—pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn. A self-driving vehicle can pay attention to all of these things in a way that a human physically can't—and it never gets tired or distracted."

This is why we're so excited about a future full of autonomous cars. Yes, driving sucks, especially in traffic, and we'd all love to just take a nap while our cars autonomously take us wherever we want to go. But the most important fact is that humans are just terrible at driving. We get tired and distracted, but that's just scratching the surface. We're terrible at dealing with unexpected situations, our reaction times are abysmally slow, and we generally have zero experience with active accident avoidance if it involves anything besides stomping on the brakes and swerving wildly, which sometimes only makes things worse.

An autonomous car, on the other hand, is capable of ingesting massive amounts of data in a very short amount of time, exploring multiple scenarios, and perhaps even running simulations before it makes a decision designed to be as safe as possible. And that decision might (eventually) be one that only the most skilled human driver would be comfortable with, because the car will know how to safely drive itself up to (but not beyond) its own physical limitations. This is a concept that Stanford University was exploring before most of that team moved over to Google's car program along with Sebastian Thrun. 
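To make the idea concrete, here is a minimal sketch of that kind of decision-making: score every candidate maneuver by simulating it forward, then commit to the lowest-risk option. Everything here is illustrative and invented for this example; it is not Google's actual planner, and the maneuver names, risk numbers, and world-state fields are all assumptions.

```python
# Hypothetical sketch of "simulate every scenario, then decide."
# All maneuver names and risk values are invented for illustration.

def simulate(maneuver, world_state):
    """Toy stand-in for a forward simulation: returns a risk score
    (lower is safer) for executing `maneuver` from `world_state`."""
    base_risk = {"brake": 0.2, "swerve_left": 0.5,
                 "swerve_right": 0.4, "maintain": 0.9}
    risk = base_risk[maneuver]
    # Penalize swerving into an occupied adjacent lane.
    if maneuver == "swerve_left" and world_state.get("left_lane_occupied"):
        risk += 1.0
    return risk

def choose_maneuver(candidates, world_state):
    # Evaluate every candidate before committing, rather than reacting
    # with a single reflex the way a startled human driver might.
    return min(candidates, key=lambda m: simulate(m, world_state))

state = {"left_lane_occupied": True}
print(choose_maneuver(["brake", "swerve_left", "swerve_right", "maintain"],
                      state))  # -> brake
```

The point of the sketch is the shape of the loop, not the numbers: a machine can exhaustively score its options in milliseconds, which is exactly the kind of active accident avoidance humans never get to practice.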


Photo: Google
What's that black box?

Now, I may be making something out of nothing here, but if we compare the car in the image Google provided with its latest blog post to an earlier Google car from 2012 (or even the Google car in the video), you'll notice that there's an extra piece of hardware mounted directly underneath the Velodyne LIDAR sensor: a polygonal black box (see close-up, right). I have no idea what's in that box, but were I to wildly speculate, my guess would be some sort of camera system with a 360-degree field of view.

The Velodyne LIDAR is great at detecting obstacles, but what Google is working on now is teaching its cars to understand what's going on in their environment, and for that, you need vision. The cars have always had cameras in the front to look for road signs and traffic lights, but detecting something like a cyclist making a hand signal as they blow past you from behind seems like it would require fairly robust vision hardware along with some fast and powerful image analysis software.

Or, it could be radar. Or more lasers. We're not sure, except to say that it's new(ish), and that vision is presumably becoming more important for Google as it asks its cars to deal with more complex situations with more variables.

Another interesting tidbit in the update (posted by Chris Urmson, director of Google's Self-Driving Car Project) is the phrase about "teaching the car to drive more streets in Mountain View before we tackle another town." The Google cars can deal with changing environments and some level of dynamic uncertainty, but they need a reliable basemap to use as a point of reference for lane width, traffic light placement, crosswalks, lane curvature, and more.
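A rough way to picture that dependency: the car carries a prior map of surveyed streets, can only operate autonomously where an entry exists, and uses live sensor readings to refine (not replace) the surveyed values. This is a speculative sketch, not Google's actual system; the segment IDs, map fields, and blending weights are all invented.

```python
# Illustrative sketch (not Google's real data model): autonomy requires
# a surveyed basemap entry, and sensors refine the prior in real time.

BASEMAP = {
    # street segment id -> surveyed reference data (all values invented)
    "castro_st_100": {
        "lane_width_m": 3.4,
        "traffic_lights": [(37.39, -122.08)],
        "crosswalks": 2,
    },
}

def can_drive_autonomously(segment_id):
    """Without a surveyed map entry, the car has no reference frame."""
    return segment_id in BASEMAP

def lane_width_estimate(segment_id, sensed_width_m):
    """Blend the surveyed lane width with the live sensor estimate.
    The 70/30 weighting is arbitrary, chosen only for illustration."""
    prior = BASEMAP[segment_id]["lane_width_m"]
    return 0.7 * prior + 0.3 * sensed_width_m

print(can_drive_autonomously("castro_st_100"))  # -> True
print(can_drive_autonomously("unmapped_road"))  # -> False
```

Under this model, "tackling another town" means surveying it first, which is consistent with Urmson's phrasing about teaching the car more streets before moving on.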

So, it might not be possible to tell them to drive you somewhere they've never been. To continue to speculate (because it's fun!), this might suggest how Google is planning on eventually making money on all of this: rather than making and selling autonomous cars, it'll maintain a continually updated database of road data that either car manufacturers or end users will have to subscribe to in order for their cars to operate autonomously.


Google's cars are still not ready for end users like you and me; they may be able to deal with 90 or 95 percent of situations autonomously, but closing that last 5 to 10 percent gap to reach 100 percent autonomy (which is what's required) is probably as hard as all of the research, programming, experience, and machine learning that's gone into the cars up to this point.

It's impossible to plan ahead for every single scenario that an autonomous car might have to handle, so the key to unleashing the autonomous car is going to be a system that can learn and make decisions on the fly. In fact, it's almost certain that there will be accidents, because autonomous cars are still going to have to deal with all the other human drivers on the road (and also because no software is perfect).

But if we can get past our reluctance to place more trust in an autonomous system than we do in ourselves, autonomous cars have the potential to completely revolutionize our transportation infrastructure.

Via [ Google ]


Cars That Think

IEEE Spectrum’s blog about the sensors, software, and systems that are making cars smarter, more entertaining, and ultimately, autonomous.
Contact us:  p.ross@ieee.org

