The Associated Press is reporting on the number of accidents that autonomous cars have been in since September, when California officially issued permits for companies to test autonomous cars on public roads. At first glance, the accident rate is alarmingly high: four of the roughly 50 cars that Google and other companies currently have on the road have been in accidents, an accident rate significantly higher than is typical for a vehicle driven by a human. This sounds bad, but if you look at what actually happened, it’s nothing to worry about at all.
Right now, Google is testing 23 autonomous Lexus SUVs in California, Delphi is testing two, and five other companies also have testing permits, bringing the total number of autonomous cars on California public roads up to about 50. Of these 50, four have been in accidents since September, including three of Google's vehicles and one belonging to Delphi. California law keeps collision reports confidential, which is why we haven’t heard about this before, but the Associated Press was able to speak to a confidential source familiar with the accident reports.
So, why is this nothing to worry about? Let's look at the facts from the AP report:
Two accidents happened while the cars were in control; in the other two, the person who still must be behind the wheel was driving.
That makes the latter two just car accidents, not autonomous car accidents.
Google and Delphi said their cars were not at fault in any accidents, which the companies said were minor.
In other words, someone else crashed into the autonomous car, meaning a human was at fault, which suggests that the whole autonomous car aspect is irrelevant. We don't know this for sure, of course, and it's possible that the fact that the car was autonomous did contribute in some way to the accidents, but Google doesn't seem to think so: the company described the accidents in a statement as "minor fender-benders, light damage, no injuries, so far caused by human error and inattention."
This just emphasizes one of the reasons why autonomous cars are so important: they’re better drivers than we are. They’re always paying attention, and they never get tired or distracted or bored. Having said that, like any robotic system that depends on a lot of complicated hardware and software working together, autonomous cars are vulnerable to errors, and even if an accident caused by an autonomous car hasn’t happened yet, one eventually will.
Let’s just assume for the sake of argument that one of these accidents was explicitly caused by an autonomous car driving in autonomous mode. How would that change things?
The fantastic thing about robotic cars is that they’re recording what’s going on around them, as well as what they’re thinking, all the time. After an accident, engineers could replay what happened in detail and trace the chain of logic that led the car to the decision that caused the accident. The specific cause could then be identified, and, more than likely, engineers could develop a way of making sure that the car would never, ever have that accident again. Furthermore, the update could be instantly propagated to every other autonomous car, making them all that much safer.
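To make the record-and-replay idea concrete, here is a minimal sketch in Python. None of these class or field names come from Google's actual software; they are illustrative assumptions about what such a logging pipeline might look like, with timestamped sensor snapshots paired with the decisions the planner made, so an engineer can pull up exactly what the car saw and chose in the seconds before an incident.

```python
# Hypothetical sketch of a drive log that pairs sensor snapshots with
# planner decisions, so the moments before an incident can be replayed.
# All names here are illustrative assumptions, not Google's real API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogEntry:
    timestamp: float       # seconds since the drive started
    sensors: dict          # what the car perceived at this moment
    decision: str          # what the planner chose to do


@dataclass
class DriveLog:
    entries: List[LogEntry] = field(default_factory=list)

    def record(self, timestamp: float, sensors: dict, decision: str) -> None:
        """Append one perception/decision pair to the log."""
        self.entries.append(LogEntry(timestamp, sensors, decision))

    def replay(self, start: float, end: float) -> List[LogEntry]:
        """Return the chain of decisions inside a time window,
        e.g. the seconds leading up to a collision."""
        return [e for e in self.entries if start <= e.timestamp <= end]


# Usage: reconstruct the five seconds before an impact at t = 120.0
log = DriveLog()
log.record(117.5, {"obstacle_ahead": False}, "maintain_speed")
log.record(119.0, {"obstacle_ahead": True}, "brake")
window = log.replay(115.0, 120.0)
print([e.decision for e in window])
```

Because every entry carries both the perception and the resulting decision, a fix derived from one replayed incident is just a software change, which is what makes fleet-wide propagation possible in a way it never is with human drivers.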
Needless to say, humans don’t work this way, and we just keep having the same sorts of accidents over and over again. Sigh.
The other way an autonomous car accident will change things, particularly if it’s an accident that results in an injury, is that it’s going to be a public relations nightmare, and possibly a legal nightmare as well. Nobody is going to care how safe autonomous cars have been, or will be, because as soon as that first major accident happens, the headline is going to be about “the dangers of robotic cars,” or something like that.
As we (and many others) have pointed out in the past, humans are terrible, horrible, no-good, very-bad drivers. We’re just not designed for it. But, we’ve somehow just come to accept the fact that tens of thousands of people die every year in car accidents. It’s just normal.
Some amount of time from now, fifty years perhaps, it’ll probably be illegal for humans to drive on public roads. What will be normal at that point is for an autonomous car accident to make headlines simply because car accidents are just that rare. Until that happens, it’s important to understand that autonomous cars are a developing technology that will be an enormous benefit to all of us, both in terms of safety and convenience, but that as a developing technology, it’s going to take a lot of patience, effort, understanding, and acceptance before we’re finally ready to give up the wheel completely.
Update: on Medium, the director of Google’s self-driving cars program, Chris Urmson, discusses the 11 accidents that have happened to the cars since they’ve been on the road, and reiterates that “not once was the self-driving car the cause of the accident.” Even though the cars weren’t to blame, Google can still use the accidents (and near misses) to improve its cars’ autonomous driving skills:
All the crazy experiences we’ve had on the road have been really valuable for our project. We have a detailed review process and try to learn something from each incident, even if it hasn’t been our fault.
Google doesn’t usually discuss what’s going on inside the brains of its autonomous cars (or the humans who work with them), and it's fascinating to see all the examples that Urmson gives of how the cars try to adapt to humans driving recklessly. Check it out here.
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.