Tesla Autopilot Crash: Why We Should Worry About a Single Death

Elon Musk says we should focus on the thousands of lives that could be saved by robot cars, but ethics is more than math

Tesla Model S Autopilot. Image: Tesla Motors via Vimeo

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Tesla Motors recently revealed that one of its cars, operating in Autopilot mode, had crashed in May, killing its driver. How much responsibility Tesla bears for the death is still under debate, but many experts are already reminding us of the huge number of lives that could be saved by autonomous cars.

Does that mean we shouldn't worry much about the single death—that we should look away for the sake of the greater good? Is it unethical to focus on negative things that could slow down autonomous-driving technology, which could mean letting thousands of people die in traffic accidents?

Numbers do matter. Car crashes kill the equivalent of a fully loaded 747 jetliner every week in the United States, said Dr. Mark Rosekind, administrator of the U.S. National Highway Traffic Safety Administration (NHTSA). That's more than 32,000 road deaths per year.

Interestingly, the hundreds of people who die on our roads every week don't get the same attention as a plane crash. Traffic fatalities are so commonplace that we've become numb to them.

Unlike humans, self-driving cars don't get sleepy, distracted, drunk, or road-ragey, and they don't suffer from the many other failings that cause about 90 percent of crashes today. So robot cars could be a really important technology.

Elon Musk, CEO and co-founder of Tesla Motors, also appealed to numbers in defending his company. He deflected a reporter's question about why the company hadn't considered the crash materially relevant enough to disclose when it happened months earlier:

“Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.”

This is to say: focus on the good we can do, not on a single crash that is statistically insignificant compared with the lives potentially saved. Sure, numbers matter, but ethics is more than math. Here's why a moral accounting ledger is not enough.

Different people die

Looking at the numbers alone tells only part of the story. With robot cars, crash patterns will likely be different—the people injured or killed will probably not be the same ones who would otherwise be victims, and this needs to be considered.

In the fatal Tesla crash, the roof of the car was sheared off as it drove underneath the tractor-trailer crossing the road in front of it, in a T-bone collision. This is an incredibly rare event today. In a non-Autopiloted car, the human would have seen the big truck on the road (instead of watching a Harry Potter movie, as reports suggest) and hit the brakes or swerved to avoid it. He wouldn't have died.

Google had previously allowed a blind man to sit behind the wheel of its autonomous car, to help showcase the technology's promise: it could give mobility to many people who today are not licensed to drive. Not just the blind, but other disabled people as well as children could have new freedom without relying on another person to transport them around.

But if self-driving crashes continue to happen, whether because of physics, technology errors (such as improper servicing or software bugs), technology limitations (as seems to be the case in the Tesla crash), or other causes, then statistically they'll happen to these new drivers, too. The victims of those future crashes might not have been victims today, because blind and other non-licensable drivers would not be operating cars at all.

So, when we speculate about whether the broader public would accept robot cars that are imperfect but still safer than today's cars, we should remember that it's about more than the numbers. Would we really accept higher statistical safety if it came with new risks and accident types that we could easily avoid today?

No one can fully predict how these future accidents may occur; if we knew that, technology developers could avoid them. But we already know that computers can do some things very well that humans cannot, and vice versa. We can do simple things that computers have a hard time with, such as recognizing squirrels and potholes. In the Tesla crash, the car's cameras couldn't recognize a white truck against a bright sunlit background—a mistake that human drivers are unlikely to make.

Accident types may matter to us, especially if different people die. If you were struck and hurt by a drunk driver, that would be regrettable; we understand that humans can be idiots, and at least someone can be held responsible. But it seems to be something else if you were in a serious accident because of a trivial turn of events, like mud splashing onto your car's sensors and impairing its autonomous driving.

Humans can be irrational

Of course, being hurt or killed by a drunk driver isn't really different from being hurt or killed for some other reason on the road: the result and the suffering are the same. But there's an element of control that we humans like to cling to, for better or worse.

According to research, we generally believe that we're above average in intelligence and above average in driving skill. (By definition of “average,” we can't all be right.) Thus, we think we can save ourselves in emergency situations with our superior driving, even though human error causes most accidents. In other words, we might not expect to personally reap the safety benefits of a self-driving car, but we'd still suffer the downside of new, ridiculously dumb accidents.

For similar reasons, some travelers irrationally fear airplanes: statistically, planes are much safer than cars, but there's something about not personally being in control that makes us uneasy. Right now, “meaningful human control” is the linchpin issue in the debate over autonomous military robots, and it's related to the concept of human dignity. Likewise, we're very uncomfortable with the possibility that a robot car might choose to take a life, if that's the lesser of two unavoidable evils.

To be clear, being irrational doesn't justify a preference to be killed by a drunk driver rather than by a silly design problem. But it may help explain why the debate isn't over just by pointing at numbers. Consumer adoption of self-driving vehicles won't be driven just by logic and statistics, but also by perceptions and emotions.

Human irrationality may also explain why Tesla retains some responsibility in the accident, and responsibility matters, too. Before allowing its customers to use Autopilot—which is only in beta-testing mode to work out the last bugs before its official release—Tesla requires them to promise to always be alert: “always keep your eyes on the road when driving and be prepared to take corrective action as needed.” And users happily agree to this, in order to enjoy the self-driving feature.

But if we're not physically capable of doing what we agreed to, that's not a rational or informed agreement, right? For years, experts have pointed out that humans are not wired to passively monitor a system while staying ready to jump in and seize control in a sudden emergency. The human mind likes to wander when it's not actively engaged.

Boredom and distraction can quickly set in, as we trust and sometimes over-trust technology. Besides the many YouTube videos of Tesla drivers failing to pay attention, Elon Musk's own wife was caught goofing off behind the wheel. They seem to think it's a joyride, not the life-and-death product testing that it really is.

Over-trust and inattention are known problems that technology developers need to design for, and simply telling customers not to do what comes naturally is probably not enough. It's as if Tesla said, “Don't ever blink,” and customers promised not to: they just don't understand what they're signing up for.

Slowing down life-saving research?

Yes, it could be that autonomous cars will save many more lives than they take. But “the ends justify the means” is a dangerous approach in ethics, capable of justifying any evil as long as the math works out.

And as a society, we don't really believe that. If torturing innocent people or infringing on other rights could deliver some greater good, we would (or should) still be deeply troubled by the choice. In developing cancer drugs that could save millions of lives—just as robot cars are promised to do—we understand that we can't ignore problems in clinical and human trials. We can't cut corners just because we want to rush a life-saving product to market.

“The perfect is the enemy of the good,” as Voltaire famously declared, is also a common reaction to ethical critiques of autonomous cars. But this is a straw-man argument: no one is demanding perfection, just due diligence, especially when death is on the line. As with cancer drugs or anything else on the market, a product doesn't have to be perfect, but that's no excuse for not being careful.

Look at seatbelts, an iconic safety device: even they aren't absolved of all sins just because they save a lot of lives overall. Unlatch buttons that are too large (and can be accidentally bumped open) or too easy to open have sparked lawsuits and massive recalls. These aren't really malfunctions, only bad designs, and bad designs can kill.

The extra care needed to avoid these problems doesn't require a Herculean effort or a halt to research and development. It just means investing some time to think things through and set expectations properly. That could save lives, and every one counts. (Just ask their families.)

As for Tesla, it could be that the company did all that could be reasonably expected. Or maybe not. We'll have to wait for NHTSA's investigation to close. NHTSA is also working on guidance for autonomous vehicles, due out this summer. In the meantime, next week in San Francisco, we'll be thinking more about these and other issues at the Automated Vehicles Symposium, sponsored by the Transportation Research Board and AUVSI.

Tragedies—no matter how rare, such as this Tesla incident or hypothetical crash dilemmas—can't be ignored, even in the face of great social benefits. It would be doubly tragic if those lessons were lost on us. Ethics is not Fight Club: we're all better off when we talk about it.
