Self-Driving Cars Will Be Ready Before Our Laws Are

Putting autonomous vehicles on the road isn’t just a matter of fine-tuning the technology

Illustration: J.D. King

It is the year 2023, and for the first time, a self-driving car navigating city streets strikes and kills a pedestrian. A lawsuit is sure to follow. But exactly what laws will apply? Nobody knows. Today, the law is scrambling to keep up with the technology, which is moving forward at a breakneck pace, thanks to efforts by Apple, Audi, BMW, Ford, General Motors, Google, Honda, Mercedes, Nissan, Nvidia, Tesla, Toyota, and Volkswagen. Google’s prototype self-driving cars, with test drivers always ready to take control, are already on city streets in Mountain View, Calif., and Austin, Texas. In the second half of 2015, Tesla Motors began allowing owners (not just test drivers) to switch on its Autopilot mode.

The law now assumes that a human being is in the driver’s seat, which is why Google’s professional drivers and Tesla owners are supposed to keep their hands near the wheel and their eyes on the road. (Tesla’s cars use beeps and other warnings to make sure they do so.) That makes the vehicles street legal for now, but it doesn’t help speed the rollout of fully autonomous vehicles.

It’s not only the law that’s playing catch-up but also the road system. We’ve invested billions of dollars in a transportation infrastructure designed for human vision, not for computer perception. But it’s possible to change both the laws that govern the roads and the infrastructure itself, and those changes could go a long way toward making driverless cars the rule instead of the rare exception.

No matter how the laws and infrastructure evolve and how smart the cars become, bad things will still happen and manufacturers will end up in court. So far, we have no strictly applicable case law: although Google’s cars have been involved in 17 accidents to date, the robot was at fault in none of them.

But accidents caused by autonomous machines are hardly theoretical. Consider manufacturing robots, surgical robots, and subway trains—smart machines that have led to injury, death, and lawsuits. Just one 2009 crash caused by a malfunction of a train’s automatic train-control system, for example, led to 21 lawsuits and 84 out-of-court claims. Attorneys don’t even have to wait for actual accidents to sue: A 2015 lawsuit against Ford, GM, and Toyota accused the companies of “hawking vehicles that are vulnerable to hackers who could hypothetically wrest control of essential functions such as brakes and steering.”

For now, the legal landscape is a hodgepodge. Laws in California and Nevada, for example, allow self-driving cars on public roads so long as an alert human driver is sitting behind the wheel, and other states allow testing on designated roadways. European regulators have allowed limited tests of self-driving cars and even tractor-trailers. The United Kingdom authorized testing starting last year and has begun reviewing road regulations to figure out how to eventually allow a fully autonomous shuttle. Japan allowed its first road test of an autonomous car in 2013, although much of the research being done by Japanese car companies is happening in the United States.

The United States’ National Highway Traffic Safety Administration has been watching the technology closely and generally endorses it, stating, for example, that the agency is “eager” to “support the development of vehicle automation” because of the “exciting opportunity [automated vehicles] represent to the American public.” At a minimum, regulations need to continue smoothing the path for testing and rolling out at least limited versions of the technology.

We can’t put off changing the laws until the advent of robotic driving, because today’s laws leave a lot of room for uncertainty, and uncertainty stalls progress. A car company can’t be expected to invest in putting out a new fleet of autonomous cars when it could be forced to pull them all off the road after the first accident. We won’t have truly autonomous cars on the road until this gets sorted out.

Vehicle interpreting traffic signs. Illustration: J.D. King

Volvo recently announced that it would take the blame if “any of its self-driving cars crashes in autonomous mode.” Although that may sound like a big deal, it doesn’t represent progress. Under current U.S. law, Volvo would most likely take the blame anyway. The real questions—which terrify Volvo and other manufacturers—are: Exactly when will they be held responsible, and how much will they have to pay?

Most legal scholars think that an accident will lead to a major design-defect lawsuit. That worries the car companies for several reasons.

First, it’s expensive no matter who wins. A multimillion-dollar legal case is nearly a certainty when a new, complex driving system built on millions of lines of source code is at issue.

Second, the outcome of that case is hard to predict. Generally, the key question in a product liability lawsuit is whether the product had a “defective condition” that was “unreasonably dangerous.” This often involves determining whether the product designer could have made the product safer at an acceptable cost. But what’s “reasonable” for a new technology? Is “reasonably safe” defined by the average human driver, the perfect human driver, or the perfect computer driver?

Third, a lawsuit can lead to a recall. A legal determination that a design is defective, caused an accident, and will likely cause another can be a powerful incentive for a recall. Recalls and mistakes can be expensive: GM’s ignition-switch recall cost the company US $4.1 billion in 2014, and Volkswagen’s diesel emissions scandal will likely cost the company over $7 billion. For at least some autonomous-car defects, however, the recall could take the form of a software patch delivered wirelessly and inexpensively.

Finally, punitive damages can come into play. Punitive damages are generally available in the United States for outrageous conduct in designing or manufacturing a defective product. The 1994 case Liebeck v. McDonald’s, which involved a punitive damage award of $2.7 million for burns from hot coffee, is a famous example.

US $16,000

Median damages in automobile accident cases compared with $748,000 for product liability cases

Because of the risk of such a lawsuit, manufacturers of autonomous vehicles face higher potential legal costs than human drivers do. Most auto accident claims are settled before trial, and the 4 percent of cases that do go to trial carry relatively low legal costs and damages compared with a design-defect suit. The U.S. Department of Justice reported median damages of $16,000 in automobile accident cases, compared with $748,000 in product liability cases. These legal costs create unfair disincentives for autonomous vehicles.

The solution to the lawsuit problem is actually pretty simple. To level the playing field between human drivers and computer drivers, we should simply treat them equally. Instead of applying design-defect laws to computer drivers, use ordinary negligence laws. That is, a computer driver should be held liable only if a human driver who took the same actions in the same circumstances would be held liable. The circumstances include the position and velocity of the vehicles, weather conditions, and so on. The “mind” of the computer driver need not be examined any more than a human’s mind should be. The robo-driver’s private “thoughts” (in the form of computer code) need not be parsed. Only its conduct need be considered.

That approach follows basic principles of negligence law. As Dobbs’s Law of Torts (2nd ed.) explains: “A bad state of mind is neither necessary nor sufficient to show negligence; conduct is everything. One who drives at a dangerous speed is negligent even if he is not aware of his speed and is using his best efforts to drive carefully. Conversely, a person who drives without the slightest care for the safety of others is not negligent unless he drives in some way that is unreasonably risky. State of mind, including knowledge and belief, may motivate or shape conduct, but it is not in itself an actionable tort”—that is, wrongful conduct.

For example, a computer driver that runs a red light and causes an accident would be found liable. Damages imposed on the carmaker (which is responsible for the computer driver’s actions) would be equal to the damages that would be imposed on a human driver. Litigation costs would be similar, and the high costs of a design-defect suit could be avoided. The carmaker would still have a financial incentive to improve safety. In fact, the manufacturer would have greater incentives than with a human-driven vehicle, because of publicity concerns. Correction of systemic problems could be implemented via a predictable mechanism, such as a mandatory crash-review program with government oversight, without excessive risk to the manufacturer.

As the safety of autonomous vehicles improves and as legal costs become more predictable, stricter safety standards could be imposed to encourage further progress. This scheme would help encourage development of the technology without undermining marginal incentives for safety. Insurers have a century of experience in predicting accident costs for human-driven cars. Courts have a century’s worth of benchmarks on which to draw to ensure that a fair comparison is made. Making it just as easy for the courts to judge cases involving self-driving cars would shield manufacturers from excessive financial risk while compensating accident victims no less than they are today. With such predictability, it is likely that self-driving car manufacturers would pay about the same for insurance per vehicle as an average human driver does. Insurance costs could even be lower because the self-driving car would qualify for all the “good driver” discounts.

Implementation of any of these policies in places like the United States and Canada will have to happen on a state-by-state or province-by-province basis, as the rules of the road in these countries aren’t set nationally; in Europe and many other areas, however, it will evolve country by country.

Vehicle and human. Illustration: J.D. King

Public policy is holding back self-driving cars in another way: roads are designed and governed around the needs of drivers that “see.” The rules require that we stop on red, yield when we see a triangle-shaped sign, and obey metering lights at freeway entrances. That’s easy and intuitive for humans, not so easy for machines. Today’s autonomous vehicles combine two techniques: object tracking using distance and velocity (it doesn’t really matter whether an object cruising down the road is a car, a rolling boulder, or a flying saucer; a computer driver can avoid hitting it without knowing what it is) and object recognition (it makes sense to slow down if a small child or a deer is near the curb, but zipping past a fire hydrant is fine). This technology still has years of development ahead.
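
To make the tracking half of that concrete, here is a minimal sketch, in Python, of a braking decision based only on an object’s distance and closing speed. Every name and number here (braking deceleration, safety margin) is an illustrative assumption, not any manufacturer’s actual logic:

```python
# Minimal sketch: collision avoidance from tracking alone.
# All parameters are illustrative assumptions.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither object changes speed.
    Returns infinity when the gap is opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, closing_speed_mps: float,
                 own_speed_mps: float = 27.0,
                 braking_decel_mps2: float = 6.0,
                 margin_s: float = 0.5) -> bool:
    """Brake when the time to collision is less than the time needed
    to stop, plus a margin. The object's identity (car, boulder,
    flying saucer) never enters the decision."""
    stopping_time_s = own_speed_mps / braking_decel_mps2
    return time_to_collision(gap_m, closing_speed_mps) < stopping_time_s + margin_s

# A stationary obstacle 40 meters ahead while traveling 27 m/s (~97 km/h):
print(should_brake(gap_m=40.0, closing_speed_mps=27.0))  # True
```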

A more costly but potentially simpler approach would be to make the infrastructure friendlier to autonomous vehicles. These changes wouldn’t eliminate every challenge (a car would still have to “see” the child approaching that intersection), but they would lighten much of the burden. Radio frequency transmitters in traffic lights, for example, could tell a computer driver whether a light is green or red more quickly and with greater accuracy than a machine vision system struggling with shadows and glare. These kinds of changes will have to happen on national levels, with international coordination if possible, through both regulation and standardization of technology.
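
As a sketch of what such a broadcast might carry, consider the hypothetical message below. The field layout is invented for illustration, although standardized formats for exactly this purpose exist (for example, the signal phase and timing messages in SAE J2735):

```python
# Hypothetical traffic-signal broadcast, for illustration only;
# real deployments use standardized formats such as SAE J2735 SPaT.
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    RED = 0
    YELLOW = 1
    GREEN = 2

@dataclass(frozen=True)
class SignalBroadcast:
    intersection_id: int      # which intersection is transmitting
    approach_id: int          # which approach or lane group this covers
    phase: Phase              # current light state
    seconds_to_change: float  # time until the phase changes
    signature: bytes          # authenticates the message against spoofing

def may_proceed(msg: SignalBroadcast, seconds_to_stop_line: float) -> bool:
    """Proceed only if the light is green and will stay green until
    the vehicle reaches the stop line."""
    return (msg.phase is Phase.GREEN
            and msg.seconds_to_change > seconds_to_stop_line)
```

The signature field reflects one unavoidable design choice: a broadcast that cars will obey has to be authenticated, or anyone with a transmitter could impersonate a green light.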

There’s a reason to speed the rollout of autonomous vehicles. By replacing error-prone human drivers, autonomous driving technology can potentially save 30,000 lives each year in the United States alone. It can annually prevent 5 million accidents and 2 million injuries, conserve 7 billion liters of fuel, and save so many hundreds of billions of dollars in lost productivity and accident-related costs that the figures are beyond comprehension.

That’s because computer drivers are, in principle, fundamentally safer. They never text, do their makeup, or fall asleep at the wheel. (Human error, by contrast, causes roughly 93 percent of crashes.) Robo-drivers can have 360-degree vision, and thanks to lidar, radar, and ultrasonic sensors, they can see through fog and in the dark.

Computer drivers can have “telepathy”: A computer driver could let another computer driver know that it is considering changing lanes before making the decision to do so. It could communicate with traffic lights to minimize wait times at intersections and optimize traffic flow.
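
A lane-change “intent” announcement could be as lightweight as the hypothetical message sketched below; none of these fields comes from a real vehicle-to-vehicle standard:

```python
# Hypothetical vehicle-to-vehicle intent message; the fields are
# invented for illustration, not drawn from any real V2V standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class LaneChangeIntent:
    vehicle_id: int          # sender
    current_lane: int
    target_lane: int
    earliest_start_s: float  # soonest the maneuver would begin
    speed_mps: float

def should_open_gap(own_lane: int, gap_to_leader_s: float,
                    intent: LaneChangeIntent) -> bool:
    """A neighboring car can ease off and open a gap before the
    lane change even begins, if its following gap is too tight."""
    return intent.target_lane == own_lane and gap_to_leader_s < 2.0
```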

Computer drivers react faster. Humans rely on comparatively slow electrochemical signals, with reaction times of about 1.5 seconds, which translates to 37 meters (about 120 feet) traveled at highway speeds. Computers rely on much faster electrical signals and gigahertz-scale processors to react.
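
That 37-meter figure is simply speed multiplied by reaction time; the highway speed it implies, roughly 89 km/h (55 mph), is an inference from the article’s numbers:

    distance = speed × reaction time ≈ 24.7 m/s × 1.5 s ≈ 37 m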

Computer drivers can take far more rigorous driving tests than the 20-minute road tests offered by U.S. departments of motor vehicles today. Recorded or simulated scenarios could test a computer driver’s ability to drive safely for, say, a million miles before a license is handed over.
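
One way to picture such a test: replay a large library of recorded scenarios and grant the license only if the failure rate stays below some threshold. The sketch below is an illustration only; the scenario format and pass criteria are invented:

```python
# Hypothetical virtual driving test; the scenario interface and
# pass criteria are invented for illustration.
from typing import Callable, Iterable

def virtual_road_test(drive: Callable[[dict], bool],
                      scenarios: Iterable[dict],
                      max_faults_per_million_miles: float = 1.0) -> bool:
    """Replay recorded scenarios; `drive` returns True if the computer
    driver handled the scenario without a safety fault."""
    miles = 0.0
    faults = 0
    for scenario in scenarios:
        miles += scenario["miles"]
        if not drive(scenario):
            faults += 1
    fault_rate = faults / max(miles / 1_000_000, 1e-9)
    return miles >= 1_000_000 and fault_rate <= max_faults_per_million_miles
```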

And, finally, computer drivers have the potential to accumulate far more wisdom than any human. It is said that wise men learn from the mistakes of others; only fools learn from their own. Every autonomous vehicle can learn from thousands of others, through incremental and permanent engineering improvements. Humans, unfortunately, often repeat mistakes others have made.

When self-driving cars do succeed, the effect will be widespread. And they will succeed, despite two giant thumbs pressing on the human side of the scale: liability rules stacked against machines, and a transportation infrastructure that relies on vision rather than other means of communication. Then a host of new social and legal issues will emerge.

30,000

Estimated number of lives autonomous driving technology can save each year in the United States alone

The most immediate impact will likely be on transportation for hire, the sandbox of Uber, Lyft, and taxis. Uber is already bullish on replacing its human drivers with computers and has hired 50 Carnegie Mellon University scientists to develop the technology. Uber may be able to count on computers not filing a class-action lawsuit against the company, but it should plan for angry former drivers accusing it of “economic terrorism” and for lengthy negotiations with regulators. And Uber will not be the only robo-taxi startup.

Self-driving cars will also disrupt the standard model of car ownership and use. Today, the typical car sits parked 95 percent of the time. If people ordered a self-driving car only when needed, utilization rates would rise, ownership costs would decline, and, as an added benefit, we would typically ride in newer-model cars with a smaller environmental footprint. But what will that do to car sales?

Privacy will be a concern. Self-driving cars are the ultimate connected, on-the-grid machines. Not only would they know your exact location and route, but in a robo-taxi or shared-ownership model, the cars might have video monitoring or other means of preventing vandalism (or passenger failure to clean up all those breakfast-sandwich wrappers). In addition, self-driving cars continuously monitor other drivers on the road. Whether the gigabytes of generated information can be permanently stored—and how they can be used later—is not settled.

And self-driving cars will have a profound effect on city design. Parking spaces take up, on average, about 31 percent of city central business districts. Self-driving cars can park themselves in peripheral areas, or, in a shared-ownership/taxi model, they could pick up the next passenger. In either case, more land could be devoted to pedestrian zones, shopping, parks, and other valuable uses.

These issues will be worked out because ultimately we want to choose the best technology in terms of costs and benefits to society. So 50 years from now, in a world with no traffic accidents, people will look back and conclude that human drivers were a design defect.

The author is an intellectual-property lawyer. Complex product liability issues are beyond the scope of this article.

This article originally appeared in print as “Self-Driving Cars and the Law.”

About the Author

Nathan A. Greenblatt is an intellectual-property attorney at Sidley Austin in Palo Alto, Calif. His interest in the legal and policy implications of autonomous vehicles sprang from “being frustrated by my daily commute and being fascinated by the potential of the technology,” Greenblatt says. “I look forward to having vehicles on the road that will actually let others merge.”
