Tesla Autopilot Crash Exposes Industry Divide

Is it time to take human drivers out of the loop completely?

A woman in a Tesla car on Autopilot without her hands on the wheel. Photo: Bloomberg/Getty Images

The first death of a driver in a Tesla Model S with its Autopilot system engaged has exposed a fault line running through the self-driving car industry. In one camp, Tesla and many other carmakers believe the best route to a truly driverless car is a step-by-step approach where the vehicle gradually extends control over more functions and in more settings. Tesla’s limited Autopilot system is currently in what it calls “a public beta phase,” with new features arriving in over-the-air software updates.

Google and most self-driving car startups take an opposite view, aiming to deliver vehicles that are fully autonomous from the start, requiring passengers to do little more than tap in their destinations and relax.

The U.S. National Highway Traffic Safety Administration (NHTSA) classifies automation systems from Level 1, sporting basic lane-keeping or anti-lock brakes, through to Level 4, where humans need never touch the wheel (if there is one).
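
In code terms, that scale is just an ordered enumeration with one practical question attached: does a human still have to watch the road? A minimal Python sketch, purely illustrative (the names and the oversight check below are ours, not NHTSA's):

from enum import IntEnum

class AutomationLevel(IntEnum):
    FUNCTION_SPECIFIC = 1     # single aids, e.g. lane keeping or anti-lock brakes
    COMBINED_FUNCTION = 2     # e.g. Autopilot: steering plus speed, human supervises
    LIMITED_SELF_DRIVING = 3  # drives itself in some settings, human on standby
    FULL_SELF_DRIVING = 4     # humans need never touch the wheel (if there is one)

def requires_human_oversight(level: AutomationLevel) -> bool:
    # Below Level 4, a human must stay ready to take over.
    return level < AutomationLevel.FULL_SELF_DRIVING

assert requires_human_oversight(AutomationLevel.COMBINED_FUNCTION)      # Tesla today
assert not requires_human_oversight(AutomationLevel.FULL_SELF_DRIVING)  # Google's goal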

A Level 2 system like Tesla’s Autopilot can take over in certain circumstances, such as highways, but requires human oversight to cope with situations that the car cannot handle—such as detecting pedestrians, cyclists, or, tragically, a white tractor-trailer crossing its path in bright sunlight.

Proponents of Level 4 technologies say that such an incremental approach to automation can, counter-intuitively, be more difficult than leap-frogging straight to a driverless vehicle. “From a software perspective, Level 2 technology may be simpler to develop than Level 4 technology,” says Karl Iagnemma, CEO of autonomous vehicle startup nuTonomy. “But when you include the driver, understanding, modeling and predicting behavior of that entire system is in fact pretty hard.”

Anthony Levandowski, who built Google’s first self-driving car and now runs autonomous trucking startup Otto, goes even further. “I would expect that there would be plenty of crashes in a system that requires you to pay attention while you’re driving,” he says. “It’s a very advanced cruise control system. And if people use it, some will abuse it.”

Even if drivers are following Tesla’s rules—keeping their hands on the wheel and trying to pay attention to the road—many studies have shown that human motorists with little to do are easily distracted.

At this point, of course, Tesla is extremely unlikely to remotely deactivate the Autopilot system until it reaches Level 4. So what are its options? Experts think that in one respect, at least, Tesla is on the right track. “Putting the self-driving hardware on all your vehicles then activating it with a software update later seems like a great idea,” says Levandowski. “It’s shocking that nobody else did that.”

“There are very few examples of software of this scale and complexity that are shipped in perfect form and require no updating,” agrees Iagnemma. “Developers will certainly need to push out updates, for either improved performance or increased safety or both.”

Tesla has been able to capture driving camera and radar data from tens of thousands of cars in the real world, using the information to train object-detection algorithms over a vast range of conditions. “The question is, are the sensors and compute power on the car enough?” wonders Levandowski. “I don’t know how many more software updates you could do to squeeze more performance out of this car.”

One sensor noticeably absent from Tesla’s Model S and X is lidar—the laser ranging system favored by the majority of autonomous car makers. It can build up a 360-degree image of a vehicle’s surroundings in the blink of an eye. “The introduction of an additional sensor would help improve system performance and robustness,” says Iagnemma. “What Tesla was thinking, I believe, is that maybe a lidar sensor wasn’t necessary because you have the human operator in the loop, acting as a fail-safe input.”
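
Mechanically, a spinning lidar simply reports distances at known bearings, and assembling one sweep into that 360-degree picture is a polar-to-Cartesian conversion. A simplified two-dimensional sketch (real units also report elevation and return intensity):

import math

def sweep_to_points(ranges, start_angle=0.0):
    # One revolution of range returns (meters), evenly spaced in bearing,
    # converted into (x, y) points around the vehicle.
    step = 2 * math.pi / len(ranges)
    return [
        (r * math.cos(start_angle + i * step),
         r * math.sin(start_angle + i * step))
        for i, r in enumerate(ranges)
    ]

# Toy sweep: a wall 10 meters away in every direction.
cloud = sweep_to_points([10.0] * 360)
print(len(cloud), cloud[0])  # 360 points; the first is (10.0, 0.0)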

Mark Halverson is CEO of transportation automation company Precision Autonomy and a member of the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems. He thinks that roads with a mix of connected human drivers and self-driving cars would benefit from a cloud-based traffic management system like the one NASA is developing for drones.

“In this accident, the truck driver and the Tesla driver both knew where they were going,” he says. “They had likely plugged their destinations into GPS systems. If they had been able to share that, it would not have been that difficult to calculate that they would have been at the same position at the same time.”
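
The calculation he has in mind is simple once routes are shared: sample each predicted trajectory on a common clock and flag any moment the two vehicles would be dangerously close. A hypothetical sketch, not drawn from any real traffic-management system:

def paths_conflict(traj_a, traj_b, min_gap_m=5.0):
    # Each trajectory is a list of (t_seconds, x_m, y_m) samples on a shared
    # clock. Returns the earliest time the vehicles would come within
    # min_gap_m of each other, or None if they never do.
    positions_b = {t: (x, y) for t, x, y in traj_b}
    for t, xa, ya in traj_a:
        if t in positions_b:
            xb, yb = positions_b[t]
            if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < min_gap_m:
                return t
    return None

# Toy example: a car heading east and a truck heading north, both crossing
# the same intersection at 10 m/s.
car = [(t, -50.0 + 10.0 * t, 0.0) for t in range(11)]
truck = [(t, 0.0, -50.0 + 10.0 * t) for t in range(11)]
print(paths_conflict(car, truck))  # -> 5: both reach the crossing at t = 5 s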

Halverson thinks that a crowdsourced system would also avoid the complexities and bureaucratic wrangling that have dogged the nearly decade-long effort to roll out vehicle-to-vehicle (V2V) technologies. “A crowdsourcing model, similar to the Waze app, could be very attractive because you can start to introduce information from other sensors about road conditions, potholes, and the weather,” he says.

Toyota, another car company that favors rolling out safety technologies before they reach Level 4, has been struggling with the same issues as Tesla. Last year, the world’s largest carmaker announced the formation of a US $1-billion AI research effort, the Toyota Research Institute, to develop new technologies around the theme of transportation. The vision of its CEO, Gill Pratt, is of “guardian angel” systems that allow humans to drive but leap in at the last second if an accident seems likely. His aspiration is for vehicles to cause fatal accidents at most once every trillion miles.
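
A common proxy for “an accident seems likely” is time-to-collision: the gap to an obstacle divided by the speed at which it is closing. In caricature, a guardian-angel trigger might look like the following (our illustration; the threshold is an assumption and Toyota's actual criteria are not public):

def guardian_should_intervene(gap_m, closing_speed_mps, ttc_threshold_s=1.5):
    # Leave the human in control unless time-to-collision with the obstacle
    # ahead drops below the threshold; the 1.5 s figure is an assumption.
    if closing_speed_mps <= 0:  # pulling away or holding distance: no threat
        return False
    return gap_m / closing_speed_mps < ttc_threshold_s

# 20 m behind a stopped obstacle, closing at 25 m/s (~90 km/h): 0.8 s to impact.
assert guardian_should_intervene(20.0, 25.0)
assert not guardian_should_intervene(200.0, 25.0)  # 8 s away: human keeps driving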

Such technologies might not activate in the entire lifetime of a typical driver, requiring all the expensive hardware and software of a driverless Level 4 vehicle but offering none of its conveniences. “It’s important to articulate these challenges, even if they’re really hard,” says John Leonard, the MIT engineering professor in charge of automated driving at TRI. “A trillion miles is a lot of miles. If I thought it would be easy, I wouldn’t be doing it.”
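
For a rough sense of scale: U.S. human drivers are often quoted, very approximately, at one traffic fatality per 100 million vehicle miles, which would make Pratt's trillion-mile target something like a 10,000-fold improvement:

# Back-of-envelope only; the human-driver figure is an approximate,
# widely cited order-of-magnitude number, not a precise statistic.
human_miles_per_fatality = 100e6   # ~1 fatality per 100 million vehicle miles
pratt_target_miles = 1e12          # "at most once every trillion miles"
print(pratt_target_miles / human_miles_per_fatality)  # -> 10000.0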

Elon Musk will surely be hoping that the next Autopilot accident, when it inevitably comes, will be nearly as many miles off.
