A Former Pilot On Why Autonomous Vehicles Are So Risky

5 questions for transportation-safety expert Missy Cummings

Photo illustration of Missy Cummings by Stuart Bradford

In October 2021, Missy Cummings left her engineering professorship at Duke University to join the National Highway Traffic Safety Administration (NHTSA) in a temporary position as a senior safety advisor. It wasn’t long before Elon Musk tweeted an attack: “Objectively, her track record is extremely biased against Tesla.” He was referring to Cummings’s criticism of his company’s Autopilot, which is supposed to help the driver drive, though some customers have used it to make the car drive itself—sometimes with disastrous results.

Some of Musk’s fans followed his lead: Cummings received a slew of online attacks, some of them threatening.

As a former Navy fighter pilot, Cummings was used to living dangerously. But she hates taking unnecessary risks, particularly on the road. At NHTSA, she scrutinized data on cars operating under varying levels of automation, and she pushed for safer standards around autonomy. Now out of the government and in a new academic perch at George Mason University, she answered five high-speed questions from IEEE Spectrum.

We are told that today’s cars, with their advanced driver-assistance systems (ADAS), are fundamentally safer than ever before. True?

Cummings: There is no evidence of mitigation. At NHTSA we couldn’t answer the question of whether you’re less likely to get into a crash; we had no data. But if you are in an accident, you’re more likely to be injured, because people in ADAS-equipped cars are more likely to be speeding.

Could it be that people are trading the extra safety these systems might otherwise have provided for other things, like getting home 3 minutes sooner?

Cummings: I call it risk homeostasis. It’s a big problem with Tesla, for example. You’re told the car has self-driving capability, with all these features, such as automatic braking. You think, “Oh, the car is going to do x, y, and z for me,” and then it turns out that it doesn’t.

Did you observe risk homeostasis back in your fighter-pilot days?

Cummings: It happened with air-to-ground bombing radar. Pilots figured out that you could use it to set up a self-contained approach to an aircraft carrier and then manage the landing by yourself. Given the control freaks that pilots are, it happened. But the system didn’t adjust for the pitching deck, so it set people up for much more lethal approaches.

Some have said that partial autonomy is the riskiest solution of all. What’s your take?

Cummings: The policy should be that either the computer is driving or you are driving. And by driving I mean steering—people do fine with regular cruise control. The act of keeping your hands on the wheel and guiding the car’s lateral motion is enough to keep your brain engaged. So, no L3 [conditional automation, in which the car drives itself but the driver must be ready to take the wheel], which is too confusing, and no hands-free L2 [partial automation]. I am not against the passing of control per se, but there should just be two modes of operation, with crystal-clear feedback about which mode you are in.

When do you think true self-driving cars will come?

Cummings: It’s possible to do self-driving in narrow applications. Waymo has been giving rides for a long time in Chandler [Ariz.]. That environment is very structured, and it’s much easier to operate these systems there. My favorite application is last-mile delivery, such as food delivery; it could be very helpful when, say, viruses spike. But the day when AI in cars can handle all conditions on the road, all of the time—it’s not going to be in my lifetime.

Mary (Missy) L. Cummings is the director of the Autonomy and Robotics Center at George Mason University and a senior member of IEEE. She received a Ph.D. in systems engineering from the University of Virginia.

The Conversation (3)
D Sireci · 18 May, 2023

It is the corner cases where AI fails badly at self-driving. There is a mistaken assumption that if a 16-year-old kid can learn to drive in a few months, then an AI can do the same. The critical missing point is that a 16-year-old kid has 16 years of image- and video-recognition experience, which no AI system can match. The human brain is amazingly good at using minimal information to analyze whether objects represent a threat.

steve crouse · 18 May, 2023

As technology takes more and more skilled and semi-skilled decisions away from operators, those operators lose the skills necessary to make those decisions.

I see a time in the not-too-distant future when a new driver, right out of high school, will not have the skills to drive a 2010s-era or older vehicle.

We see it now with people who are not able to drive a standard-shift vehicle.
