The “Trolley Problem” Doesn’t Work for Self-Driving Cars

The most famous thought experiment in ethics needs a rethink


This frame from a simulation shows the protagonist car hitting a truck in an intersection after a traffic accident.

North Carolina State University

If you were the conductor of a trolley barreling toward two people, one standing on each branch of a fork in the track, could you choose which life to spare?

This problem, one of the most famous thought experiments in all of philosophy, was proposed by British philosopher Philippa Foot in 1967 as a way to consider tough ethical choices in many fields. It was taken up early in the debate over how to design autonomous vehicles (AVs). But it may not be applicable to this question, argue Veljko Dubljević and his colleagues in the journal AI & Society.

Unlike human drivers, who make a split-second decision on how to react when they see an accident unfold or an obstacle emerge on the road, an AV must follow a preset moral formula to make its choice. Should it swerve to avoid a child crossing the road, even if it means hitting a larger group of adults on the sidewalk? And what if that choice harms the person inside the AV?

“The trolley paradigm was useful to increase awareness of the importance of ethics for AV decision-making, but it is a misleading framework to address the problem,” says Dubljević, a professor of philosophy and science, technology and society at North Carolina State University. “The outcomes of each vehicle trajectory are far from being certain like the two options in the trolley dilemma [and] unlike the trolley dilemma, which describes an immediate choice, decision-making in AVs has to be programmed in advance.”

One way this shortcoming has played out, says Dubljević, is in the collection of human participant responses as training data for AVs. In particular, Dubljević and colleagues write that the Moral Machine experiment, which has collected millions of responses about unavoidable traffic accidents, relies on binary scenarios that are often unrealistic and sacrificial: to save one person, others must be killed.

These choices also often reflect human biases that ethicists don’t necessarily want AVs to adopt.

“The goal is to create a decision-making system that avoids human biases and limitations due to reaction time, social background, and cognitive laziness, while at the same time aligning with human common sense and moral intuition,” says Dubljević. “For this purpose, it’s crucial to study human moral intuition by creating optimal conditions for people to judge.”

Dubljević and colleagues created more realistic environments by combining virtual reality with mundane traffic scenarios that lack binary solutions. The researchers also introduced a system to judge the “character” of drivers based on three factors: the agent, the deed, and the consequence.

For example, say that a car accidentally runs a stop sign due to a mechanical failure and causes a non-lethal accident. Is the driver morally in the wrong if the traffic violation was out of their control? Would this judgment change if the car had been stolen but did stop at the stop sign?
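To make that judgment system concrete, here is a minimal sketch, assuming a simple weighted score over the three factors. The scales, weights, and linear combination are illustrative guesses, not the model described in the paper.

```python
# Hypothetical illustration only: a toy agent-deed-consequence style score.
# The component scales, weights, and combination rule are assumptions for
# illustration, not the scoring system from the AI & Society paper.
from dataclasses import dataclass

@dataclass
class Scenario:
    agent: float        # driver's intent/character, -1 (blameworthy) to +1 (blameless)
    deed: float         # the action itself, -1 (violates a rule) to +1 (follows it)
    consequence: float  # outcome, -1 (lethal harm) to +1 (no harm)

def moral_acceptability(s: Scenario, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted sum of the three components, returning a value in [-1, 1]."""
    wa, wd, wc = weights
    return wa * s.agent + wd * s.deed + wc * s.consequence

# The article's stop-sign example: a mechanical failure (driver blameless)
# causes a violation and a non-lethal accident...
mechanical_failure = Scenario(agent=0.8, deed=-1.0, consequence=-0.3)
# ...versus a stolen car whose driver nonetheless stops at the sign.
stolen_but_stopped = Scenario(agent=-0.9, deed=1.0, consequence=1.0)

print(moral_acceptability(mechanical_failure))   # ≈ -0.07: mildly negative
print(moral_acceptability(stolen_but_stopped))   # ≈ 0.24: lawful deed and harmless outcome offset the bad agent
```

Under these assumed weights, the blameless driver who causes minor harm scores slightly negative, while the car thief who obeys the stop sign scores mildly positive, mirroring the kind of trade-off the article's questions probe.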

Nicholas Evans, a professor of philosophy at UMass Lowell, has also studied ethical decision-making of AVs in low-stakes scenarios. He does not think that the trolley problem is obsolete, although he does agree that more work in non-binary moral decision-making is important. But he doesn’t much approve of character-based assessment, particularly in future scenarios where AVs might be making decisions about another AV’s driving.

“These are machines; it’s not Herbie the Love Bug,” Evans says. “Maybe one of the reasons we aren’t as interested in character in AV ethics is that cars don’t have characters, or dispositions, of the kind that humans and animals do. Certainly not yet; according to some, maybe never.”

Time will tell how AVs can interpret this character data. This new framework is still in the early stages, with human participants making judgments only as observers rather than as agents. However, Dubljević says his team hopes to redesign this type of experiment for first-person decision-making using virtual reality and a driving simulator.

“This may be described as the ‘moral obstacle course,’ which after trials with humans can be used to train artificial neural networks,” Dubljević says.
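As a rough sketch of that last step, assume the “moral obstacle course” yields labeled human judgments; scenario features could then be paired with acceptability ratings to fit a small neural network. The features, data, and model below are invented for illustration and are not the team’s actual pipeline.

```python
# Hypothetical sketch: training a small neural network on human moral judgments
# collected from simulated driving scenarios. Features, labels, and model
# configuration are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row encodes one scenario: [agent_at_fault, rule_violated, harm_severity]
X = np.array([
    [0, 1, 0.3],  # mechanical failure, ran the stop sign, minor harm
    [1, 0, 0.0],  # stolen car, stopped at the sign, no harm
    [0, 0, 0.0],  # uneventful drive
    [1, 1, 0.8],  # reckless driver, violation, serious harm
])
y = np.array([1, 0, 1, 0])  # 1 = judged morally acceptable by participants

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[0, 1, 0.1]]))  # predicted judgment for an unseen scenario
```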

The Conversation (4)
Dominik Troster, 14 Jan 2024

The trolley problem is highly abusive. There is no reason the car should be going so fast that it cannot stop within the room available: there might be a steamroller backing up into your lane, not noticing you're coming at all. Anyway, in this case only people in the speeding AI vehicle would be sacrificed, and for a good cause! LEARN as you go...

Esther Lumsdon, 13 Jan 2024

The trolley problem needs to be completely redesigned. An AV is likely capable of stopping with much greater force than a human driver, in a much shorter distance.

Thomas Burke II, 10 Jan 2024

Imagine that the outcome has "levels of importance." An old white dude is worth less societally than a young child. Then, a point value is assigned: toddler, 1000 points; nun in a crosswalk, 900 points; old white guy, 100 points. Given worst-case type scenarios, the car chooses to kill 9 old white guys in order to save the child.

Now, imagine the potential for abuse - high-level politicians are worth 10,000 points, the president is worth a million.

Imagine one of those outliers deciding to jaywalk...
