Testing Trust in Autonomous Vehicles through Suspension of Disbelief

The easiest way to test how humans react to autonomous cars is by getting them to believe they're in one

Image: Stanford

Autonomous vehicles are generally quite safe by the time they’re able to drive themselves on public roads; statistically, they’ve proven to be safer than human drivers most of the time.

While it’s easy enough to tell people that, convincing them to place their trust in an autonomous vehicle isn’t simple, especially because the trust humans have in robots (all robots, not just cars) is highly situational. For instance, it might be easy to trust an autonomous car at low speed on an empty road, but how about driving around cyclists or through construction zones? The car may be capable of handling these situations, but whether the human believes that the car can handle them is a different kettle o’ fish.

Researchers don’t have a lot of options when it comes to studying how humans react to being driven by autonomous cars. You can try a simulation, but because the subjects always know it’s a simulation and that they’re not really in danger, you’re not accurately testing their reactions. You can also stick them in a real autonomous car, but even if you have one of those and can legally drive it on public roads, it’s not easy (or necessarily safe) to put it through the specific kinds of situations designed to test passenger trust and reactions.

At Stanford’s Center for Design Research, Sonia Baltodano, Srinath Sibi, Nikolas Martelaro, Nikhil Gowda, and Wendy Ju have come up with a third option. It’s called Real Road Autonomous Driving Simulation (RRADS), and it works by encouraging humans to think they’re in an autonomous car when they’re really not.

Here’s how RRADS works: it’s based on a regular car, driven by a human, with a partition separating the driver (one of the researchers) from the passenger (the subject) so that the subject cannot see the driver. The subjects all sign consent forms that clearly state that a licensed driver will be operating the vehicle at all times. Despite this, many subjects assumed that the car was in fact driving itself:

Although our study using this simulator did not use overt deception—the consent form clearly states that a licensed driver is operating the vehicle—the protocol was designed to support suspension of disbelief. Several participants who did not read the consent form clearly strongly believed that the vehicle was autonomous; this provides a lens onto the attitudes and concerns that people in real-world autonomous vehicles might have, and also points to ways that a protocol that deliberately used misdirection could gain ecologically valid reactions from study participants.

This “Wizard-of-Oz” protocol is fairly common in human-robot interaction studies, and is a quite literal representation of the idea that you should pay no attention to the man behind the curtain. By pretending that a remotely controlled robot is autonomous, it’s much easier to experiment interactively with humans in situations where doing so with a robot that is actually autonomous might be entirely impractical.
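To make the structure of such a study concrete, here’s a minimal sketch of how a software-based Wizard-of-Oz harness is often put together: a hidden operator’s commands are presented to the participant as if an autonomous system produced them, and every event is logged for later review. Everything here is illustrative and assumed (the class, method, and file names are hypothetical); RRADS itself used a human driver behind a partition, not a software relay.

```python
# A minimal Wizard-of-Oz harness sketch, assuming a hypothetical text-based
# setup. Names like WizardOfOzSession and announce() are illustrative only,
# not from the RRADS paper.
import json
import time


class WizardOfOzSession:
    """Relays hidden-operator commands as 'autonomous' behavior and logs them."""

    def __init__(self, participant_id: str):
        self.participant_id = participant_id
        # Timestamped record for post-hoc analysis, analogous to the video
        # footage the RRADS team reviewed after their sessions.
        self.log = []

    def announce(self, command: str) -> None:
        """Present the wizard's command to the participant as the system's own."""
        self.log.append({"t": time.time(), "command": command})
        # The participant only ever sees this autonomous-system framing:
        print(f"[AUTONOMOUS SYSTEM] Executing maneuver: {command}")

    def save(self, path: str) -> None:
        """Write the session log to disk for later analysis."""
        with open(path, "w") as f:
            json.dump({"participant": self.participant_id, "events": self.log}, f)


if __name__ == "__main__":
    session = WizardOfOzSession("P01")
    # In a live study these commands would come from the hidden operator's console.
    for cmd in ["merge left", "slow for cyclist", "navigate construction zone"]:
        session.announce(cmd)
    session.save("P01_log.json")
```

The logging step is the important part: as with the RRADS team’s review of their video footage, it’s often only in post-hoc analysis of the session record that misperceptions of autonomy actually surface.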

The researchers explained that out of the 35 participants in the RRADS study, eight of them believed that they had been riding in a fully autonomous car—again despite the fact that the form they signed explicitly said that the car was being driven by a human. This was not the intent of the study, and was actually a total surprise, as the researchers explain:

We did not conduct our research to take advantage of participants not reading the consent form. RRADS actually worked very successfully as an autonomous driving simulation for participants who understood that the study was performed as a Wizard-of-Oz study, and for those that did not. It was not apparent to any of the researchers during the run of the study that any participants truly believed that they were in an autonomous vehicle. It was really only when we reviewed the video footage from the study that we came to suspect that many participants may have felt that they were actually being driven autonomously.

It’s important to point out that in any study like this, researchers have to get their methodology approved by an Institutional Review Board (IRB), which is a step required for any research involving human subjects. For this particular study, the researchers were specifically allowed to use a "suspension of disbelief" (Wizard-of-Oz) protocol, as opposed to a "deception study" protocol. The researchers describe it as a sort of theater, which works just as well for people who play along as it does for people who can’t tell that it’s not real:

The data we gathered from RRADS seem to be the same regardless of whether or not people were aware that there was a person driving the car - but in all cases people were willing to “play along.” While RRADS does rely on the cooperation of the participant, the effectiveness of the simulation does not depend on deception or not reading the consent form. RRADS is like theater - the sets in a play aren't really intended to “fool” an audience into thinking there is a real building or a tree on stage, but the set does allow audiences to become more fully involved in the play, and to “suspend disbelief.”

So, whether or not subjects actually think they’re riding in a real autonomous car, RRADS seems like a very useful way of taking advantage of our active imaginations to study how we interact with autonomous vehicles: it’s both more effective than a pure simulation and easier (and safer) than using real autonomous cars.

The paper based on the study, “RRADS: Real Road Autonomous Driving Simulation,” was presented at HRI 2015.
