Self-driving Cars You Can Trust

Human drivers behave differently when automated driving systems inspire confidence

A wag once noted that you could cut traffic accidents by planting a spike in the middle of every steering wheel. People would then drive exceedingly carefully.

The other side of the coin is that built-in safety features don’t work as well as expected if they lead drivers to take greater risks. Some experts have argued that the early antilock braking systems, for instance, made drivers more liable to tailgate and to speed.

So until cars are so fully robotic that they can dispense with human guidance altogether, engineers will have to design them with human foibles in mind. One such foible is the degree of trust we put in our machines, according to Swedish researchers.

In the February issue of IEEE Transactions on Intelligent Transportation Systems, Katja Kircher and Jonas Andersson Hultgren, at the Swedish National Road and Transport Research Institute, in Linköping, and Annika Larsson, at Lund University, describe experiments conducted in driving simulators.

One system they studied was adaptive cruise control, which normally keeps the car moving at a set speed but intervenes when necessary to avoid getting too close to the vehicle ahead. Such systems have been in use for a long time, and, as the researchers showed, drivers put their trust in them.
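To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of gap-keeping logic such a system follows. The function name, the 1.8-second time gap, and the gain are illustrative assumptions, not details taken from the study or from any production controller.

```python
# A minimal sketch of adaptive-cruise-control logic. The names, the
# time gap, and the gain below are illustrative assumptions, not taken
# from the study or from any production system.

def acc_target_speed(set_speed, gap, lead_speed, own_speed,
                     time_gap=1.8, k_gap=0.5):
    """Return the speed (m/s) the controller should aim for.

    set_speed  -- the driver's chosen cruise speed (m/s)
    gap        -- distance to the lead vehicle (m), or None if the lane is clear
    lead_speed -- the lead vehicle's speed (m/s)
    own_speed  -- this vehicle's current speed (m/s)
    time_gap   -- desired following headway (s)
    k_gap      -- proportional gain on the gap error (1/s)
    """
    if gap is None:
        return set_speed  # nothing ahead: hold the driver's set speed

    # Follow at a fixed time headway: nudge our speed toward the lead
    # vehicle's, and never exceed the driver's set speed.
    desired_gap = time_gap * own_speed
    correction = k_gap * (gap - desired_gap)
    return min(set_speed, lead_speed + correction)


# Open road at a 30 m/s set speed: hold it.
print(acc_target_speed(30.0, None, 0.0, 28.0))   # -> 30.0
# Closing on a car 40 m ahead doing 22 m/s: slow down to open the gap.
print(acc_target_speed(30.0, 40.0, 22.0, 28.0))  # -> 16.8 (approx.)
```

The point of the `min` at the end is the behavior drivers learn to trust: the system may slow the car below the set speed to keep its distance, but it never speeds past what the driver asked for.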

The researchers also studied a system that combines adaptive cruise control with active steering, which can, for instance, allow a car to follow a lead vehicle even if that means changing lanes. But active steering became available only recently, and the researchers discovered that drivers trust it less. What’s more, they found that a driver's trust in one system didn't correlate much with his trust in the other. 

That’s actually good news. Far from being technophobes or -philes, the subjects in the experiment were simply adapting rationally to the automation they confronted. 

One outcome had to do with the ease of turning active systems on and off. Say the driver trusts his active steering in easy situations but not in harder ones. If the system is hard to deactivate, the driver may leave it alone during easy maneuvers, like slowing down behind a slower vehicle, but intervene in harder ones, such as when the car must suddenly veer to one side to follow a lead vehicle onto a highway exit ramp.

“When control is partially handed over to automation or another driver,” the researchers write, “the driving task changes in its nature. It not only means merely giving up of certain activities but it also means that the driver needs to monitor the agent that takes over control.” And that means restructuring the way people drive.

Although this study found no instances of people treating automation irrationally, another recent study found that they sometimes do: by preferring cars with human-like traits.

In the January issue of the Journal of Experimental Social Psychology, Adam Waytz of Northwestern University, Joy Heafner of the University of Connecticut, and Nicholas Epley of the University of Chicago write that they could inspire trust in a car by naming it Iris and giving it a suitably feminine voice. This result is particularly impressive because here, too, the persona was attributed to a mere driving simulator, not a warm, glass-and-metal chariot.

Rosie, the household robot from the old U.S. television cartoon series "The Jetsons," not only had a feminine name, voice, and dress; she was also so emotional that her owners had to tread carefully in her presence. That was just a joke by the cartoon's writers. But to the engineers of robotic cars, it's no joke: it's a design specification.
