
Experiments to Show if Robocar Tech Will Make Us Too Lazy to Drive Safely

If robotic "nanny cars" lull us into complacence, things could turn ugly fast


Photo-illustration: Chris Clor/Getty Images

There you are, daydreaming, when suddenly someone yells “heads up!” and lobs a ball at you. It’s a childish ploy to see how fast you can pull yourself together and catch the ball with your hands rather than with your forehead. 

That’s pretty close to what researchers did recently to test drivers’ responses to a collision with a car they’d been following. Actually, it was just a blow-up model of a car, towed on a platform by a truck 15 meters ahead of them. Still, the sudden appearance of even a balloon-like car sure does concentrate the mind.

“Most people were really surprised,” says Caroline Crump, a cognitive neuroscientist who worked on the experiment. “A lot of them braked to a stop, then started nervously laughing.”

Crump works for Exponent Failure Analysis Associates, a Los Angeles engineering firm that consults for private clients, mostly on legal liability cases. But this project is being done to test how people adapt to various driver-assistance programs, and the results will be presented next month at the annual Society of Automotive Engineers World Congress, in Detroit.

Here the feature under examination is forward collision prevention and mitigation, which uses sensors to mind the gap with the car just ahead of yours. If the gap closes up, the car sounds an alarm and flashes a warning on a screen, and if the driver doesn’t apply the brakes, the software will.
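The warn-then-brake logic described above can be sketched as a simple decision rule. This is a hypothetical illustration, not any manufacturer's actual implementation; the thresholds and the time-to-collision calculation are assumptions for the sake of the example.

```python
# Hypothetical sketch of forward-collision mitigation logic.
# Real systems fuse radar/camera data and use tuned thresholds;
# the values here are illustrative only.

def mitigation_step(gap_m, closing_speed_mps, driver_braking,
                    warn_ttc_s=2.5, brake_ttc_s=1.0):
    """Return the system's action for one sensor update."""
    if closing_speed_mps <= 0:
        return "none"  # gap is steady or widening
    ttc = gap_m / closing_speed_mps  # seconds until contact
    if ttc < brake_ttc_s and not driver_braking:
        return "auto_brake"  # driver hasn't reacted; software brakes
    if ttc < warn_ttc_s:
        return "warn"  # sound alarm, flash on-screen warning
    return "none"
```

If the driver brakes on their own, the automatic braking never fires, which is exactly the outcome the experimenters were hoping to see.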

Of 16 subjects tested, 14 braked before the automatic system could do it for them—which is just what the system’s designers would have wanted. If the drivers just defer to the system, then there’s really no backup at all. It’s not quite clear what the other two may have done wrong. “The two who collided with the balloon started braking at about the same time, and the peak force of their braking was in the same range” as the other drivers, she says.

Until now most such testing has been done on driving simulators, which are utterly safe and therefore possibly misleading in their results. People may let their attention wander when they know there’s no real price to pay.

Here the surroundings were more natural, and though the drivers were told that their cars had forward-collision mitigation systems, the information was imparted casually, along with a lot of other things. Most of them probably didn’t think that they’d be testing that system. The experimenters understood this, of course, and in future tests will try to see what happens when the drivers do expect to test it.

The key safety question is whether drivers will become so habituated to nanny-like robotics that they lose their reflexes or, even worse, deliberately drive less safely in the expectation that the automation will save them from themselves. It’s called risk compensation—like when motorcyclists ride less carefully after being required to wear helmets.

Exponent’s researchers plan to test the effects of habituation by presenting drivers with even more realistic conditions—complete with side traffic and other distractions—and give them not hours but weeks to get used to all the automation. 
