Until cars can take over the driving entirely, we humans will just have to share the job with them, exploiting the strengths and masking the weaknesses peculiar to each side—carbon and silicon.
The biggest human weakness may well be our tendency to be lulled into complacency by robots that drive perfectly almost, but not quite, all of the time. One of the biggest machine weaknesses is in making certain judgment calls. (They’re not great at spotting potholes, for instance.)
Most researchers are so intent on getting driver-assistance systems to stand alone that they dismiss human behavior as mere noise in the system. But a team at the University of California, Berkeley, wants to make full use of that behavior by telling the machines when to defer to the driver.
Call it driver assistance for driver assistance.
“It’s incredibly important to include the human, not just as something to consider as a disturbance,” says Katherine Driggs-Campbell, who is leading the project. She is a doctoral candidate in electrical engineering, under the supervision of Professor Ruzena Bajcsy.
The researchers use cameras to monitor posture, head, and eye movements. (A Microsoft Kinect determines posture by measuring the position of a person’s joints.) Those measurements are fed into a model that lets the automated systems guess when people are losing their concentration. That’s when the robot stops deferring and starts intervening.
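The decide-when-to-intervene logic can be pictured as a simple threshold on an estimated attention level. The sketch below is purely illustrative, not the Berkeley team’s actual model: the inputs, weights, and threshold are all invented for the example.

```python
# Hypothetical sketch of the intervention logic described above: fuse
# posture and gaze readings into a rough attention estimate, and let the
# automation take over only when that estimate drops too low. All names,
# weights, and thresholds are illustrative assumptions.

def attention_score(head_yaw_deg, eyes_on_road, torso_lean_deg):
    """Crude weighted estimate of driver attention, in [0, 1]."""
    score = 1.0
    score -= min(abs(head_yaw_deg) / 90.0, 1.0) * 0.4    # head turned away
    score -= 0.0 if eyes_on_road else 0.4                # gaze off the road
    score -= min(abs(torso_lean_deg) / 45.0, 1.0) * 0.2  # slumped or reaching
    return max(score, 0.0)

def should_intervene(score, threshold=0.5):
    """The robot stops deferring once estimated attention falls below threshold."""
    return score < threshold

# An attentive driver: facing forward, eyes on the road
print(should_intervene(attention_score(5, True, 2)))     # stays hands-off
# A distracted driver: head turned, eyes off the road
print(should_intervene(attention_score(60, False, 10)))  # takes over
```

In practice the researchers train a statistical model on real sensor data rather than hand-picking weights, but the shape of the decision is the same: continuous driver-state estimates in, a defer-or-intervene call out.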
“It’s sometimes hard to tell from body position whether the driver is using a phone or just holding it up,” Driggs-Campbell says. “Now, in our most recent iteration, we can pull information from the phone as well, to see if you’re actually using it.”
Right now the system is trained on individual drivers. “We’ve worked on creating a more generalized model,” Driggs-Campbell says, “but that would need at least a coarse breakdown into various types of drivers.”
“We shrink down the set of what people might do to the more likely things,” Driggs-Campbell says. In a just-published account of their earlier work, the researchers looked only at driving events that unfolded within 1.2 seconds. Now they’re looking as far as 2 seconds ahead, which is a surprisingly long time, given how fast a car can be sailing down the road.
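To see why 2 seconds is a long time, it helps to do the arithmetic. The sketch below shows the two ideas in the paragraph above: pruning the set of candidate maneuvers down to the likely ones, and working out how much road a 2-second prediction window covers. The maneuver names and probabilities are made up for the example.

```python
# Illustrative sketch of "shrinking the set of what people might do":
# keep only the likely maneuvers, then note how far the car travels
# during the prediction window. All numbers here are assumptions.

HORIZON_S = 2.0  # seconds to look ahead, per the article

def prune(candidates, min_prob=0.1):
    """Drop maneuvers the driver is unlikely to attempt."""
    return [m for m, p in candidates.items() if p >= min_prob]

def distance_covered(speed_mps, horizon_s=HORIZON_S):
    """How far the car travels during the prediction window, in meters."""
    return speed_mps * horizon_s

candidates = {"keep_lane": 0.7, "brake": 0.2,
              "swerve_left": 0.05, "swerve_right": 0.05}
print(prune(candidates))          # the rare swerves are pruned away
print(distance_covered(30.0))     # at ~30 m/s (highway speed): 60 meters
```

At highway speed, a 2-second window spans roughly 60 meters of road, which is why predicting that far ahead is hard: a lot can change over that distance.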
Most of the experiments have been conducted on people operating automobile simulators, but the researchers are in the process of validating their results in real cars that drive on a special testbed. The cars and the funding come from Hyundai.
Plenty of difficulties remain, including the old, familiar one of driver complacency. Often, Driggs-Campbell says, “the driver either became more reliant on the controller or reacted in a negative way, getting slightly panicked.”
Maybe the next step is to get the car to help the human help the car—help the human.
Philip E. Ross is a senior editor at IEEE Spectrum. His interests include transportation, energy storage, AI, and the economic aspects of technology. He has a master's degree in international affairs from Columbia University and another, in journalism, from the University of Michigan.