How you automate a car depends on how much money you're prepared to spend. The options range from cheap to damn-the-cost. Champions of the cheaper solutions point out that they’re ready right now; the others counter that the costs of the pricier ones are dropping fast.
All these approaches were on offer at last week’s Automated Vehicles Symposium in San Francisco, sponsored by AUVSI, an industry group known mostly for its support of drone technology. The conference speakers knew their stuff, of course, but more gripping still were the demos out in the parking lot.
I began with the damn-the-cost option: the Velodyne 3-D laser-ranging, or Lidar, system—the rooftop tower that crowns the Google self-driving car. The first new thing I learned about it was that it sells for US $75,000, not the $70,000 we in the automotive press had been citing for years.
“I guess the five must have dropped off,” laughed David Doroshnik, who stood by the tower, with its revolving Eye-of-Sauron scanner, and by a flat-panel display in the car’s trunk. The display showed a schematic but detailed view of the hotel, the parking lot, and the neighboring road. It offered a full 360 degrees of horizontal coverage, with a fair bit of up-and-down as well.
“Here comes a fire engine,” Doroshnik said, and a long, articulated pair of red prisms looped around the lot on the display and moved on. Pedestrians crossed in front, their tiny legs moving neatly. But when I moved around, nothing registered, because I was standing in the tower’s cone-shaped shadow.
A day later and a few steps away, a different bunch of guys made much of that shadow.
“Even if you have it high up, you get a blind spot around the car,” says Bobby Hambrick, head of Autonomous Stuff, a Morton, Ill., startup that sells devices for automotive robotics, including lunchbox-size Lidars from the French manufacturer Valeo. Two of them were mounted on the front and two on the back of his demo car, a Kia. From the outside, they didn’t seem to rotate, but within each one a laser scanned back and forth.
Add two more, one on either side, and you’d get a 360-degree view without a blind spot. However, you wouldn’t save a penny: at around $20,000 each, six of them would cost far more than the Velodyne tower.
“Well, they’re just for research now,” Hambrick says. “Valeo is aiming for a target price of around $250 by 2016. Audi said at CES three years ago [that Lidar] would have to get down to that price” to make it practical.
Now that the lower price seems to be at hand, Bosch, the auto-supplier giant, says it’s banking on Lidar. Besides a smaller price, designers are hoping for a smaller profile, perhaps something about the size of a deck of cards, so that they could hide the sensors behind peepholes in the car’s body.
Some car companies continue to pooh-pooh lasers as too big, clunky, and expensive for commercial use. Daimler, most notably, has argued that multiple overlapping radars, cameras, stereocameras (for depth perception), GPS, and ultrasound (for parking) provide all the sensory acuity you need. The poster car for this position is Daimler’s Mercedes-Benz S class—particularly the experimental version of it known as Bertha, which (with a human watching over it) can drive not only on highways but also on city streets.
Yet even this approach must still be called deluxe, because those radars and cameras cost money. For a true budget solution—one that plays Chevy to Google’s Cadillac—you have to turn to suppliers such as Magna International. The company’s Chris Van Dan Elzen took me and a few other people from the symposium for a little spin in a car tricked out with exactly one sensor: a conventional camera fixed to the inside of the windshield.
We followed a lead car, driven by a Magna confederate, whose purpose was to produce a short enough gap to simulate conditions in a traffic jam, so that the “traffic-jam assist” function could operate. We got onto the highway, staying under 45 miles per hour, another requirement of traffic-jam assist.
“We’re now at about a two-second following distance,” Van Dan Elzen said. “If we stop, the temporal spacing obviously goes to infinity, so we switch to geometric spacing to maintain a five-meter distance.”
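The switch Van Dan Elzen described—a time-based gap at speed, a fixed distance near a standstill—can be sketched roughly as follows. The 2-second and 5-meter figures come from his description; treating the rule as a simple maximum of the two, and the crossover behavior that implies, are my own illustrative assumptions, not Magna’s actual control law:

```python
def target_gap(ego_speed_mps, time_gap_s=2.0, min_gap_m=5.0):
    """Desired distance (in meters) to the lead car.

    At speed, follow at a fixed time gap (here 2 s), so the
    distance scales with speed. Near a standstill the time-based
    gap shrinks toward zero, so fall back to a fixed geometric
    gap (here 5 m).
    """
    temporal_gap_m = ego_speed_mps * time_gap_s
    return max(temporal_gap_m, min_gap_m)

# At 20 m/s (about 45 mph) the 2-second rule dominates: 40 m.
print(target_gap(20.0))  # 40.0
# Stopped, the 5-meter geometric floor applies.
print(target_gap(0.0))   # 5.0
```

Under this formulation the handoff happens automatically at the speed where the two rules give the same distance (here 2.5 m/s), with no explicit mode switch needed.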
Suddenly I noticed that he’d taken his hands off the wheel.
“Um, don’t you need stereocameras to do that and to identify traffic signs?” I asked nervously.
“No, that’s a common misperception,” Van Dan Elzen answered breezily.
“So what advantage do you get from stereocameras?” I countered. I’d always thought stereo was the only way to get depth perception.
“You probably have a small advantage in accuracy out to 40 to 45 meters,” he said. “Beyond that, it’s monocular anyway.”
He explained that one camera, feeding data to a pattern-recognition chip from Mobileye, an Israeli autonomous-vehicle company, can provide good, basic driver-assistance functions, provided that conditions are right. If conditions turn unfavorable, the car hands control back to the driver, who can also take over at any time simply by hitting the brakes or turning the wheel.
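To see how a single camera can judge distance at all, consider the basic pinhole-camera relationship: an object of known real-world size appears smaller in the image the farther away it is. This is a generic illustration of monocular ranging, not necessarily how Mobileye’s chip works, and the numbers are made up for the example:

```python
def mono_distance_m(focal_px, real_height_m, pixel_height):
    """Pinhole-camera range estimate: distance = f * H / h.

    focal_px      -- camera focal length expressed in pixels
    real_height_m -- the object's actual height in meters
                     (e.g., a typical car or a standard road sign)
    pixel_height  -- the object's height in the image, in pixels
    """
    return focal_px * real_height_m / pixel_height

# A 1.5-meter-tall car imaged 50 pixels tall by a camera with a
# 1000-pixel focal length is about 30 meters away.
print(mono_distance_m(1000.0, 1.5, 50.0))  # 30.0
```

The catch is that the object’s real size must be known or assumed, which is one reason pattern recognition (is that blob a car, a pedestrian, a sign?) and depth estimation go hand in hand in a monocular system.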
I tried another tack. I noted that in his keynote address the day before, Ralf Herrtwich, the head of Daimler’s research program on self-driving cars, had emphasized the importance of a multiplicity of cameras, radars, and other sensors. What, I asked, would Herrtwich say about Magna’s one-camera demonstration?
“Oh, he’d probably pat me on the head and say how cute it is,” Van Dan Elzen laughed. “Look, Mercedes Benz is not interested in a cheaper solution. Their system is very, very premium, which is what they are.”
There should be “a car for every purse and purpose,” said Alfred Sloan, the legendary head of General Motors, back in 1924. The adage is still true in the age of cars that think.
Philip E. Ross is a senior editor at IEEE Spectrum. His interests include transportation, energy storage, AI, and the economic aspects of technology. He has a master's degree in international affairs from Columbia University and another, in journalism, from the University of Michigan.