The only sign of fallibility I saw yesterday in Ford’s experimental self-driving car came halfway through a drive near the company’s headquarters in Dearborn, Mich., when the robocar briefly braked for no clear reason, then apparently thought better of it.
A tiny irregularity, and grist for the engineers' mill, along with other little lapses logged at this media event. One reporter said his car had been a bit "spooked" by a hedge. But at least on my drive the car handled everything itself from start to finish, setting out with a programmed destination but deciding on each turn, lane change, stop, and start.
Here, in this protected realm among Ford employees, the self-driving car will first see use in a ride-hailing service in 2018. By then some of the sensors will have improved. For instance, the four $8,000 lidar sets on the roof, which reach only 80 meters, will soon be replaced by just two sets that can see about twice as far. And by 2021, when Ford plans to roll out a commercial robotaxi service, the lidar should be still better, smaller, and cheaper.
“We design modularly, so that we don’t depend on the availability of new hardware,” Randal Visintainer, the director of Ford's autonomous vehicle program, told IEEE Spectrum. A lot of suppliers have been talking about lidar-on-a-chip for ridiculously cheap prices, he noted, “but I haven’t seen one yet.”
The interesting thing about the lidar arrangement is that the two outermost sets revolve obliquely, so as to get a view of the space immediately adjacent to the side of the car. If you rely on just one roof-mounted set, as Google's car does, the car's own body blocks part of the beam, casting a shadow that creates a blind zone alongside the vehicle. The other two sets on Ford's vehicle are vertically oriented so that their fields overlap in front and in back, providing extra detail.
In this first unveiling to journalists, Ford's little fleet of robocars, all based on the Ford Fusion hybrid, stuck to streets mapped to within two centimeters, a bit less than an inch. The car compared that map against real-time data collected from the lidar, the color camera behind the windshield, other cameras pointing to either side, and several radar sets—short range and long—stashed beneath the plastic skin. There are even ultrasound sensors, to help in parking and other up-close work.
Here’s how the map looks to the car’s self-driving system. The darker colors represent stored mapping data, the brighter colors represent real-time data from the car’s own sensors:
One sensor the car did not use was real-time GPS, which Ford deems too unreliable in built-up areas, where satellite signals can reflect along a number of different paths.
“If they give us GPS relay stations, we’ll use them,” Visintainer said. “If there were smart intersections, we’d use them too—it’d make our lives a lot easier.”
The only staged event in our ride came when a Ford employee acted out the part of a pedestrian: He hit the “walk” button at a crosswalk and then crossed the street while demonstratively fiddling with a cellphone. The car stopped appropriately. Later, though, real pedestrians crossed, and the car again did as it should, if anything using an excess of caution. One time it stopped and wouldn’t budge until a pedestrian had not only crossed the street but taken another dozen steps up a sloping path.
The critical point here is that Ford is designing a car that will do it all, all at once—a kind of technological Great Leap Forward from today's cars, with their advanced driver assistance systems (ADAS). That doesn't mean the company isn't working on those stopgap measures as well.
“We developed a philosophy of designing from the bottom up and from the top down,” Visintainer said. Improving the self-driving power is the top-down approach; getting the driver-assistance systems to work is the bottom-up approach. “The question is, how far down can we take that [first approach], and when do the two approaches meet?”
Google made the top-down approach famous, arguing that anything short of full autonomy would lull drivers into a false sense of security. And that's what many in the business say caused the one fatal robocar accident, last May, when a Tesla, unsupervised by its driver, drove itself into the side of a truck.
Taking the human being out of the loop—the jargon for turning a driver into a passive passenger—means taking away the safety net that today's most advanced cars all require. "And you need extra redundancy if there's no human serving as backup," Visintainer added.
He said that integrating the two strategies, and engineering systems so they can be manufactured efficiently and last, is Ford's core competency: the thing it can do better than non-carmakers like Google, Apple, and Uber.
It was a note sounded yesterday in a talk by Bill Ford, the executive chairman of Ford Motor Co., who noted he'd gotten to know Silicon Valley during the years he spent as a member of the board of eBay. "We know how to integrate all that technology into a vehicle that they [the non-car makers] might not have," Ford said. "People were saying we'll be low-margin assemblers of other people's technology. That's shifted. We bring a lot of technology ourselves, then we integrate it into the vehicle, and then we build it."
The car and non-car companies are competing, not just to lead in technology, but also to be perceived as leading. It was no accident that on Tuesday, while Ford was giving a gaggle of journalists in Dearborn their first ride in a robotic version of the Ford Fusion, Uber was giving another bunch of scribes in Pittsburgh their first ride in Uber’s own robotic version of the very same car.
Uber is about to use them in a pilot commercial robotaxi service, but that doesn’t mean it’s leapfrogged Ford, let alone Google. The Uber cars will remain firmly under the supervision of professional drivers.