The Ford Motor Company is skipping Level 3 autonomy—when the driver must be prepared to take the wheel—and going straight to Level 4, when there is no steering wheel at all. The reason? Its own engineers were falling asleep during Level 3 test drives.
“These are trained engineers who are there to observe what’s happening,” Raj Nair, Ford’s product development chief, told Automotive News. “But it’s human nature that you start trusting the vehicle more and more and that you feel you don’t need to be paying attention.”
Apparently, the Ford engineers kept nodding off even when every attempt was made to keep them on their toes. Bells and alarms did no good, nor did putting in a second engineer to ride shotgun. He nodded off, too. It was this spectacle that convinced Ford honchos to double down on the damn-the-stopgap push to full autonomy, which Google’s Waymo pioneered.
Previously, Ford had leaned toward that idea, but hedged its bets by trying to improve driver-assistance systems until they achieved full autonomy. Just six months ago, Randal Visintainer, director for autonomous vehicle development at Ford, told IEEE Spectrum that both approaches—which he termed “top down” and “bottom up”—were still under review. “The question is, how far down can we take that [first approach], and when do the two approaches meet?”
Other automakers still favor using the stopgap of Level 3, defined as a self-driving system that might, at any moment, give the driver just 10 seconds to wake up and take command. Just last month, at CES, Audi announced that it would release a Level 3 car within a year, then aim for Level 4 some three years later.
Ten seconds may seem like plenty of time, but it sure seems short when you’re dreaming.
Earlier this week, the U.S. House Subcommittee on Digital Commerce and Consumer Protection held a hearing titled “Self-Driving Cars: Road to Deployment.” I know, it sounds super boring, and most of it was: if you’ve been following the space for a while, nothing in the prepared statements will surprise you all that much, even though the witnesses at the hearing included industry heavy hitters like Gill Pratt from TRI, GM’s Vice President of Global Strategy Mike Ableson, and Anders Karrberg, Vice President of Government Affairs at Volvo Car Group, as well as Lyft’s Vice President of Public Policy Joseph Okpaku and Nidhi Kalra, Co-Director and Senior Information Scientist at the RAND Center for Decision Making Under Uncertainty.
What was interesting, however, was the question-and-answer session. It offers an open look at what the House members think is important, and the witnesses’ answers come on the fly. Remember, these are the people who make self-driving car policy talking directly to the people who make self-driving cars: what was said at this hearing could potentially shape the direction of the entire industry. There’s over an hour of questioning, and we’ve watched it all. But we opted to transcribe only the most interesting bits.
To be clear, this drone exists, it flies, and, strictly speaking, there is no specific technological reason why EHang and Dubai couldn't do exactly what they're saying they're going to do. And that's what's so scary.
Dumb cars shield against crashes. Smart cars avoid them. Shrewd cars prepare you for them.
This year, Mercedes-Benz is introducing what surely is one of the shrewdest precrash features yet: a burst of sound that causes a muscle inside the ear to contract, bracing the eardrum against the potentially deafening noise of the crash itself.
Tip o’ the hat goes to IEEE Spectrum’s auto maven, Lawrence Ulrich, who pointed out to us the importance of “Pre-Safe Sound,” standard in the 2017 E class. (Ulrich will describe this and other marvels in “Top Ten Tech Cars,” in our April issue.)
The idea of pre-crash safety has been out there for a while. For instance, a car can instantly tighten the seatbelt to minimize movement and prevent the body from “submarining” forward, under the belt. Or it can inflate a tiny airbag to nudge the driver toward the center, protecting against side impact. Or it can close the sunroof, adding to the rigidity of the cabin.
But Pre-Safe Sound goes to ear-popping lengths. When the car’s sensors detect an impending crash, the cabin is filled with a burst of “pink” noise, a broad spectrum of frequencies in which the power is inversely proportional to the frequency. That triggers the so-called acoustic reflex, in which the stapedius muscle—the smallest muscle in the body (remember that for Trivial Pursuit)—contracts, stiffening the chain of tiny bones in the middle ear and bracing the eardrum.
The pink noise is around 80 decibels, about equal to that of a dishwasher and completely safe. A crashing car puts out around 145 dB, high enough to damage hearing, at least some of the time. Worse still—and this part is not emphasized by Mercedes-Benz or any other carmaker—is the noise created by the near-instantaneous deployment of the airbag: around 165 dB. It’s estimated that 17 percent of the people who are exposed to airbag deployment suffer some degree of permanent hearing loss.
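To see why the airbag’s 165 dB so thoroughly dwarfs the 80 dB warning burst, it helps to remember that decibels are logarithmic. Here is a minimal sketch of the arithmetic (the `pressure_ratio` helper is ours, purely for illustration, and not tied to any carmaker’s implementation):

```python
# Sound pressure level relates to pressure amplitude p by
# SPL = 20 * log10(p / p_ref), so every +20 dB means a tenfold
# increase in pressure amplitude.
def pressure_ratio(db_a, db_b):
    """How many times larger the pressure amplitude at db_a is than at db_b."""
    return 10 ** ((db_a - db_b) / 20)

print(round(pressure_ratio(165, 80)))   # airbag vs. pink-noise burst: ~17,783x
print(round(pressure_ratio(145, 80)))   # crash noise vs. burst: ~1,778x
```

In pressure terms, the airbag’s bang is more than ten thousand times the amplitude of the warning burst, which is why triggering the acoustic reflex a few milliseconds early matters.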
That’s why it’s important to put the various safety features into action in the right order. First come the totally safe features, like emergency braking, seatbelt pretensioning, and Pre-Safe Sound. The airbags come only when necessary, during the crash itself.
Pre-safe noise took a long time to go from a gleam in an inventor’s eye to a commercial offering. A U.S. patent for the idea was filed in 1997 by one Armin Kausch, an employee of a subsidiary of TRW Automotive. That patent application cited earlier work going back as far as 1960.
It takes time to get the bugs out. Remember that whenever you read that cars without steering wheels will be plying our roads before your kids are old enough to get a driver’s license.
Last year, a self-driving car failed about every 3 hours in California, according to figures filed with the state’s Department of Motor Vehicles.
Every January, carmakers testing self-driving cars in California have to detail how many times their vehicles malfunctioned during the preceding year. These so-called disengagement reports detail every time a human safety driver had to quickly take control of their car, either due to hardware or software failure or because the driver suspected a problem.
The reports—detailing 2,578 failures among the nine companies that carried out road-testing in 2016—give a unique glimpse into how much testing the different companies are doing, where they are doing it, and what is going wrong. None of this year’s disengagements resulted in an accident.
Alphabet’s spin-out company Waymo [Google in chart above] still has by far the biggest testing program—its 635,868 miles of testing accounted for over 95 percent of all miles driven by self-driving cars in California in 2016. Waymo’s fleet of 60 self-driving cars reported a total of 124 disengagements, 51 of them due to software problems. That represents a sharp reduction in disengagements from the previous year, from 0.8 disengagements for every 1,000 miles of autonomous driving down to just 0.2.
Bosch, by contrast, reported over 1,400 disengagements while covering just 983 miles in three vehicles—equivalent to a hefty 1,467 disengagements for every 1,000 miles of driving. But that doesn’t mean Waymo’s cars are 8,000 times safer than Bosch’s, as every company has its own way of counting disengagements.
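The per-1,000-mile arithmetic behind that comparison is straightforward to reproduce from the figures reported above (the `per_1000_miles` helper is ours, for illustration only):

```python
# Back-of-the-envelope check on the 2016 California DMV figures
# cited in this article.
def per_1000_miles(disengagements, miles):
    return disengagements / miles * 1000.0

waymo_rate = per_1000_miles(124, 635_868)   # about 0.195 per 1,000 miles
bosch_rate = 1467.0                          # Bosch's equivalent reported rate

print(round(waymo_rate, 2))                  # rounds to the 0.2 quoted above
print(round(bosch_rate / waymo_rate))        # ~7,500 -- the rough "8,000x" gap
```

The ratio is closer to 7,500 than 8,000, but given how differently the two companies count disengagements, the order of magnitude is the only meaningful takeaway.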
For instance, Waymo does not count every single time the driver grabs the wheel to take over from the robotic chauffeur, which it admits happens many thousands of times annually. Instead, the company later simulates what would have happened if the human had not jumped in—and only reports disengagements where the car would have done something unsafe. It calculates that if its drivers had taken no action at all, nine disengagements in 2016 would have led to the car hitting an obstacle or another road user. That is down from 13 the previous year, despite covering 50 percent more miles.
“Waymo’s report would seem to suggest substantial improvement,” says Bryant Walker-Smith, a professor at the University of South Carolina. “But I’d want to know whether Waymo’s system could handle any of the system-initiated disengagements by achieving a minimal risk condition, say by pulling off to the side of the road, rather than immediately disengaging.”
The other problem with comparing disengagement rates is that different companies are using California’s testing permits for different things. Only Waymo and Cruise Automation, now owned by General Motors, have large, general-purpose testing programs. In its first year on the state’s roads, Cruise’s two dozen cars went from covering less than 5 miles in June 2015 to over 2,000 miles in September 2016. Its disengagement rate also plummeted over the same period, from over 500 to under 3 per 1,000 miles.
No other company drove more than 5,000 miles in 2016, and some of the world’s biggest carmakers, including BMW, Ford, and Mercedes-Benz, covered less than 1,000. “The low number of miles, combined with high number of disengagements, suggests that R&D engineers are occasionally using a local vehicle to get real-world performance data useful for a specific project,” says Walker-Smith.
Despite holding testing permits, Honda and Volkswagen drove no autonomous miles at all last year on public roads in California, preferring to test on private courses or out of state.
Once again, the most mysterious disengagement report comes from Tesla. In 2015, the company reported no disengagements at all, suggesting either that it carried out no public testing in California or that its cars were flawless. This year, its report admits to 182 disengagements in 550 miles of autonomous driving during 2016.
However, all but a handful of those disengagements happened in just four cars over the course of a single long weekend in October, possibly during the filming of a promotional video. Tesla does much of its testing out of state and on test tracks, although it also benefits from receiving millions of miles of road data from thousands of AutoPilot-equipped vehicles owned by its customers.
Companies that began autonomous vehicle testing in California in 2016, including startups Zoox, Drive.ai, Faraday Future, and NextEV, will not have to report their disengagement data until this time next year. Uber, which abandoned its pilot program of self-driving Volvos in San Francisco rather than apply for a testing permit, is currently testing in Arizona and Pennsylvania, states that do not require companies to report disengagements or failures.
Daimler will build—and operate—a fleet of self-driving Mercedes-Benz cars within the Uber network.
That role makes this deal the first of its kind: Daimler would not only do the design and manufacturing work but also assume all the costs associated with the fleet. Contrast that with the agreement between Uber and Volvo, in which the two companies collaborated on building the self-driving XC90 SUV, which Uber is now testing—and presumably will operate on its own. Lyft, a rival ride-hailing service, has a similar collaboration with General Motors.
The Mercedes robocars are to hit the roads “in the coming years,” Travis Kalanick, founder and chief executive of Uber, wrote on his blog. In other words, not for a while.
The arrangement addresses a problem in robotaxi service that’s often swept under the rug: When automation replaces the Uber driver, who will shoulder the many costs that the driver used to bear?
“Cars, whether autonomous or not, cost a lot of money which has to be paid to the manufacturer before they go into a fleet,” veteran auto industry analyst Mary Anne Keller wrote in September, on LinkedIn. “A small fleet of 100,000 vehicles at US $40,000 per unit amounts to $4 billion that would have to be paid by some entity.”
And the costs don’t end with the purchase of the car. “There is a myth among some tech geeks that electric cars don’t need service,” Keller continued. “Tesla has demonstrated that in fact they need maintenance. Despite all the sensors and millions of lines of code and a large battery, they still have wheels and tires, brakes and other mechanical parts, and fluids that require replacement or adjustment.”
Besides normal wear and tear, rental cars get a lot of abuse from their customers; that’s why rental car services check the car before and after you’ve rented it and charge for every new ding in the metal and every new stain on the upholstery. Today the Uber owner-driver takes care of such chores; in the day of the robotaxi that headache will devolve to the fleet operator—in this model, Daimler.
Already cars are coming with inward-facing sensors designed to check on the driver and make sure he’s got his eyes on the road. In the future, robotaxis may also be checking on that jumbo Slurpee you’re clutching—and noting down every sticky drop that spills on the carpet.
Robotic cars are great at monitoring other cars, and they’re getting better at noticing pedestrians, squirrels, and birds. The main challenge, though, is posed by the lightest, quietest, swerviest vehicles on the road.
Nuno Vasconcelos, a visual computing expert at the University of California, San Diego, says bikes pose a complex detection problem because they are relatively small, fast, and heterogeneous. “A car is basically a big block of stuff. A bicycle has much less mass and also there can be more variation in appearance — there are more shapes and colors and people hang stuff on them.”
That’s why the detection rate for cars has outstripped that for bicycles in recent years. Most of the improvement has come from machine-learning techniques in which systems train themselves by studying thousands of images in which known objects are labeled. One reason bikes lag is that most of that training has concentrated on images featuring cars, with far fewer bikes.
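As an illustration of how developers counter this kind of dataset imbalance, one common remedy is to weight each class’s contribution to the training loss inversely to its frequency. The counts below are invented for the sketch, not drawn from any real training set:

```python
# Hypothetical example: give rare classes (bicycles) more weight than
# common ones (cars) using inverse-frequency weighting.
counts = {"car": 50_000, "bicycle": 2_000}   # made-up label counts
total = sum(counts.values())

# weight = total / (num_classes * class_count), a standard balancing scheme
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}

print(weights)   # each bicycle example counts ~25x as much as a car example
```

Weighting alone doesn’t conjure up the visual variety of real bikes, though, which is why the proprietary datasets mentioned below matter so much.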
Consider the Deep3DBox algorithm presented recently by researchers at George Mason University and stealth-mode robotic taxi developer Zoox, based in Menlo Park, Calif. On an industry-recognized benchmark test, which challenges vision systems with 2D road images, Deep3DBox identifies 89 percent of cars. Sub-70-percent car-spotting scores prevailed just a few years ago.
Deep3DBox further excels at a tougher task: predicting which way vehicles are facing and inferring a 3D box around each object spotted on a 2D image. “Deep learning is typically used for just detecting pixel patterns. We figured out an effective way to use the same techniques to estimate geometrical quantities,” explains Deep3DBox contributor Jana Košecká, a computer scientist at George Mason University in Fairfax, Virginia.
However, when it comes to spotting and orienting bikes and bicyclists, performance drops significantly. Deep3DBox is among the best, yet it spots only 74 percent of bikes in the benchmarking test. And though it can orient over 88 percent of the cars in the test images, it scores just 59 percent for the bikes.
Košecká says commercial systems are delivering better results as developers gather massive proprietary datasets of road images with which to train their systems. And she says most demonstration vehicles augment their visual processing with laser-scanning (i.e., lidar) imagery and radar sensing, which help recognize bikes and their relative position even if they can’t help determine their orientation.
Further strides, meanwhile, are coming via high-definition maps such as Israel-based Mobileye’s Road Experience Management system. These maps offer computer vision algorithms a head start in identifying bikes, which stand out as anomalies from pre-recorded street views. Ford Motor says “highly detailed 3D maps” are at the core of the 70 self-driving test cars that it plans to have driving on roads this year.
Vasconcelos doubts that today’s sensing and automation technology is good enough to replace human drivers, but he believes it can already help human drivers avoid accidents. Automated cyclist detection is seeing its first commercial applications in automated emergency braking (AEB) systems for conventional vehicles, which are expanding to respond to pedestrians and cyclists in addition to cars.
Volvo began offering the first cyclist-aware AEB in 2013, crunching camera and radar data to predict potential collisions; it is rolling out similar tech for European buses this year. More automakers are expected to follow suit as European auto safety regulators begin scoring AEB systems for cyclist detection next year.
That said, AEB systems still suffer from a severe limitation that points to the next grand challenge that AV developers are struggling with: predicting where moving objects will go. Squeezing more value from cyclist-AEB systems will be an especially tall order, says Olaf Op den Camp, a senior consultant at the Dutch Organization for Applied Scientific Research (TNO). Op den Camp, who led the design of Europe’s cyclist-AEB benchmarking test, says that it’s because cyclists’ movements are especially hard to predict.
Košecká agrees: “Bicycles are much less predictable than cars because it’s easier for them to make sudden turns or jump out of nowhere.”
That means it may be a while before cyclists escape the threat of human error, which contributes to 94 percent of traffic fatalities, according to U.S. regulators. “Everybody who bikes is excited about the promise of eliminating that,” says Brian Wiedenmeier, executive director of the San Francisco Bicycle Coalition. But he says it is right to wait for automation technology to mature.
Tesla CEO Elon Musk says that the latest software for his company’s AutoPilot self-driving package will better the 40 percent reduction in crashes that earlier versions have already shown.
Since October, Tesla has been offering new hardware, including sensors, as well as new software releases to exploit that hardware’s capabilities. Last week it began uploading what it calls an enhanced mode of the software, and now it is making that software operational in the cars. The company didn’t speculate on how much today’s release might improve safety, but Musk said that the final effect would be massive.
“Our target is a 90 percent reduction [in crashes] as the software matures,” he posted on Twitter.
Not only did the U.S. road safety regulator clear Tesla’s Autopilot of any responsibility in causing a fatal crash in May 2016, the agency even credited the system with reducing crashes by 40 percent overall.
In the report, released yesterday [pdf], the National Highway Traffic Safety Administration said that the crash—a side collision at an intersection between a Tesla Model S and a truck—was due to driver distraction that lasted for at least 7 seconds. It concluded that the automatic emergency braking (AEB) system was not to blame because it hadn’t been designed for such a scenario.
“AEB systems used in the automotive industry through Model Year 2016 are rear-end collision avoidance technologies that are not designed to reliably perform in all crash modes, including crossing path collisions,” the report said.
More important, NHTSA said that Autosteer, another element of Tesla’s driver-assistance package, had reduced crashes by nearly 40 percent. Before the installation of Autosteer, there were 1.3 crashes involving airbag deployment for every million miles driven; after installation, that figure fell to 0.8.
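The “nearly 40 percent” figure falls straight out of those two rates:

```python
# NHTSA's before/after airbag-deployment crash rates, per million miles.
before = 1.3   # pre-Autosteer
after = 0.8    # post-Autosteer

reduction = (before - after) / before
print(f"{reduction:.0%}")   # 38%, which gets reported as "nearly 40 percent"
```

Worth noting: this is a fleet-wide observational comparison, not a controlled experiment, so the figure mixes Autosteer’s effect with whatever else changed in the fleet over the same period.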
Many have criticized Autopilot not so much for its technical prowess as for its ambitious-sounding name, which might be taken as an encouragement to rely on the system to drive the car without human supervision. NHTSA points out that Tesla has addressed the problem by giving drivers visual cues to test their attentiveness: miss one cue too many and you’ll “strike out,” turning off Autopilot until you stop the car.