Mobileye, the Israeli car-automation company that came onto the self-driving-car scene as a sort of anti-Google, is now describing the future in terms rather closer to Google's than it once did.
Speaking Friday at a conference organized by Goldman Sachs (which owned a chunk of Mobileye’s shares when the company went public in 2014), Amnon Shashua, Mobileye’s founder and chief technical officer, placed heavy emphasis on mapping, something Google has done all along. And now Shashua is predicting completely hands-free driving—if only on the highway—by 2021.
Mobileye had always emphasized incremental steps, such as active cruise control and emergency braking, collectively called advanced driver assistance systems (ADAS). It was Google that proposed to skip all half measures and get right to full-bore self-driving cars.
But Google is also taking a step back from its original position. Chris Urmson, Google’s robocar chief, used to say he expected its car—with no steering wheel, accelerator, or brake pedal—to go on sale in time for his own teenage son to avoid ever having to take a driver’s license test. In March, though, he said full autonomy might trickle into various driving environments over a three-to-30-year period.
Still, Mobileye’s approach differs from Google's in a number of ways.
First, Google uses an expensive array of sensors costing tens of thousands of dollars per car. Mobileye got its start with a system built around a single camera, at a cost to manufacturers of less than US $1,000. The prospect of a relatively simple and super-cheap robocar system explains why Mobileye wowed Wall Street with sky-high market valuations.
Second, Google has professional drivers test a relatively small fleet of experimental cars, while Mobileye and its automotive collaborators—notably Tesla—have gotten their data from customers, through crowdsourcing. Shashua says that the collaborators can also include mapping and navigation companies, such as TomTom, in the Netherlands, and Here, in Germany.
Now Google appears to be moving ever more in the direction of the "deep learning" approach to teaching cars to drive themselves. This approach, in which deep neural networks train themselves into expertise with little or no human intervention, is what powered Google's AlphaGo program to its recent victory over a leading master of the game of Go.
AlphaGo first learned to imitate the play of human masters, then sharpened that play through trial and error against itself. At no point did a human being step in and tell the machine to pay attention to, say, points near the edge of the board. The same researchers earlier trained machines to play Atari games, again without giving them any hints: the programs had to work out the rules as they went along.
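The trial-and-error recipe behind these systems is reinforcement learning: the program acts, observes a reward, and updates its own estimates of which actions pay off. As a loose illustration only—a toy sketch, nothing like Google's actual systems—here is a minimal tabular Q-learning agent that masters a five-state "corridor" task purely from reward, with no hand-coded hints about the goal:

```python
import random

# Toy "corridor" task: states 0..4, start at 0, reward only at state 4.
# The agent is told nothing about the task; it must discover through
# trial and error that moving right pays off.
N_STATES = 5
ACTIONS = [0, 1]  # 0 = left, 1 = right

def step(state, action):
    """Environment dynamics: move left or right; reward 1.0 at the far end."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge toward reward + discounted future value.
            target = reward + gamma * max(q[nxt])
            q[state][action] += alpha * (target - q[state][action])
            state = nxt
    return q

q = train()
# After training, the greedy policy in every non-terminal state is "move right".
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
```

Nobody tells the agent that state 4 is the goal; the preference for moving right emerges entirely from the reward signal, which is the point Shashua is pushing back on when the "task" is as open-ended as driving.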
But Shashua has poured a little cold water on the idea of cars being self-taught. Deep learning does well on games and other well-defined tasks, like recognizing images in a database, or translating from one language to another—two of Google’s other specialties. But driving, taken as a whole, says Shashua, is not so well defined.
“What makes both driving assist and autonomous driving real is the ability to find a needle in a haystack,” he said. “There are many rare events that need to be covered to reach 99.999% capability. Building for demonstration is manageable, but building something that will reach production-worthiness requires this remaining 10 percent, and that makes all the difference.”
Mobileye plans to continue using human experts to break self-driving down into parts that it can automate—an expert-system approach. “We have 600 people annotating images at Mobileye; at the end of this year, it will be 1,000,” Shashua said.
Sure, Shashua was answering his company’s critics, who maintain that it got the direction of the industry wrong when it bet on simple camera systems and incremental automation. But the anti-Google is suddenly looking like less of an outlier than it once did.