Drive.ai is the 13th company to be granted a license to test autonomous vehicles on public roads in California. This is exciting news, especially because we had no idea that Drive.ai even existed until just last week. The company has been in stealth mode for the past year, working on applying deep learning techniques to self-driving cars. We spoke with two of Drive.ai's co-founders, Sameep Tandon and Carol Reiley, about why their approach to self-driving cars is going to bring us vehicle autonomy that's more efficient, more adaptable, more reliable, and safer than ever.
Drive.ai came straight out of Stanford's AI Lab about a year ago. Its core team is made up of experts with a wealth of experience developing deep learning systems in all kinds of different domains, including natural language processing, computer vision, and (most recently) autonomous driving. “This team helped pioneer how to scale deep learning, which is one of the reasons why deep learning has been successful as of late,” says Tandon, the company’s CEO.
After working for several years on the problem at Stanford, these researchers felt that a startup would be the best way to commercialize their ideas and technology and turn them into a product. So they decided to put their PhDs on hold and started Drive.ai.
“Drive.ai is a deep learning company,” Reiley says. “We're solving the problem of a self-driving car by using deep learning for the full autonomous integrated driving stack—from perception, to motion planning, to controls—as opposed to just bits and pieces like other companies have been using for autonomy. We’re using an integrated architecture to create a more seamless approach.”
What is deep learning? And why should we care that it's being applied to autonomous driving? Says Tandon:
When you're developing a self-driving car, the hard part is handling the edge cases. These include weather conditions like rain or snow, for example. Right now, people program in specific rules to get this to work. The deep learning approach instead learns what to do by fundamentally understanding the data.
“Generally, before deep learning, doing machine learning was all about feature selection,” Reiley adds. “It was a very crude way of doing it, and it was difficult and time consuming to get these algorithms to recognize anything.” Deep learning, she says, is much more analogous to the way humans learn. “You show an algorithm good and bad examples, and it learns to generalize. For a dynamic environment that is extremely complex, we believe this is the best way to solve the problem.”
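To make the "show it examples and it learns" idea concrete, here is a minimal toy sketch (ours, not Drive.ai's): a single logistic neuron trained by gradient descent to separate two clusters of 2-D points. Real self-driving stacks use deep networks over camera and lidar data; the point here is only that the decision rule is learned from labeled examples rather than hand-coded.

```python
import math
import random

random.seed(0)

# Labeled "good and bad" examples: class 1 clusters near (2, 2),
# class 0 clusters near (-2, -2).
data = [([random.gauss(2, 0.5), random.gauss(2, 0.5)], 1) for _ in range(50)]
data += [([random.gauss(-2, 0.5), random.gauss(-2, 0.5)], 0) for _ in range(50)]

w = [0.0, 0.0]   # weights, learned from data
b = 0.0          # bias, learned from data
lr = 0.1         # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid squashes z into (0, 1)

# Stochastic gradient descent on the log-loss.
for _ in range(100):
    for x, y in data:
        err = predict(x) - y            # gradient of log-loss w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# Nobody wrote an "if x > 0 then class 1" rule; the boundary was learned.
accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

A deep network stacks many such learned units so that the features themselves are learned from raw data, which is exactly the step beyond hand-picked features that Reiley describes.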
The first step for Drive.ai is to get a vehicle out on the road and start collecting data that can be used to build up the experience of their algorithms. “It’s not about the number of hours or miles of data collected,” says Tandon. “It comes down to having the right type of experiences and data augmentation to train the system—which means having a team that knows what to go after to make the system work in a car. This move, from simulation environments and closed courses onto public roads, is a big step for our company and we take that responsibility very seriously.”
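"Data augmentation," as Tandon uses the term, means stretching collected data into more varied training examples. A hypothetical sketch of the idea (not Drive.ai's pipeline) on a tiny grayscale "image" stored as nested lists of 0–255 pixel values:

```python
import random

random.seed(1)

# A 3x3 stand-in for a camera frame; real pipelines operate on full
# frames and use far richer transforms (rain, glare, occlusion, etc.).
image = [[10, 50, 90],
         [20, 60, 100],
         [30, 70, 110]]

def flip_horizontal(img):
    # Mirror each row: a scene seen from the "other side" of the road.
    return [list(reversed(row)) for row in img]

def jitter_brightness(img, max_delta=40):
    # Shift all pixels by a random amount to mimic lighting changes,
    # clamping to the valid 0-255 range.
    delta = random.randint(-max_delta, max_delta)
    return [[min(255, max(0, px + delta)) for px in row] for row in img]

augmented = [flip_horizontal(image), jitter_brightness(image)]
print(augmented[0][0])   # first row of the flipped image
```

One collected frame becomes several training examples, which is how the "right type of experiences" can be multiplied without driving more miles.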
As far as what the actual vehicle is going to look like, Drive.ai isn't quite ready to comment. (But if you see it cruising around Mountain View at some point, send us a pic!)
Software is certainly going to be the engineers’ focus (as it's arguably the most difficult and important part of any autonomous driving system). But Tandon and Reiley did tell us that, in their opinion, a lot of the sensing hardware that's available today is underutilized. “If you're driving a car with the radio on, the only input sensors that you're relying on to control your car are your two forward-facing eyes,” Reiley notes. “It's kind of scary that we trust ourselves today to drive around like that, controlling the car based only on that information.”
We should also mention that Drive.ai has recently closed US $12 million in Series A funding, and that the team is looking to hire a bunch of people. While Drive.ai isn't quite ready to talk about what their end goal is, we're quite curious to see some tangible examples of deep learning’s benefits to autonomous cars over other approaches.
Evan Ackerman is the senior writer for IEEE Spectrum's award-winning robotics blog, Automaton. Since 2007, he has written over 6,000 articles on robotics and emerging technology, covering conferences and events on every single continent except Antarctica (although he remains optimistic). In addition to Spectrum, Evan's work has appeared in a variety of other online publications including Gizmodo and Slate, and you may have heard him on NPR's Science Friday or the BBC World Service if you were listening at just the right time. Evan has an undergraduate degree in Martian geology, which he almost never gets to use, and still wants to be an astronaut when he grows up. In his spare time, he enjoys scuba diving, rehabilitating injured raptors, and playing bagpipes excellently.