Drive.ai Solves Autonomous Cars' Communication Problem

Safe driving is only one part of the autonomous car challenge. Interaction with humans is another


Photo-illustration: Drive.ai

Understandably, most people working on autonomous vehicles are very focused on things like getting the cars to avoid running into stuff. And in general, this is something that autonomous cars have gotten very good at—especially on highways and in other areas where they don't have to worry about unpredictable humans running around and complicating their decision making.

Drive.ai is one of a small handful of startups pushing for rapid commercialization of autonomous driving technology. It came out of stealth mode back in April, and IEEE Spectrum wrote about its top-to-bottom deep learning approach to the problem. Today, Drive.ai is “officially emerging from stealth” (whatever that means), and we've learned a bit more about what the company is working on. Drive.ai is touting a retrofit kit for business fleets that can imbue existing vehicles with full autonomy. Uniquely, the kit also includes an HRI (human-robot interaction) component in the form of a big display that lets the car communicate directly with people. At first glance, something like this may seem like a novelty, but it's a feature that autonomous cars desperately need.

To understand why giving autonomous cars the ability to communicate like this is so important, consider what happens when you're trying to use an uncontrolled crosswalk as a pedestrian. An oncoming car might slow for you, but typically, before you cross in front of it you make eye contact with the driver to make sure that they've seen you and will stop. Now, imagine a driverless car in the same situation. With no human in control, how would you know whether the car has: a) detected you at all; b) understood what you want to do; and c) decided that it's going to stop for you?
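To make that handshake concrete, here is a minimal sketch (in Python) of the decision a driverless car would have to make explicit at a crosswalk, and the signal it would need to send back. Every class, function, and signal name here is hypothetical; none of it comes from Drive.ai's software.

```python
# A minimal, hypothetical sketch of the handshake a driverless car would need to make
# explicit at an uncontrolled crosswalk: detect the pedestrian, infer their intent,
# decide whether to yield, and signal that decision back. None of these names come
# from Drive.ai's software; they are illustrative only.

from dataclasses import dataclass
from enum import Enum, auto


class Signal(Enum):
    NONE = auto()
    WAITING_FOR_YOU = auto()   # e.g., text shown on an external display
    PLEASE_WAIT = auto()


@dataclass
class PedestrianTrack:
    detected: bool        # (a) has the perception stack seen the pedestrian at all?
    wants_to_cross: bool  # (b) does the intent model think they want to cross?


def yield_decision(track: PedestrianTrack, can_stop_safely: bool) -> tuple[bool, Signal]:
    """Return (will_stop, external_signal) for one pedestrian at a crosswalk."""
    if not track.detected:
        return False, Signal.NONE
    if track.wants_to_cross and can_stop_safely:
        # (c) the car decides to stop, and tells the pedestrian so.
        return True, Signal.WAITING_FOR_YOU
    return False, Signal.PLEASE_WAIT


if __name__ == "__main__":
    track = PedestrianTrack(detected=True, wants_to_cross=True)
    print(yield_decision(track, can_stop_safely=True))
```

A human driver answers that last question with eye contact and a wave; a driverless car needs some equivalent channel, which is exactly what an external display provides.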

Communication like this happens far more frequently than you probably realize, whether it's involving pedestrians, cyclists, or other drivers. It also probably happens far less frequently than it actually should. (I consider myself an expert on this subject, since I drove from NYC to Washington, D.C., last weekend.) Not only will autonomous cars actually use their turn signals, but with the ability to communicate more complex concepts, they could even politely ask to merge, provide useful information like “slowing for accident ahead,” or even apologize if they cut you off, which they probably won't ever do.

HRI needs to be a focus for the first generation of commercial autonomous vehicles because there's going to be a significant transitional period between mostly human-driven cars and mostly autonomous cars. Once roads are full of autonomous vehicles, and vehicle-to-vehicle communication is handled wirelessly, it's not going to be as big of an issue.

At times, it can feel like most self-driving car companies are hyperfocused on that end goal. And because the transitional period is going to be messy, the common solution is to either ignore it (“we'll deal with it later”) or try to circumvent the problem. Going back to the crosswalk example, it's the difference between making sure your autonomous car doesn't hit humans in crosswalks and actually helping humans cross the street safely.

Image: Drive.ai

For more details on applying HRI techniques to driverless cars, as well as more on Drive.ai's full-stack deep learning approach to autonomy, we spoke with co-founder and president Carol Reiley:

IEEE Spectrum: How is Drive.ai's approach to self-driving cars unique?

Carol Reiley: I'm looking at self-driving cars as the first social robot most people are going to interact with. It's not a humanoid, but it is a smart machine that's going to be enabled through artificial intelligence. 

[We had to ask ourselves], once you solve the problem of getting from point A to point B, how do these self-driving cars interact with all of the other players on the road? What does that relationship look like, and what is the non-verbal dance that happens at crosswalks, at intersections, or when you're trying to merge? When you replace the human behind the wheel, how does this car now emote? How does it communicate so that everyone feels safe and trusts it? We felt like this was a piece of the conversation that people aren't talking about.

Spectrum: When you talk about enabling a smarter robot through artificial intelligence, how has that evolved in the context of autonomous driving? How is the AI that the cars in the 2007 DARPA Urban Challenge used different from the AI that autonomous cars are using now?

Reiley: There are lots of different layers to that question. We're building our company with deep learning from the ground up; the DARPA days were pre-deep learning. Sebastian [Thrun, who led Stanford's DARPA Grand Challenge team before developing autonomous cars at Google] had said “computer vision isn't going to work, and I'm betting on HD maps and lidar,” and that's how Google's autonomous car program was built: on the assumption that computer vision wouldn't work.

In 2012, Google Brain revolutionized artificial intelligence for computer vision and perception, and that industry is all powered by deep learning now. At that point, Google had already invested years into a non-deep-learning approach, and they're switching it out module by module, but it's hard to fundamentally change the approach. That's one of the advantages of our startup: we're building a deep learning self-driving car company from the ground up. And we're using it not just for perception, but for decision making as well. It's a more end-to-end approach. That's one perspective of how AI has changed since the DARPA Grand Challenge days.
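For readers who want a concrete picture of what “end-to-end” means here, the sketch below shows a toy network that maps a camera frame directly to a driving command, rather than routing through separate hand-built perception and planning modules. The architecture, layer sizes, and outputs are our own illustration in PyTorch, not Drive.ai's actual model.

```python
# A toy "end-to-end" driving network, sketched in PyTorch for illustration only:
# one camera frame goes in, a steering angle and target speed come out, with no
# separate hand-built perception or planning modules in between. The architecture
# and output choices here are ours, not Drive.ai's.

import torch
import torch.nn as nn


class EndToEndDriver(nn.Module):
    """Map an RGB camera frame directly to [steering angle, target speed]."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse the spatial dimensions
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),          # [steering angle, target speed]
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frame))


if __name__ == "__main__":
    model = EndToEndDriver()
    dummy_frame = torch.randn(1, 3, 120, 160)  # one low-resolution camera frame
    print(model(dummy_frame).shape)            # torch.Size([1, 2])
```

In practice, a network like this would typically be trained on logged human driving (frames paired with steering and speed); the toy version just shows the shape of the idea.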

Spectrum: What kind of sensors do your cars use? How do you feel about cameras as opposed to lidar?

Reiley: At the very front of our deep learning pipeline are questions like: What are the right sensors to put on your car? How much data do I collect? How many miles do I need to drive? On the deep learning side of that, we're taking the approach that we want to push low-cost sensors farther than they've ever been. One incredibly inexpensive sensor is the camera, and with deep learning, you're able to contextualize images. We have other sensors for redundancy, but we're really pushing cameras a lot harder than most other teams, and deep learning enables that.

Certain groups use lidar front and center, or think that you can't solve this problem without HD maps. Humans drive around fine locally, without maps in their heads, and they basically have the equivalent of a [stereo] camera. Our group welcomes any low-cost sensor; if Quanergy can make a $100 lidar available, that would be terrific, and we would use it. We're not trying to show off what we can do with cameras; we're just trying to build safe, affordable systems that people can actually use.
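Here's one way to read that “cameras first, other sensors for redundancy” stance in code: trust the camera-based estimate, but if a lidar (or any other range sensor) happens to be on the vehicle, use it as a cross-check. The function names, numbers, and thresholds below are hypothetical, chosen only to illustrate the idea.

```python
# A hedged reading of "cameras first, other sensors for redundancy": trust the
# camera-based distance estimate, but if a lidar (or other range sensor) is on the
# vehicle, use it as a cross-check and fall back to the more cautious reading when
# the two disagree. Function names, numbers, and thresholds are hypothetical.

from typing import Optional


def fuse_distance(camera_estimate_m: float,
                  lidar_estimate_m: Optional[float] = None,
                  max_disagreement_m: float = 2.0) -> float:
    """Return a conservative distance estimate (meters) to the obstacle ahead."""
    if lidar_estimate_m is None:
        # Camera-only mode: the deep-learned estimate stands on its own.
        return camera_estimate_m
    if abs(camera_estimate_m - lidar_estimate_m) > max_disagreement_m:
        # The sensors disagree: take the closer (more cautious) reading.
        return min(camera_estimate_m, lidar_estimate_m)
    return (camera_estimate_m + lidar_estimate_m) / 2.0


print(fuse_distance(14.2))          # camera only -> 14.2
print(fuse_distance(14.2, 11.5))    # disagreement -> conservative 11.5
```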

Spectrum: Why is HRI such an important consideration for autonomous cars?

Reiley: When a human drives, you look for all these social cues. For instance, you look at the car in front of you, and if its wheels are turned to the right, you can infer its next motion: it's probably going to turn to the right. There are all these other subtle cues that humans look for that help us navigate in the world, and they can make [our cars] seem more socially intelligent, because you can start anticipating motions before they happen.

We're really pushing this social interaction aspect of the self-driving car. Even nonverbal human-to-human communication can be very confusing at times. When you remove the human, these cars need to be able to intelligently navigate in the world, be socially accepted by all the other humans on the road, and do that very safely. So, what happens at a four-way intersection, between cars and pedestrians? We're looking at how our car expresses itself, and we do it through LED lights, R2-D2-like sounds, and different ways that our car moves to indicate its intentions. We're trying to think about how we get our cars to communicate with everyone else.

What's interesting about driving is that it's so dynamic, there are so many humans around, and humans are indecisive. A self-driving car has to make real-time decisions, and it needs to be very transparent when it switches modes, so that it doesn't just seem unstable. How do we indicate to the outside world that this car is autonomous, and how do we indicate what our intentions are?
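To make both halves of that answer concrete (reading other road users' cues, and expressing the car's own intentions), here is a hypothetical sketch. The cues, thresholds, intentions, messages, and sound names are all invented for illustration; they are not taken from Drive.ai.

```python
# Two sides of the social interaction Reiley describes, sketched hypothetically:
# reading other road users' cues (e.g., wheel angle) to anticipate their motion,
# and broadcasting the car's own intention through an external display and a sound
# cue. Every name, message, and threshold below is invented for illustration; none
# of it is Drive.ai's actual interface.

from enum import Enum


def predict_next_motion(front_wheel_angle_deg: float, brake_lights_on: bool) -> str:
    """Guess what the car ahead will do from two visible cues (a hand-written rule;
    a real system would learn this mapping from data)."""
    if brake_lights_on:
        return "slowing or stopping"
    if front_wheel_angle_deg > 10.0:    # wheels visibly turned to the right
        return "turning right"
    if front_wheel_angle_deg < -10.0:   # wheels visibly turned to the left
        return "turning left"
    return "continuing straight"


class Intention(Enum):
    YIELDING_TO_PEDESTRIAN = "yielding"
    WAITING_AT_INTERSECTION = "waiting"
    MERGING = "merging"
    PROCEEDING = "proceeding"


# intention -> (message for the external display, optional sound cue)
HRI_OUTPUTS = {
    Intention.YIELDING_TO_PEDESTRIAN: ("WAITING FOR YOU TO CROSS", "soft_chime"),
    Intention.WAITING_AT_INTERSECTION: ("WAITING: YOUR TURN", "soft_chime"),
    Intention.MERGING: ("MERGING LEFT, THANK YOU", "short_beep"),
    Intention.PROCEEDING: ("AUTONOMOUS MODE", None),
}


def broadcast(intention: Intention, autonomous: bool) -> None:
    """Push the car's current intention to the external display and speaker,
    making mode switches explicit to everyone outside."""
    message, sound = HRI_OUTPUTS[intention]
    prefix = "[SELF-DRIVING] " if autonomous else "[DRIVER IN CONTROL] "
    print(prefix + message)
    if sound is not None:
        print(f"(playing sound cue: {sound})")


print(predict_next_motion(front_wheel_angle_deg=15.0, brake_lights_on=False))
broadcast(Intention.YIELDING_TO_PEDESTRIAN, autonomous=True)
```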

Spectrum: Does this emphasis on HRI imply that the actual driving part of vehicle autonomy is a (mostly) solved problem?

Reiley: I feel like most of the industry is focused on the mechanics of driving. This is not to suggest that HRI is completely separate from that; I see it as highly coupled, and something that needs to be developed in parallel as opposed to in series. This is not just a robot in a lab. There are so many human-related problems that need to be considered. I think that the auto industry takes a modular approach to things, but self-driving cars are not a modular problem: they're a software-based, holistic thing, and you have to step back and look at the big picture.

Spectrum: What is Drive.ai's plan from here?

Reiley: We're not building cars; we're building retrofit kits for businesses: select partners that are interested in the delivery of goods or the delivery of people. Existing vehicles come into the Drive.ai factory, we add the roof rack, which holds the sensors, the HRI component, and the software, and we work with these partners to drive on fixed routes.

We see this as a safe, logical first step for self-driving cars. I think global deployment of autonomous vehicles is going to cause mass chaos. I don't think people are thinking about humans in the loop at all right now. Even if we solve autonomous cars, the bigger problem is really humans. Humans are going to mess everything up, and you really have to design for humans using self-driving cars, and for how they're going to understand what's around them. We want to roll this technology out quickly, and also safely, and we see this route-based strategy with our partners as a first step. And we're definitely interested in doing a Level 4 [fully autonomous] approach, because Level 3 [where a human takes over sometimes] is also chaotic.

Image: Drive.ai

Drive.ai has its own fleet of cars that it’ll be testing around Mountain View, Calif. Because the company's vision involves vehicles that will “communicate transparently with us, have personality, and make us feel welcome and safe, even without a human driver,” we recommend that you find creative ways of pestering them just to see what they do, and then tell us about it.

Eventually, Drive.ai will expand from delivering goods into ridesharing and both public and private transit. The press release mentions some existing partnerships with major OEMs and automotive suppliers. And with $12 million in funding behind it, we wouldn't be surprised to see vehicles with big friendly screens politely driving around California delivering things within the next year or two.
