Drive.ai Launches Robot Car Pilot in Texas With a Focus on Humans

By keeping humans in the loop and incorporating a new visual signaling system, Drive.ai's self-driving car service aims to be both friendly and effective


Photo: Evan Ackerman/IEEE Spectrum

Self-driving vehicle startup Drive.ai has only been around since 2015, but has moved aggressively toward getting autonomous cars out into the world to do useful things safely and efficiently. Drive struck a partnership with Lyft last September to test its technology with the ride-sharing service in San Francisco, and this week, the company announced an on-demand self-driving car service in Frisco, Texas, just north of Dallas.

Starting in July, 10,000 local residents in a small area consisting mostly of office parks and restaurants will gradually receive access to Drive.ai's app, which they'll be able to use to hail themselves an autonomous car to drive them around (for free). A few fixed routes will connect specific locations. If everything goes well after six months, the company will add more routes in other areas.

Drive.ai is not the first self-driving car company to run a pilot, but there are a few things that make its effort particularly interesting. First is the introduction of hardware and software to allow autonomous vehicles to communicate with pedestrians and other drivers, a desperately needed capability that we haven't seen before. And second, Drive will implement a safety system with remote "tele-choice operators" to add a touch of humanity to tricky situations—and keep humans in the loop even after safety drivers are eventually removed from the vehicles.

We spent a very hot Monday in Frisco at Drive.ai's launch event, took a demo ride, and talked with as many engineers as we could to bring you the details on all the new stuff that Drive has been working on.

We've covered Drive.ai's approach to self-driving vehicles in the past; if you'd like to learn more about their full-stack deep-learning secret sauce, check out these articles:

Drive.ai Brings Deep Learning to Self-Driving Cars

How Drive.ai Is Mastering Autonomous Driving With Deep Learning

That said, it's been about a year since we checked in with the company, so we scheduled some time with Drive.ai CEO Sameep Tandon and co-founder Tao Wang to make sure we were all caught up.

IEEE Spectrum: Why did you decide to do a pilot here in Texas?

Sameep Tandon: At a state level, what we really liked about Texas is that it has a very clear regulatory stance, and that makes it really easy for us to then go work within the state. Frisco in particular has very strong public-private partnerships, and we also noticed that in this particular area, there's a real transportation need that we can address: taking people from work, to where they relax, and eventually to where they live. These short, microtransit types of rides are the perfect use case for autonomous vehicles.

Spectrum: What was it like going from testing vehicles in the Bay Area to testing vehicles in Texas?

Tao Wang: We've been working hard on building our deep learning and AI infrastructure, and it's that work that has allowed us to adapt to Frisco, Texas, so quickly. It's quite different from Mountain View, but the training infrastructure and the way we can map the area allow us to quickly transition into new environments. Some of the challenges are quite unique—we operate in office parks, with parking lots full of pedestrians walking around and cars pulling out. These are the types of scenarios we don't see every day in Mountain View.

Tandon: It took us about seven days from when the car left Mountain View to when we were driving autonomously in Frisco. For almost three of those days, the car was being driven down from Mountain View. It's a testament to the infrastructure and tools that we've been building up for several years now—in just a few days, we were able to go from cars in data collection mode to cars driving in autonomous mode.

Spectrum: How well has your system for collecting and annotating data been able to scale over the last year?

Tandon: The technology is continuing to advance; for the annotation tools and infrastructure, the speed has increased, and not only that, there are a lot more things that we're able to annotate, and we're able to make higher quality annotations as well. So this is something where the core data with the annotations helps us push the simulation faster, which helps push real road testing faster… This entire infrastructure, these tools, are a huge reason why we're able to move as quickly as we are.

Spectrum: Initially, Drive.ai was planning to focus on logistics, which is a more constrained challenge for self-driving vehicles than dealing with human passengers. Why did you shift to ride sharing instead?

Tandon: I think it would have been an easier technology problem, but I think from the perspective of solving real transportation problems, this type of microtransit application seems to be the place to start. In the long term, of course, we're considering human transportation as well as logistics transportation, and our autonomous vehicles should be able to address both of those things.

Spectrum: When you hear about other autonomous vehicles getting into accidents, what can you learn from the experiences that other companies have?

Tandon: Many of these accidents are tragedies, but emotions aside, we can look at them as a learning opportunity—how can we develop a solution that's going to be safer for humanity? We do look at what happens, not only in self-driving car accidents but also in human-driven accidents, and we use that to help us put simulation test cases together and improve our testing process. Solving these problems is honestly some of the most exciting work we can do.

The Drive.ai van with a dramatic sky. Photo: Drive.ai

Driving in Texas

They say everything is bigger in Texas, but coming from Mountain View where Drive.ai started out, the roads here in Frisco certainly are something else. There's plenty of space, so everything is spread out, connected by many new and very wide roads.

In practice, lots of roads also means lots of cars, which means lots of parking lots, which create even more sprawl. Ultimately, everything seems slightly too far to walk to, even if it isn't 90 degrees Fahrenheit out (and in my experience, it's at least this hot in Frisco 100 percent of the time).

Case in point: someone may have to walk about 20 minutes from their office to find a restaurant for lunch, which means that an hour-long lunch break really only provides 20 minutes of scarfing time.

Many destinations also seem too close to drive to, especially since parking is somehow a challenge despite all the parking lots. This area of Texas is essentially suffering from a last-mile problem, where there's no good way to travel these short-ish distances. Drive.ai aims to solve that.

Just because the distances are short, the route is well established, and the roads and weather are good, doesn't mean that Drive.ai has it easy. We asked co-founder Wang where the trickiest part of the route was, and he described one specific intersection where the vehicles have to cross a six-lane, high-speed road without any traffic lights.

Photo: Evan Ackerman/IEEE Spectrum

I'm pretty sure this is the intersection that Wang was talking about: a massive road with two travel lanes in each direction, additional turning lanes in each direction, and an enormous median that, if it were in Mountain View, would probably be full of expensive co-op workspaces for early-stage startups or something.

The median is so big, in fact, that Drive's vehicle can safely wait there while it plots its next move. That's a good thing, because intersections like this are a huge challenge for self-driving vehicles: sensors have a hard time seeing far enough to guarantee that the vehicle can get all the way across before a fast-moving car, previously beyond sensor range, arrives. The picture also shows how a curve in the travel lane before the intersection makes it particularly tough for human or automated drivers to spot oncoming traffic far in advance. And yet Drive.ai is maneuvering through autonomously, which is impressive.
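
Just how tight is the sensing budget at a crossing like this? Here's a rough back-of-the-envelope sketch in Python; all of the numbers are our own illustrative guesses, not Drive.ai's:

```python
# Back-of-the-envelope check for an unsignalized crossing. Every number here
# is an illustrative assumption, not a Drive.ai parameter.

SENSOR_RANGE_M = 120.0    # how far cross traffic can be reliably detected
TRAFFIC_SPEED_MPS = 20.0  # cross traffic at roughly 45 mph
LANES_TO_CLEAR = 3        # lanes the AV must cross to reach the median
LANE_WIDTH_M = 3.7
AV_ACCEL_MPS2 = 2.0       # gentle acceleration from a standstill

def time_to_clear(distance_m: float, accel_mps2: float) -> float:
    """Time to cover distance_m from rest at constant acceleration."""
    return (2.0 * distance_m / accel_mps2) ** 0.5

t_cross = time_to_clear(LANES_TO_CLEAR * LANE_WIDTH_M, AV_ACCEL_MPS2)
t_arrival = SENSOR_RANGE_M / TRAFFIC_SPEED_MPS  # worst case: car at sensor's edge

print(f"{t_cross:.1f} s to clear vs. {t_arrival:.1f} s of warning")
# With these numbers: about 3.3 s to clear against 6.0 s of warning, a workable
# margin. Halve the effective sensor range (a curve, an occlusion) and the
# margin disappears entirely.
```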

"Over the past few weeks, I've already seen our system get much better at that intersection," says Wang. "And this is also where we believe our [community] partnerships could help: it'd probably be safer for everyone if there were some signs near the intersection, warning of cross traffic, pedestrians, and potentially even self-driving cars."

Otherwise, driving in Frisco seems fairly tame. I asked Tandon and Wang about the edgiest edge cases they'd run into so far, hoping for stampeding longhorns or at least the occasional tumbleweed. 

"I think the hardest parts are still the parking lots," Wang said. "It's hard to be predictive. And actually, this morning, we saw a family of geese walking down the street. They don't really follow traffic rules, so that was something new, but our cars slowed down for them and handled them pretty well."

Photo: Evan Ackerman/IEEE Spectrum

Human-Robot Interactions

One of the things that makes Drive.ai unique among autonomous car companies running pilots right now is a carefully thought-out emphasis on community usability. Recognizing that people will need to live and work around these autonomous cars, Drive wants to make sure that its overall system can provide a positive experience for everyone—even people who aren't using it—by finding ways to make the cars pleasant and intuitive for pedestrians and other drivers (not just passengers) to interact with.

The first step is partnering with the community to plan and educate residents even before the cars start driving. This means choosing routes, designating specific areas where the cars will pick up and drop off, and putting up signs that basically say, "Hey, we've got robot cars driving around, which is totally cool, but they're not the same as regular cars so please be aware of that. Okay great, thanks."

It also involves properly managing construction zones and making sure autonomous vehicles can navigate them successfully. Construction zones are still a very hard problem for the vehicles, but with just a tiny bit of training, construction workers themselves can make all the difference by not relying on gestures that humans understand but autonomous cars can't.

Drive.ai is working as hard as it can to get everything set up around its autonomous cars and the service they'll provide, and that's important. But fundamentally, everything comes down to the cars themselves—the vehicles need to be safe and useful, but they'll also need to be comfortable to be around. Historically, this has been an issue for autonomous cars, because they haven't had a good way of telling humans what they're about to do.

Humans communicate with each other while driving all the time without really thinking about it. Consider what happens at a four-way stop—there's a general precedent for which vehicle goes first, but that precedent is usually confirmed when drivers make eye contact with each other. As a pedestrian, you'll almost certainly make eye contact with the driver of any car that you cross in front of, and even that may not be enough, so then come the hand gestures.

Autonomous cars, obviously, have no eyeballs or hands. So if an autonomous car stops at a stop sign that is also a crosswalk, how would you know that (a) the car sees you, (b) the car understands that you want to cross, and (c) the car will allow you to cross before it moves forward?

Photo: Evan Ackerman/IEEE Spectrum

Drive.ai is tackling this problem by bolting LED displays onto the front, back, and sides of each of their cars. Before we get into this, we should stress that these things are super duper prototypes, and that what we're looking at now may be very different by the time we see it on an active vehicle. But, this is at least a good look at what Drive has been working on, and as far as we know, it's the first time an autonomous vehicle company has done so much work on this type of interaction.

A man crossing in front of the self-driving van. Gif: Drive.ai

The idea is fairly straightforward—when the car finds itself in a situation where it needs to interact directly with a human, it can use a combination of text and imagery on these screens to do so, autonomously. Here's a sampling of the text and graphics that Drive thinks might come in handy.
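
To make the idea concrete, here's a minimal sketch of what that state-to-message mapping might look like. Only the phrase "Waiting for you to cross" comes from Drive's own examples; everything else (the states, icons, and function names) is our hypothetical stand-in, not Drive.ai's code:

```python
# A minimal sketch of how an AV might map its driving state to a display
# message. States, strings, and icon names are hypothetical stand-ins;
# Drive.ai hasn't published its implementation.
from enum import Enum, auto

class VehicleState(Enum):
    WAITING_AT_CROSSWALK = auto()
    YIELDING = auto()
    PULLING_OVER = auto()
    ENTERING_TRAFFIC = auto()

# Each state pairs text with an icon, for viewers who can't read the text.
DISPLAY_MESSAGES: dict[VehicleState, tuple[str, str]] = {
    VehicleState.WAITING_AT_CROSSWALK: ("Waiting for you to cross", "pedestrian"),
    VehicleState.YIELDING: ("Yielding to you", "pedestrian"),
    VehicleState.PULLING_OVER: ("Pulling over", "curb"),
    VehicleState.ENTERING_TRAFFIC: ("Entering traffic", "arrow"),
}

def render_panels(state: VehicleState) -> tuple[str, str]:
    """Return the (text, icon) pair to push to the front, rear, and side panels."""
    return DISPLAY_MESSAGES[state]
```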

Photo: Evan Ackerman/IEEE Spectrum

We spoke with Chip Alexander, Drive.ai's Head of Experience Design, who has been leading the team that’s developing these panels. They've put a lot of work into coming up with both the hardware design and the content, by doing user testing, collecting data, and iterating on the designs. There's tons to consider—for example, when the car says "Waiting for you to cross," is that the right phrase to use, or does it make it seem like the car is giving you an instruction? Perhaps "You may cross" would work better?

In addition to communicating with pedestrians, Drive.ai's vehicles also communicate with other human-driven vehicles, when doing things like merging or yielding. It's a bit more experimental to do things like ask other vehicles to stop tailgating, and then thank them for doing so, but it seems like a nice idea. I asked Alexander why they didn't include the obvious "Sorry," and the sad reason is that it's too much like an admission of fault. 

An unsolved problem, Alexander said, is that communication between humans is two-way, and communication between humans and AVs right now is one-way (at best). The cars can make educated guesses about what pedestrians might want to do: If you're standing close to the road near a crosswalk, it's likely you want to cross. But if you shake your head at the car when it stops for you, or wave your arm, it's not going to accomplish much. Alexander says he's pretty sure that Drive.ai will eventually be able to recognize and understand such gestures, but at the moment, those sorts of things are very hard for a car to reliably identify.
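
Here's a toy version of that proximity heuristic, with thresholds invented purely for illustration (a production system would learn them from data):

```python
# A toy version of the crossing-intent heuristic described above: standing
# close to the road near a crosswalk suggests intent to cross. The thresholds
# are invented for illustration; a real system would learn this from data.

def likely_wants_to_cross(dist_to_curb_m: float,
                          dist_to_crosswalk_m: float,
                          facing_road: bool) -> bool:
    """Crude pedestrian-intent guess from position and heading alone."""
    near_curb = dist_to_curb_m < 1.0
    near_crosswalk = dist_to_crosswalk_m < 3.0
    return near_curb and near_crosswalk and facing_road

# Conspicuously absent: head shakes, waves, and other gestures. As Alexander
# notes, today's perception stacks can't yet read those back reliably.
```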

Photo: Evan Ackerman/IEEE Spectrum

For an outside perspective on what Drive has been working on, we asked Missy Cummings (who leads the Humans and Autonomy Lab at Duke University) to take a look at Drive.ai's designs for us. She pointed out that having so much color in the displays could be a problem, because it can lower contrast and make things harder to read, especially for people with color blindness.

The text itself is potentially an issue for several reasons: it may be too small to read at a distance while the vehicle is in motion; plenty of people can't read English text at all; and most of us have an unfortunate tendency to try to read any text presented to us, even stopping in the middle of the road to do so.

Cummings also sent over a paper that her group published a few years ago, showing that when it comes to crossing the street around moving autonomous vehicles, pedestrians actually will not even read the display at all. Instead, they focus on the speed of the vehicle, and decide independently whether it's safe to cross or not. 

One feature that Cummings did approve of was the rather striking paint job: bright orange with swirly blue stripey bits. Drive.ai's Alexander confirmed that the intention is to make the vehicles immediately recognizable, so that pedestrians and drivers know they're interacting with a car that isn't being controlled by a human driver. The paint doesn't help nearly as much at night, though, and Alexander added that it might be a good idea to add lighting so the car stays noticeable. The company is also working on audio cues to communicate with people who have visual impairments, along with motion cues (like creeping forward at a stop sign) that can signal a car's intent to other drivers.

Again, these developments are just a snapshot of an ongoing process at Drive.ai. Alexander readily admits that the company is unsure of the best way to do things, but is trying to figure it out, and trying to do it through scientific studies—we've encouraged them to publish at some point, but received no promises. More generally, we may see other companies working on the same types of communication capabilities, at which point it would be worth having a serious discussion about whether standardization is necessary.

Tele-Choice Operators

Robotics companies in general, or at least the ones with complex autonomous products out in the world, are recognizing that achieving 100 percent autonomy in unstructured or even semi-structured environments is simply not realistic in the short to medium term. One solution is to simply have a human near a computer somewhere who can take over if necessary via teleoperation. "Take over" can mean physically assume control, but it can also mean just providing a service that robots are bad at but humans are good at, like interpreting sensor data or interacting with other humans.

An employee operating the remote tele-choice system. Photo: Drive.ai

Drive.ai is implementing this approach through what it's calling "tele-choice operators." These are Drive.ai employees who have parked themselves in front of expensive monitors in some volcano lair somewhere, watching camera feeds from roaming Drive.ai vehicles. If a vehicle has an issue, it'll inform its tele-choice operator, who will remotely provide that special human touch to get it moving again.

At first, one tele-choice operator will be assigned to each car, and there will also be a human in the driver's seat of each car. Drive expects that as the company’s confidence in the system grows (and the confidence of their passengers grows as well), the in-car human will shift over to the front passenger seat. The next step is to have no Drive.ai employees in the vehicles at all, just a full-time tele-choice operator. Eventually, one tele-choice operator will be able to monitor multiple vehicles at the same time.

There's a reason why these folks are called tele-choice operators and not just teleoperators—they're generally making simple choices for the vehicles rather than taking over completely. While we don't know all the details, Tandon said the operators provide relatively simple inputs to the vehicles. This approach helps the cars act quickly and efficiently, and mitigates the risks of full teleoperation over a cellular network. Rather than asking the tele-choice operator "What should I do?" in a difficult situation, the vehicle might instead present what it thinks are its best options, and allow the human to choose based on a quick glance at the camera feeds or lidar data.
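
In code, that narrow-question pattern might look something like the following sketch. The message format and field names are our assumptions; Drive.ai hasn't published its actual protocol:

```python
# A sketch of the "present options, let a human pick" pattern described above.
# The message format and field names are assumptions, not Drive.ai's protocol.
from dataclasses import dataclass

@dataclass
class TeleChoiceRequest:
    vehicle_id: str
    situation: str        # short description shown to the operator
    options: list[str]    # the vehicle's own best candidate actions
    camera_feed_url: str  # context for a quick glance before choosing

def request_help_with_standoff(vehicle_id: str) -> TeleChoiceRequest:
    # Rather than ceding full control, the car asks a narrow question.
    return TeleChoiceRequest(
        vehicle_id=vehicle_id,
        situation="Cross traffic has right of way but is not moving.",
        options=["proceed", "keep waiting"],
        camera_feed_url="rtsp://example.invalid/feed",  # placeholder
    )

# The operator's reply is a single selection, not a steering command, which
# keeps the interaction fast and tolerant of cellular latency.
```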

There are all kinds of situations in which a tele-choice operator might have to step in. Here's one example.

Photo: Evan Ackerman/IEEE Spectrum

Here, the Drive.ai vehicle is stopped in the median of that monster intersection, waiting to get all the way across. The black car on the right is trying to turn left and technically has the right of way, but Drive's vehicle is kinda out in the middle of the median in order to preserve room on the crosswalk, so the black car is instead waiting for Drive's car to go first. There's also a group of pedestrians who walked up during this stalemate, and the right of way may have shifted to them, but they're just standing there waiting to see what the cars do. This same situation (without the complication of pedestrians) happened during my demo ride as well.

While Drive representatives couldn't immediately confirm it for me, they said it was likely that the tele-choice operator had to jump in for situations like these. Once it became apparent that there was a standoff, the AV would have pinged the operator and effectively asked, "This other car has right of way but isn't moving, should I go?" The tele-choice operator, recognizing what was going on, could then simply tell the self-driving car to go ahead. This particular example will likely resolve itself over time, as Drive begins to test its communication panels and as local drivers become accustomed to how unfailingly polite the autonomous vehicles are.

The decisions that a tele-choice operator makes do more than just resolve a single problem at a single point in time. They are also fed back into Drive's learning software, such that whenever a tele-choice operator makes a decision, the autonomous car remembers what the right move was, and will do better by itself next time. Or, it's probably more accurate to say that all of Drive.ai's vehicles remember, since what one car learns can propagate across the entire fleet. Over time, the cars will become better decision makers, and humans will be needed less and less, and then only in the most exotic situations. 
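
That feedback loop, reduced to a sketch (all names here are hypothetical):

```python
# A sketch of the loop described above: each operator decision becomes a
# labeled example shared fleet-wide. All of these names are hypothetical.
from dataclasses import dataclass

@dataclass
class OperatorDecision:
    scenario_features: dict  # what the car perceived at decision time
    chosen_action: str       # what the human picked

FLEET_DATASET: list[OperatorDecision] = []

def record_decision(decision: OperatorDecision) -> None:
    """Log the human's choice; periodic retraining folds it into every car's policy."""
    FLEET_DATASET.append(decision)
    # In a real pipeline this would also feed annotation, simulation test
    # cases, and retraining, the same loop Tandon describes for road data.
```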

Photo: Evan Ackerman/IEEE Spectrum

While safety is certainly still the primary concern for any autonomous vehicle operating on public roads, it seems like enough progress has been made over the past several years that we're now seeing a transition from focusing primarily on the technology required to get vehicles navigating autonomously to what must be done to make them useful to society in the long term. This, potentially, is a much more nuanced problem, since it involves integrating these vehicles into a society that (let's be honest) doesn't yet know what to make of fully autonomous robots of any kind. Caution is important, but so is exploration, experimentation, and a willingness to try new things.

Drive.ai is hoping that its emphasis on an autonomous car experience that's human-centric rather than robot-centric will be what makes things work in Frisco and beyond. With a service that is safe, comfortable, and useful (in that order), this is intended to be more than a technology showcase—Drive.ai wants its vehicles to become a normal part of getting around. It's far too early to say whether this is how all autonomous vehicle pilots should be handled. But the pilot is ultimately about finding the approach that'll take self-driving vehicles from something exciting and futuristic that you read about in engineering magazines to a boring part of reality that just quietly makes your life better.

Disclosure—Drive.ai provided travel assistance to enable the author to cover the Frisco launch event in person. 
