When Autonomous Cars Teach Themselves to Drive Better Than Humans

A human almost certainly wouldn't be this nice to a cyclist, but for an autonomous car, it's the obvious thing to do

Whether or not autonomous vehicles ever develop the kind of dynamic, adaptive, common-sense reasoning that humans can bring to bear (when we’re paying attention, that is), AVs have already demonstrated that sensor systems that see in every direction at once can potentially deliver a kind of safety that humans cannot easily match.

The trouble is that, in many cases, AVs learn how to drive by observing human drivers. So if we’re the ones teaching them, how can they translate their superior hardware into superior safety?

A few weeks ago, the CTO of Cruise tweeted an example of one of the company’s AVs demonstrating a safety behavior in which it moves over to make room for a cyclist. What’s interesting about this behavior, though, is that the AV does it for cyclists approaching rapidly from behind the vehicle, something a human driver is far less likely to notice, much less react to. A neat trick—but what does it mean, and what’s next?

In the video Cruise shared, as the cyclist approaches from the rear right side at a pretty good clip, you can see the autonomous vehicle pull to the left a little bit, increasing the amount of space that the cyclist can use to pass on the right.

One important question that we’re not really going to tackle here is whether this is even a good idea in the first place, since (as a cyclist) I’d personally prefer that cars be predictable rather than sometimes doing weirdly nice things that I might not be prepared for. But that’s one of the things that makes cyclists tricky: we’re unpredictable. And for AVs, dealing with unpredictable things is notoriously problematic.

Cruise’s approach to this, explains Rashed Haq, VP of Robotics at Cruise, is to give its autonomous system some idea of just how unpredictable cyclists can be, and then have it plan its actions accordingly. Cruise has collected millions of miles of real-world data from its sensorized vehicles, including cyclists doing all sorts of things, and from that data the system has built up a model of how confident it can be, when it sees a cyclist, in predicting what that cyclist will do next.

“There's some uncertainty of what a cyclist is likely to do just based on their intent, and then there's the potential for them to fall over and things like that,” Haq says. “So if you have historical data, that helps you understand how cyclists are likely to behave, along with these potential other things that may happen with cyclists, then these new behaviors will emerge from the objective of safety.”
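Cruise hasn’t published the internals of its prediction stack, but the idea Haq describes can be sketched in a few lines of purely illustrative Python. Suppose (hypothetically) that we’ve logged how far cyclists drifted sideways over a short prediction horizon; the spread of that logged data is itself a crude model of how unpredictable cyclists are. All of the names and numbers below are made up for the sake of the example:

```python
import numpy as np

# Hypothetical logged data: for each observed cyclist encounter, the lateral
# drift (in meters) of the cyclist over a 2-second horizon, measured relative
# to a straight-line extrapolation of their current heading.
logged_lateral_drift_m = np.array(
    [0.1, -0.3, 0.6, 0.0, 1.1, -0.2, 0.4, -0.8, 0.2, 0.9]
)

# A deliberately simple uncertainty model: treat the drift as roughly Gaussian
# and keep just its mean and standard deviation. Real prediction systems are
# far richer (multimodal, intent-conditioned), but the core idea is the same:
# historical data tells you how far a cyclist might plausibly deviate.
drift_mean_m = logged_lateral_drift_m.mean()
drift_std_m = logged_lateral_drift_m.std(ddof=1)

print(f"expected drift: {drift_mean_m:+.2f} m, spread: {drift_std_m:.2f} m")
```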

Essentially, based on its understanding of the unpredictability of cyclists, the Cruise AV determined that the probability of a safe interaction is improved when it gives cyclists more space, so that’s what it tries to do whenever possible. 
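To continue the illustrative sketch from above (again, none of these numbers or function names come from Cruise), here is how “give the cyclist more room” can emerge from a safety objective rather than being hand-coded: the planner simply picks the smallest lateral nudge that keeps the probability of maintaining a safe passing gap above a threshold, given the drift model estimated from data.

```python
from math import erf, sqrt

def prob_gap_at_least(gap_m: float, drift_mean_m: float, drift_std_m: float,
                      min_safe_gap_m: float = 1.0) -> float:
    """Probability that the passing gap stays above min_safe_gap_m, assuming
    the cyclist's lateral drift toward the car is roughly Gaussian."""
    # The gap shrinks by the drift, so we need drift <= gap_m - min_safe_gap_m.
    slack_m = gap_m - min_safe_gap_m
    z = (slack_m - drift_mean_m) / (drift_std_m + 1e-9)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Candidate lateral offsets the planner could command (meters moved left),
# each added to a nominal 1.2 m gap between the car and the cyclist.
nominal_gap_m = 1.2
candidate_offsets_m = [0.0, 0.2, 0.4, 0.6]

# Drift statistics of the kind the previous sketch would estimate from logs.
drift_mean_m, drift_std_m = 0.0, 0.3

# Pick the smallest offset that keeps the chance of a safe pass above 99%.
for offset_m in candidate_offsets_m:
    p_safe = prob_gap_at_least(nominal_gap_m + offset_m, drift_mean_m, drift_std_m)
    if p_safe >= 0.99:
        print(f"move left {offset_m:.1f} m (P(safe gap) = {p_safe:.3f})")
        break
else:
    print("no lateral nudge is enough; slow down instead")
```

With these made-up numbers, the smallest acceptable nudge works out to 0.6 meters of extra room: nobody programmed a “be nice to cyclists” rule, that action just scores best against the safety objective.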

This behavior illustrates some of the critical differences between autonomous and human-driven vehicles. Humans drive around with relatively limited situational awareness and deal with things like uncertainty primarily on a subconscious level. AVs, on the other hand, are constantly predicting the future in very explicit ways. Humans tend to have the edge when something unusual happens, because we’re able to instantly apply a lifetime’s worth of common-sense knowledge about the world to our decision-making process. Meanwhile, AVs are always considering the safest next course of action across the entire space that they’re able to predict. 

“I think a lot of these emergent behaviors are based on the kind of data that we’re observing, and then when we apply our objective of safety, it comes out of that process,” says Haq.

Behaviors like these go well beyond cyclists, of course, and as the Cruise system accumulates more data about uncertain events, it’ll get better at predicting when to add in those extra margins of safety. It’s honestly kind of exciting to think about these specific ways in which autonomous vehicles are surpassing humans when it comes to safety, with behaviors that it would be unrealistic to expect from human drivers. This is not to say that AVs will make for generally safer drivers in the near future, since everything that they know how to do is based on what they can predict, and the world is an unpredictable place. But examples like these are a good reminder that robots have a lot to offer, and that our expectation shouldn’t be human-level performance—it should be better.
