Ford Calls for Standards for Robocars to Signal Their Intentions

It's not enough to play nicely with others—you also have to be seen playing nicely


Ford's car with the light bar, at an intersection waiting for a pedestrian with a bike to pass.
Photo: Ford

It’s not enough for a robocar to play nicely with others—it must also be seen to be playing nicely. And that will happen only if the industry can agree on a standard for communication, the Ford Motor Company said in an announcement today.

The standard would go beyond the wireless one that cars use to talk to each other; it would include a visual component. That way, pedestrians and drivers of cars without transceivers can see right away what the robocar intends to do. 

Today, a driver who wants to yield the right-of-way to pedestrians can wave them across or just look them in the eye and smile. A robocar, lacking hands and a face, must instead resort to a standard signal, ideally an intuitively obvious one.

Gif of the light bar in action.
Gif: Ford

John Shutko, a human factors specialist at Ford, writes today in a blog post that the company has worked with the Virginia Tech Transportation Institute to send such signals by using a light bar on top of the windshield. Here is the code they used: 

Yielding: Two white lights moving side to side, indicating the vehicle is about to come to a full stop
Active driving mode: Solid white light, indicating the vehicle intends to proceed on its current course (although it can respond appropriately to objects and other road users in the course of its travel)
Start-to-go: Rapidly blinking white light, indicating the vehicle is beginning to accelerate from a stop
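
For illustration only, here is how that three-state signal code might be represented in software. This is a minimal sketch, not Ford's implementation; the enum names, the sweep flag, and the blink rate are assumptions made up for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Intent(Enum):
    """The three driving intents the light bar is meant to signal."""
    YIELDING = auto()        # about to come to a full stop
    ACTIVE_DRIVING = auto()  # proceeding on current course
    START_TO_GO = auto()     # beginning to accelerate from a stop


@dataclass(frozen=True)
class LightPattern:
    description: str
    sweeping: bool   # True if two white lights move side to side
    blink_hz: float  # 0.0 means a steady light; the 4.0 below is an assumed rate, not Ford's spec


# Mapping of intent to light-bar behavior, paraphrasing the three signals listed above.
SIGNALS = {
    Intent.YIELDING: LightPattern("two white lights moving side to side", sweeping=True, blink_hz=0.0),
    Intent.ACTIVE_DRIVING: LightPattern("solid white light", sweeping=False, blink_hz=0.0),
    Intent.START_TO_GO: LightPattern("rapidly blinking white light", sweeping=False, blink_hz=4.0),
}


def signal_for(intent: Intent) -> LightPattern:
    """Look up the light-bar pattern the car should display for a given intent."""
    return SIGNALS[intent]


print(signal_for(Intent.YIELDING).description)  # -> two white lights moving side to side
```

The value of a fixed mapping like this is the same as the value of the proposed standard itself: every vehicle that wants to say "I am yielding" displays exactly the same pattern, so pedestrians only have to learn the code once.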

To make sure that no human cues were sent as well, Ford hid the driver in a suit that made him disappear into the car seat—a great trick to use on Halloween. Then Ford and VTTI conducted field tests together, using cars that lacked the light bar. The two organizations first made sure that their scheme did no harm—that it encouraged no unfortunate behavior. Then they checked to see how long it took people to get the hang of it. They found that “it took about two exposures for participants to learn what a single signal meant and between five and 10 exposures to understand the meaning of all three lighting patterns.”

Other companies are also experimenting with visual signals to pedestrians and the drivers of other cars. Last year we wrote about how Drive.ai was bolting LED displays on the front, back, and sides of its prototype self-driving car and using them to display text and images. The problem is, even a high-resolution facsimile of a universally accepted traffic sign might not be intelligible to a schoolchild.

Shutko says that the signaling methods aren’t set in stone and that other companies are welcome to contribute ideas. Ford is already working with SAE, formerly known as the Society of Automotive Engineers, and with the International Organization for Standardization (ISO).

Meanwhile, Ford plans to test the system on actual self-driving vehicles—Ford Fusion hybrid cars—which will soon be put through their paces in the Miami area by Argo AI, Ford’s self-driving subsidiary. Other tests are planned for Europe, just to make sure that people from other cultures react to the signals in the same way.
