Most of the autonomous vehicles that you’re likely to encounter in the near future are either Level 2 or Level 4. Level 2, which you’ll find in a Tesla on the highway, means that the car drives itself in specific situations but expects you to be paying attention the entire time. Level 4, which you might see in some experimental “fully autonomous” vehicles, means the car can drive itself in specific areas when conditions are good; like a taxi, it does all the driving while you sit in the back, no matter what happens.
There’s a reason that automotive companies have mostly skipped Level 3 autonomy: It puts a human in the loop sometimes, which is way worse than having a human in the loop either all of the time or not at all. To help us help our cars make safe, prompt transitions in and out of intermediate autonomous modes, researchers from Stanford University are experimenting with a robotic steering wheel that can physically transform, giving you a “cute little nudge” to help you pay attention when necessary.
At Level 3, an automated driving system is expected to be able to handle all aspects of a driving task in a specific driving mode such as on the highway, except when it can’t, at which point it will rely on the human driver to “respond appropriately to a request to intervene.”
The problem here is that the system tells the human, “Okay, you can chill out and not pay attention at all because I got this, except you need to be able to focus on the road with very little warning whenever I think I might be getting into trouble.” Humans are bad at these types of situations. Studies have shown that we don’t reliably shift our attention (or, let’s be honest, wake up) quickly enough to make a safe transition back to driving: most drivers need between 5 and 8 seconds to make the switch from doing whatever to competently controlling a car. That is a very long time, and a very long distance, at 70 miles per hour (113 km/h).
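To put those seconds into distance, here’s a quick back-of-the-envelope calculation. It’s only a sketch: the 5-to-8-second takeover range and the 70 mph speed come from the studies cited above, and the function name is ours.

```python
# How far a car travels while the driver is still "switching on."
# Takeover times (5-8 s) and speed (70 mph) are from the studies
# cited in the article; the unit conversion is standard.

MPH_TO_MPS = 0.44704  # 1 mile per hour in meters per second

def takeover_distance_m(speed_mph: float, takeover_s: float) -> float:
    """Meters traveled during the takeover time at a constant speed."""
    return speed_mph * MPH_TO_MPS * takeover_s

for t in (5, 8):
    print(f"{t} s at 70 mph -> {takeover_distance_m(70, t):.0f} m")
# A 5-second takeover covers roughly 156 m; 8 seconds covers about 250 m,
# which is around two and a half football fields of unsupervised travel.
```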
Part of the problem is that conventional cues such as sounds or flashing lights aren’t always effective at communicating whether the car is driving itself or whether it expects you to be in command, and any ambiguity during these transitions can be dangerous. Other tricks, like vibrating seats, help somewhat. But the Stanford researchers are testing whether an actuated, transforming steering wheel can help even more.
The Stanford researchers conducted a user study of several different steering wheels, including the transforming one, inside a driving simulator. Here’s a description of the study, which consisted of several driving sections:
Participants are asked to enable the automated driving mode. The automation then drives for 10 minutes in a second section, mainly on a straight road. At the beginning of the last section, the car approaches the critical event: a curve without lane markings, designed to appear as though road construction is in progress. Full control of the car is returned to participants a few seconds (2 or 5 seconds) before entering this critical event. That is, control of the car is instantly returned to the participants, with the steering wheel centered and with no additional input by the car to the brake or throttle.
In the simulated drives, participants who were lucky enough to get the transforming robotic steering wheel performed significantly better than participants using the regular steering wheel or the one with LEDs attached to it, especially when it came to avoiding “catastrophic road excursions.” This is particularly remarkable because the transformation itself ate into the time that participants had to react, and yet they still performed better with the transforming steering wheel. In particular, the robotic steering wheel did far better than the LED-equipped wheel, underscoring that lights and sounds only go so far as a warning system.
It’s also worth noting that none of the participants in this experiment were distracted drivers. It seems likely that the difference among warning systems would be even more significant if people were looking at their phones or otherwise focused on something else.
The other cool thing about a robotic transforming steering wheel is that you can program it to do weird stuff. Along those lines, here is an excerpt describing some of the things that the researchers are thinking might be fun:
At its most basic, the steering wheel can perform an aggressive animation whenever the car detects that the driver is drowsy or sleepy, providing a form of stimulation. With current autonomous vehicles, drivers need to touch the steering wheel every few minutes to keep the vehicle automation enabled. A transforming steering wheel could gamify this “check in” activity to keep drivers even more engaged. For instance, the mechanically transforming robotic steering wheel can deploy its upper components to play a cooperative hand-clapping game with drivers.
When the automation is enabled, the robotic steering wheel system can potentially be utilized as an interactive robotic avatar to perform many different types of activities for the drivers of the autonomous vehicles. As a robotic avatar, the transforming steering wheel system can act as a physical manifestation of an app or artificial intelligence in the autonomous vehicles. For instance, the robotic steering wheel can represent a chatbot that keeps drivers entertained or engaged throughout the course of a long commute. It can also act as an avatar for another person, such as a driver in a nearby vehicle. In a scenario where drivers from two different autonomous vehicles want to communicate with each other, this robotic steering wheel system can provide a tangible and interactive interface, which can augment the telepresence experience.
For more detail, we spoke with Brian Mok, lead author of this research.
IEEE Spectrum: Where did you get the idea for a transforming steering wheel? Was there anything specific that inspired you?
Brian Mok: I had previously conducted research on how drivers would perform after a transition of control. The next step was to investigate various options to see whether a design intervention could help improve drivers’ performance. I noted that a few automotive OEMs had begun to introduce concept cars with retracting or transforming steering wheels. While these concepts were meant to increase the usable space inside concept cars for drivers, I thought it was interesting to investigate further to see if we could leverage the properties of this type of shape-changing interface to create a driving assistance system. Prior works in the Human-Robot Interaction field examining the effects of motion were also a major inspiration for me to build a prototype and evaluate the robotic transforming steering wheel.
Spectrum: How did you decide on the actuation elements and patterns for your prototype? Were there other designs that you tried first, and if so, what were they like and what made you change them?
Brian Mok: Initially, we created several foam-core prototypes to try and evaluate different types of movement. We had three design requirements:
- Distinct Physical States: This allows a driver to clearly see when he/she has agency in the driving task.
- Noticeable Transformation: This is needed to get the attention of a driver.
- Indicating Transition: The motion needs to communicate that a transition is occurring.
We had several other designs during the initial phase. One prototype had handles that translated toward and away from the driver (similar to a prismatic joint). Another had handles that translated upward and then rotated radially. Ultimately, we picked the two designs that best satisfied the design requirements and created a final prototype based on them. For the behavior of the transforming steering wheel in the driving performance study, we decided to simply make it deploy as fast as possible.
Spectrum: Why do you think the transforming steering wheel was more effective at motivating driver performance?
Brian Mok: From our findings and from prior work in the Human-Robot Interaction field, we know that physical motion is something humans are very sensitive to. Since the robotic steering wheel can physically transform, it is able to leverage motion as an alert mechanism, and I think that makes it better at getting drivers’ attention than sound or visual cues alone. By having two distinct states, retracted or deployed, the transforming steering wheel provides a clearer indication to the driver of when he/she needs to take over control of the vehicle. This reduces driver hesitation and confusion.
Spectrum: Can you summarize what you learned from the evaluators about animation types and speeds?
Brian Mok: In that exploratory study, we evaluated three styles of deploy/retract and three speeds of deploy/retract for the transforming steering wheel. One of the main findings was that the maximum deploy/retract speed was able to best convey a sense of urgency to drivers. The evaluators/interaction experts also felt that it was very scary when the four handles of the robotic steering wheel simultaneously deployed. They thought that this might be necessary to get drivers’ attention and responses. The combination of speed and style that we had previously used in the driving performance study was also very startling to the evaluators.
Spectrum: What are you working on next? Will you be able to try this in a real vehicle at some point?
Brian Mok: I would like to examine the use of this transforming steering wheel as a robotic avatar and see what other messages can be conveyed through motion. We did a little bit of this in the exploratory research mentioned in the paper, and I would like to expand on it. It would be interesting to try the transforming steering wheel on a real vehicle in a closed-course, on-road study. However, we are limited by the availability of an autonomous vehicle platform and by the fidelity of the prototype, which needs to be refined in order to be safe for use in a real vehicle.
“Reinventing the Wheel: Transforming Steering Wheel Systems for Autonomous Vehicles,” by Brian Mok, Mishel Johns, Stephen Yang, and Wendy Ju from Stanford University, was presented at the 30th Annual ACM Symposium on User Interface Software and Technology. If you go through the UIST website, the full paper is available online.
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.