On a large TV screen at the Ford Research and Innovation Center in Palo Alto this week, a robotic humanoid figure was outrunning a four-wheel drive car, cutting in front of it unexpectedly, and, at one point, forcing the car up onto a sidewalk to avoid it. The humanoid figure ran much faster and turned far more quickly than a real human could.
“We’re trying to frustrate our system,” explained researcher Tory Smith.
Smith is part of a group using virtual environments built with game development tools to create action sequences that are then fed into machine learning systems. In this way, researchers hope to teach autonomous vehicle software to better handle situations the cars encounter on the road. (By the way, that day is coming soon; Ford CEO Mark Fields announced that Ford just got permission to begin testing its autonomous vehicle prototypes on California streets in 2016.)
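Ford has not published its simulation code, but the basic idea — generate synthetic encounters in a virtual world, label them automatically, and use the results as training data — can be illustrated with a toy sketch. Everything below is invented for illustration (the scenario parameters, the 15 m/s car speed, the 3 m curb-to-lane distance, and the labeling rule are all assumptions, not Ford's actual model):

```python
import random

def generate_scenario(rng):
    """Simulate one pedestrian encounter as (features, label).

    Features: distance to the pedestrian (m) and the pedestrian's
    lateral speed toward the lane (m/s). Superhuman speeds are
    included deliberately, mirroring the article's point about
    stressing the system with movers faster than any real human.
    """
    distance = rng.uniform(2.0, 50.0)       # pedestrian ahead of the car
    lateral_speed = rng.uniform(0.0, 12.0)  # up to ~43 km/h laterally

    # Automatic ground-truth label: brake (1) if the pedestrian can
    # reach the lane center before the car covers the gap at 15 m/s.
    time_for_car = distance / 15.0
    time_to_cross = 3.0 / max(lateral_speed, 0.1)  # 3 m curb to lane center
    label = 1 if time_to_cross < time_for_car else 0
    return (distance, lateral_speed), label

def build_training_set(n, seed=0):
    """Produce n labeled scenarios, reproducibly, for a learner to consume."""
    rng = random.Random(seed)
    return [generate_scenario(rng) for _ in range(n)]

if __name__ == "__main__":
    data = build_training_set(1000)
    brake_fraction = sum(label for _, label in data) / len(data)
    print(f"{len(data)} scenarios, {brake_fraction:.0%} require braking")
```

The point of the sketch is the speed advantage Micks describes: because the simulator knows the full state of the virtual world, every example is labeled for free, so thousands of edge cases can be produced in seconds rather than staged on a test track.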
Using virtual environments, said researcher Ashley Micks, dramatically increased the speed of learning. “What took us 10 days now takes 20 minutes,” she said.
Ford researchers have developed a version of their simulation software to run on mobile phones as well as large game systems. “We can hand the mobile version to random people on the street,” Smith said. That’s useful, he said, because outsiders “can think of weird things to do to challenge the vehicle” that never occurred to the research team.
It’s not just dodging pedestrians that can be learned in a gaming environment; the software can pick up all sorts of knowledge, including, for example, how to figure out where the lanes are when there are no lane markings in a road.
Tekla S. Perry is a senior editor at IEEE Spectrum. Based in Palo Alto, Calif., she's been covering the people, companies, and technology that make Silicon Valley a special place for more than 30 years. An IEEE member, she holds a bachelor's degree in journalism from Michigan State University.