
Gill Pratt on “Irrational Exuberance” in the Robocar World

Toyota's robocar chief says we don't know that we can't make fully self-driving cars, nor do we yet know how to do it


Toyota Research Institute's Gill Pratt introducing Guardian advanced driving assistance technology at CES in January.
Photo: Toyota

A lot of people in the auto industry talked for way too long about the imminent advent of fully self-driving cars. 

In 2013, Carlos Ghosn, now very much the ex-chairman of Nissan, said it would happen in seven years. In 2016, Elon Musk, then chairman of Tesla, implied his cars could basically do it already. From 2017 right through early 2019, GM Cruise was promising 2019. And Waymo, the company with the most to show for its efforts so far, now speaks in more measured terms than it did just a year or two ago. 

It’s all making Gill Pratt, CEO of the Toyota Research Institute in California, look rather prescient. A veteran roboticist who joined Toyota in 2015 with the task of developing robocars, Pratt from the beginning emphasized just how hard the task would be and how important it was to aim for intermediate goals—notably by making a car that could help drivers now, not merely replace them at some distant date.

That helpmate, called Guardian, is set to use a range of active safety features to coach a driver and, in the worst cases, to save him from his own mistakes. The more ambitious Chauffeur will one day really drive itself, though within a constrained operating environment. The constraints on the current iteration will be revealed at its first demonstration, at this year’s Olympic Games in Tokyo; they will certainly involve limits on how far afield and how fast the car may go.

Earlier this week, at TRI’s office in Palo Alto, Calif., Pratt and his colleagues gave Spectrum a walkaround look at the latest version of Chauffeur, the P4; it’s a Lexus with a package of sensors neatly merged into the roof. Inside are two lidars from Luminar, a stereo camera, a mono camera (just to zero in on traffic signs), and radar. At the car’s front and corners are small Velodyne lidars, hidden behind a grille or folded smoothly into small protuberances. Nothing more could be glimpsed, not even the electronics that no doubt filled the trunk.

Pratt and his colleagues had a lot to say on the promises and pitfalls of self-driving technology. The easiest to excerpt is their view on the difficulty of the problem.

“There isn’t anything that’s telling us it can’t be done; I should be very clear on that,” Pratt says. “Just because we don’t know how to do it doesn’t mean it can’t be done.”

That said, though, he notes that early successes (using deep neural networks to process vast amounts of data) led researchers to optimism. In describing that optimism, he does not object to the phrase “irrational exuberance,” made famous during the 1990s dot-com bubble.

It turned out that the early successes came in those fields where deep learning, as it’s known, was most effective, like artificial vision and other aspects of perception. Computers, long held to be particularly bad at pattern recognition, were suddenly shown to be particularly good at it—even better, in some cases, than human beings. 

“The irrational exuberance came from looking at the slope of the [graph] and seeing the seemingly miraculous improvement deep learning had given us,” Pratt says. “Everyone was surprised, including the people who developed it, that suddenly, if you threw enough data and enough computing at it, the performance would get so good. It was then easy to say that because we were surprised just now, it must mean we’re going to continue to be surprised in the next couple of years.”

The mindset was one of permanent revolution: The difficult, we do immediately; the impossible just takes a little longer. 

Then came the slow realization that AI not only had to perceive the world—a nontrivial problem, even now—but also to make predictions, typically about human behavior. That problem is more than nontrivial. It is nearly intractable. 

Of course, you can always use deep learning to do whatever it does best, and then use expert systems to handle the rest. Such systems use logical rules, input by actual experts, to handle whatever problems come up. That method also enables engineers to tweak the system—an option that the black box of deep learning doesn’t allow.
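That division of labor can be made concrete. The sketch below is purely illustrative, not Toyota’s actual stack: a stub stands in for a learned perception model, and a handful of explicit if-then rules stand in for the expert system that makes the driving decision. Every name and threshold here is hypothetical; the point is only that the rule layer, unlike the neural network, is something an engineer can read and tweak line by line.

```python
def perceive(frame):
    """Stand-in for a deep-learning perception model.

    A real system would run a neural network over camera, lidar, and
    radar data here; this stub just returns detections already attached
    to the frame, e.g. [{"kind": "pedestrian", "distance_m": 12.0}].
    """
    return frame["detections"]


def decide(detections):
    """Expert-system layer: explicit, human-auditable rules.

    Each rule was written by a person and can be inspected or adjusted,
    unlike the black box of the learned model.
    """
    for d in detections:
        if d["kind"] == "pedestrian" and d["distance_m"] < 20.0:
            return "brake"
        if d["kind"] == "stop_sign" and d["distance_m"] < 30.0:
            return "slow"
    return "cruise"


frame = {"detections": [{"kind": "pedestrian", "distance_m": 12.0}]}
print(decide(perceive(frame)))  # -> brake
```

The appeal of the hybrid is visible even in this toy: if the car brakes too late, an engineer can lower the 20-meter threshold directly, with no retraining.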

Putting deep learning and expert systems together does help, says Pratt. “But not nearly enough.”

Day-to-day improvements will continue no matter what new tools become available to AI researchers, says Wolfram Burgard, Toyota’s vice president for automated driving technology.

“We are now in the age of deep learning,” he says. “We don’t know what will come after—it could be a rebirth of an old technology that suddenly outperforms what we saw before. We are still in a phase where we are making progress with existing techniques, but the gradient isn’t as steep as it was a few years ago. It is getting more difficult.”
