Nvidia Wants to Build the Robocar's Brain

The king of video-game hardware unveils Drive PX, a two-chip monster of parallel processing power


Nvidia, the graphics-card master, wants to do for self-driving cars what it’s done for gaming and supercomputing. It wants to supply the hardware core—the automotive brain onto which others can build their applications.

It’s called Drive PX, and next month it will be released to automakers and top-tier suppliers for US $10,000 a pop (that’s a development kit—future commercial versions will cost far less). It packs a pair of the company’s Tegra X1 processors, each capable of a bit more than a teraflop—a trillion floating-point operations per second. Together they can manage up to 12 cameras, including units that monitor the driver for things like drowsiness or distractedness. “Sensor fusion,” which puts the various streams of data into a single picture, can even include input from radar and its laser-ranging equivalent, lidar. The result is the ability to recognize cars, pedestrians, and street signs.
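
The idea behind sensor fusion can be sketched in a few lines. This is a hypothetical, minimal illustration—not Nvidia’s actual pipeline—in which detections from independent sensors are merged into one object list by clustering detections that land close together; the function name and distance gate are invented for the example.

```python
import math

def fuse(detections, gate=2.0):
    """Greedily cluster (x, y, sensor) detections within `gate` meters
    and average each cluster into one fused object."""
    fused = []
    for x, y, sensor in detections:
        for obj in fused:
            if math.hypot(x - obj["x"], y - obj["y"]) < gate:
                # Running average of the cluster's position.
                n = len(obj["sensors"])
                obj["x"] = (obj["x"] * n + x) / (n + 1)
                obj["y"] = (obj["y"] * n + y) / (n + 1)
                obj["sensors"].append(sensor)
                break
        else:
            fused.append({"x": x, "y": y, "sensors": [sensor]})
    return fused

detections = [
    (10.0, 2.1, "camera"),   # the same pedestrian, seen by three sensors
    (10.3, 2.0, "radar"),
    (9.9,  2.2, "lidar"),
    (40.0, -1.0, "camera"),  # a second object, seen only by the camera
]
objects = fuse(detections)
# Three overlapping detections collapse into one fused object;
# the second, distant detection stays separate.
```

A production system would track objects over time and weight each sensor by its reliability, but the core step—associating independent detections into a single picture—is the one the quote describes.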

If you’ve played Grand Theft Auto, you’ll have a good idea of what a professional driving simulator is like, and if you’ve played with simulators, you’ll have a passing familiarity with self-driving cars. These systems manage parallel streams of visual data—and parallel processing is what Nvidia’s graphics processing units, or GPUs, are designed for.

Until now, the main non-gamelike application for GPUs has been supercomputing, a field that also bears on the self-driving problem: an autonomous car must dive into huge databases in order to learn from experience. Nvidia calls this its “deep learning” project.

“The majority of top supercomputers use Nvidia GPUs, including Titan, the largest in the U.S.,” notes Michael Houston, the technical lead for the project. “Deep learning has different applications. The focus has been on the visual analysis of imaging in video—web science, embedded systems and automotive. Fundamentally, we’re processing pixels.”

Learning as you go would be the ideal experimental method, and such a skill would come in handy whenever the high-detail maps on which autonomous cars rely fail—for instance, when a truck jackknifes, closing a lane. Right now, though, safety regulators take a dim view of such cybernetic self-assertion, so anything a car learns must first be uploaded to the cloud for analysis offline. Only later can the car get the lesson via software updates.
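
That record-then-update loop can be sketched as follows. Everything here—the class, field names, and confidence threshold—is invented for illustration; the point is only that the car logs puzzling scenes rather than rewriting its own policy, and the lesson returns later as a vetted software update.

```python
class CarLogger:
    """Hypothetical sketch: log low-confidence scenes for offline analysis."""

    def __init__(self):
        self.pending = []       # scenes queued for cloud upload
        self.model_version = 1  # currently installed, vetted model

    def observe(self, scene, confidence):
        # A puzzling scene (say, a jackknifed truck closing a mapped lane)
        # is recorded, not learned from on the spot.
        if confidence < 0.5:
            self.pending.append(scene)

    def upload(self):
        # Send the queued scenes to the cloud for offline analysis.
        batch, self.pending = self.pending, []
        return batch

    def install_update(self, new_version):
        # The lesson comes back later as a validated software update.
        self.model_version = new_version

car = CarLogger()
car.observe("jackknifed truck blocking lane", confidence=0.2)
car.observe("clear highway", confidence=0.95)
batch = car.upload()   # only the puzzling scene goes to the cloud
car.install_update(2)  # the vetted model arrives later
```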

Auto companies that work with Nvidia (which, by the way, already has processors of one kind or another in some 8 million cars) and are presumed to be lining up for the development kit include Tesla, Audi and BMW, as well as top-tier suppliers, such as Delphi. These companies will build their own systems on top of the Nvidia framework.

“We produce a reference design,” says Danny Shapiro, Nvidia’s automotive senior director. “But the application layer—with the software, algorithms, and libraries—is still often the role of the automaker itself.”

Nvidia’s gaming savvy is also coming into play in other ways. The company uses similar GPU-based systems in highly realistic simulators that make it much easier to model problems that robotic cars are likely to face on the streets.

“That’s the cool thing about simulation,” Houston says. “There are lots of rare events, but you can create models of them. Take the failure case involving driver-assistance using radar: anything that’s highly reflective—metal confetti or a Mylar balloon, for example—will build a large radar signature. We actually had an engineer driving when an empty potato-chip bag blew in front and the car slammed on the brakes.”
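
The chip-bag failure is exactly the kind of case sensor fusion is meant to catch. Here is a hypothetical sketch, with invented thresholds, of the cross-check a simulator might probe: radar alone sees a huge return, but a camera-based size estimate can veto the emergency brake.

```python
def should_brake(radar_signature, camera_size_m):
    """Illustrative only: brake when radar sees a strong return AND the
    camera agrees the object is big enough to matter. Both thresholds
    are made up for this sketch."""
    RADAR_THRESHOLD = 10.0  # arbitrary signature units
    MIN_SIZE_M = 0.5        # ignore objects the camera sees as tiny

    return radar_signature > RADAR_THRESHOLD and camera_size_m > MIN_SIZE_M

# A Mylar balloon or chip bag: huge radar return, tiny visual size -> no brake.
# A stopped car: huge radar return, large visual size -> brake.
```

A simulator can sweep thousands of such rare combinations—confetti, balloons, reflective debris—far faster and more safely than an engineer waiting for a bag to blow across the road.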

Lidar has its own weaknesses, which simulators can model and help to correct. “Lidar doesn’t like highly reflective objects, like store windows downtown,” Houston says. “It’s hard to test in the car, but you can build it in software and test that way. That’s how we harden our stuff before we go to the test track.”


Self-Driving Cars Work Better With Smart Roads

Intelligent infrastructure makes autonomous driving safer and less expensive


This test unit, in a suburb of Shanghai, detects and tracks traffic merging from a side road onto a major road, using a camera, a lidar, a radar, a communication unit, and a computer.

Shaoshan Liu

Enormous efforts have been made in the past two decades to create a car that can use sensors and artificial intelligence to model its environment and plot a safe driving path. Yet even today the technology works well only in areas like campuses, which have limited roads to map and minimal traffic to master. It still can’t manage busy, unfamiliar, or unpredictable roads. For now, at least, there is only so much sensory power and intelligence that can go into a car.

To solve this problem, we must turn it around: We must put more of the smarts into the infrastructure—we must make the road smart.
