New Pedestrian Detector from Google Could Make Self-Driving Cars Cheaper

A deep learning system works 60 times faster than previous methods


Google’s self-driving cars roam the sunny streets of Mountain View, Calif., in public, but much of the technology that powers them has never seen the light of day. Yesterday, attendees at the IEEE International Conference on Robotics and Automation (ICRA) in Seattle got a rare glimpse of a new safety feature the tech giant is working on.

Anelia Angelova, a research scientist at Google working on computer vision and machine learning, presented a new pedestrian detection system that works on video images alone. Recognizing, tracking, and avoiding human beings is a critical capability in any driverless car, and Google’s vehicles are duly festooned with lidar, radar, and cameras to ensure that they identify people within hundreds of meters.

But that battery of sensors is expensive; in particular, the spinning lidar unit on the roof can cost nearly $10,000 (or more for multiple units). If autonomous vehicles could reliably locate humans using cheap cameras alone, it would lower their cost and, hopefully, usher in an era of robotic, crash-free motoring all the sooner. But video cameras have their issues. “Visual information gives you a wider view [than radars] but is slower to process,” Angelova told IEEE Spectrum.

At least it used to be. The best video analysis systems use deep neural networks—machine learning algorithms that can be trained to classify images (and other kinds of data) extremely accurately. Deep neural networks rely on multiple processing layers between the input and output layers. For image recognition, the input layer learns features of the pixels of an image. The next layer learns combinations of those features, and so on through the intermediate layers, with more sophisticated correlations gradually emerging. The output layer makes a guess about what the system is looking at.
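To make that layered picture concrete, here is a minimal sketch of such a forward pass in NumPy. The layer sizes, random weights, and two-way pedestrian/not-pedestrian output are illustrative assumptions, not details of Google’s networks; a trained system would learn its weights from labeled data rather than drawing them at random.

```python
import numpy as np

def relu(x):
    """Non-linearity applied between layers."""
    return np.maximum(0.0, x)

def softmax(x):
    """Turn the output layer's scores into class probabilities."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative sizes: a 32x32 grayscale patch flattened to 1,024 inputs,
# two hidden layers, and a 2-way output (pedestrian / not pedestrian).
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(1024, 256))  # input -> first hidden layer
W2 = rng.normal(scale=0.01, size=(256, 64))    # combinations of lower features
W3 = rng.normal(scale=0.01, size=(64, 2))      # output layer: the "guess"

def classify(patch):
    """Forward pass: each layer builds on the features of the one below."""
    h1 = relu(patch @ W1)    # features of the pixels
    h2 = relu(h1 @ W2)       # combinations of those features
    return softmax(h2 @ W3)  # probabilities over the two classes

probs = classify(rng.random(1024))
print(f"P(pedestrian) = {probs[0]:.3f}")
```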

Modern deep networks can outperform humans in tasks such as recognizing faces, with accuracy rates of over 99.5 percent. But traditional deep networks applied to pedestrian detection are very slow: they divide each street image into 100,000 or more tiny patches, explains Angelova, and then analyze each one in turn. This can take seconds or even minutes per frame, making such networks useless for navigating city streets. Long before a car using one has identified a pedestrian, it might have run the person over.
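A rough count shows why. In a sliding-window detector, windows of several sizes are slid across the frame and each one is classified separately. The frame size, window sizes, and strides below are assumptions chosen for illustration, not settings from Angelova’s paper, but they land in the same ballpark as the figure she cites:

```python
# Back-of-envelope count of the windows a sliding-window detector must
# classify per frame. All sizes and strides here are assumed values.
frame_w, frame_h = 640, 480

total = 0
for win in (32, 64, 96, 128):     # pedestrians appear at many scales
    stride = max(2, win // 16)    # finer stride for smaller windows
    nx = (frame_w - win) // stride + 1
    ny = (frame_h - win) // stride + 1
    total += nx * ny
    print(f"{win:>3}px windows, stride {stride:>2}: {nx * ny:>7,} patches")

print(f"total: {total:,} patches per frame")
# Running a full deep network on each patch, one at a time, is what
# pushes per-frame processing times to seconds or minutes.
```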

These example images show Google’s deep learning system detecting pedestrians in different situations. The system performed 60 times faster than previous methods. Image: Anelia Angelova/Google

Angelova’s new, high-speed pedestrian detector has three separate stages. The first is a deep network, but one that slices the image into a grid of just a few dozen patches rather than tens of thousands. This network is trained to make multiple detections simultaneously at multiple locations, picking out what it thinks are pedestrians. The second stage is another network that refines that result, and the third is a traditional deep network that delivers the final word on whether the car is seeing a person or, say, a mailbox.
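In control-flow terms, the cascade might look something like the sketch below. The function signatures, box format, and pruning threshold are hypothetical placeholders, not Google’s code; only the three-stage structure follows the description above.

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height (illustrative format)

def detect_pedestrians(
    frame,                                                   # e.g., an HxWx3 image array
    grid_net: Callable[[object], List[Tuple[Box, float]]],   # stage 1: coarse proposals
    refine_net: Callable[[object, Box], Tuple[Box, float]],  # stage 2: refinement
    full_net: Callable[[object, Box], bool],                 # stage 3: final verdict
    keep_threshold: float = 0.3,                             # assumed value, not from the paper
) -> List[Box]:
    # Stage 1: one pass over a coarse grid of a few dozen patches,
    # predicting several detections at several locations at once.
    candidates = grid_net(frame)

    # Stage 2: a second network tightens each candidate box and rescores it.
    refined = [refine_net(frame, box) for box, _ in candidates]

    # Stage 3: the slow, accurate deep network runs only on the small
    # portion of the image that survived the first two stages, which is
    # what makes the whole pipeline fast.
    return [box for box, score in refined
            if score >= keep_threshold and full_net(frame, box)]
```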

But because that slow, accurate network analyzes only the small portion of the image where pedestrians are likely to be, the whole process runs much faster: between 60 and 100 times as fast as the best previous networks, says Angelova. Running on graphics processors similar to those in Google’s self-driving cars and fed street images, the system was trained in about a day. It could then accurately identify pedestrians in around 0.25 seconds. (The researchers used a well-known pedestrian image database, rather than video from Google’s cars, because it let them compare their results with those of previous networks.)

“That’s still not the 0.07 seconds needed for real-time use,” admits Angelova. Self-driving cars need to know almost instantly whether they are facing pedestrians or not, in order to safely take evasive action. “But it means [the new system] could be complementary in case other sensors fail,” she says.
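Put in frame-rate terms, the gap Angelova describes is roughly 4 frames per second today versus about 14 for real-time use. A quick back-of-envelope check (the 65 km/h driving speed below is an assumed figure, not from the article):

```python
# Frame rates implied by the latencies quoted above.
for label, seconds in [("cascade today", 0.25), ("real-time target", 0.07)]:
    print(f"{label:>16}: {seconds:.2f} s/frame = {1 / seconds:4.1f} fps")

# Distance covered while one frame is processed, at an assumed urban
# speed of 65 km/h (about 18 m/s): this is why latency matters for
# taking evasive action.
speed_m_per_s = 65 / 3.6
print(f"distance traveled in 0.25 s: {speed_m_per_s * 0.25:.1f} m")
```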

As more powerful processors become available and the capacity of the neural network increases, Angelova expects that performance will improve. “For networks with even larger fields of view, one can consider even more speedups,” she says. By the time self-driving cars are available for the general public to buy, their distinctive spinning lidars may have disappeared altogether. 


Chinese Joint Venture Will Begin Mass-Producing an Autonomous Electric Car

With the Robo-01, Baidu and Chinese carmaker Geely aim for a fully self-driving car

The Robo-01 autonomous electric car shows off its butterfly doors at a reveal to the media in Beijing, in June 2022.

Tingshu Wang/Reuters/Alamy

In October, Jidu Automotive, a startup backed by Chinese AI giant Baidu and Chinese carmaker Geely, officially unveiled an autonomous electric car, the Robo-01 Lunar Edition. The car will go on sale in 2023.

At roughly US $55,000, the Robo-01 Lunar Edition is a limited edition, cobranded with China’s Lunar Exploration Project. It has two lidars, 5 millimeter-wave radars, 12 ultrasonic sensors, and 12 high-definition cameras. It is the first vehicle to offer on-board, AI-assisted voice recognition, with voice response times within 700 milliseconds, thanks to the Qualcomm Snapdragon 8295 chip.
