3 New Chips to Help Robots Find Their Way Around

Intel and academic groups are designing specialized hardware to speed path planning and other aspects of robot coordination

Samuel K. Moore is IEEE Spectrum’s semiconductor editor.

Intel engineers are experimenting with specialized chips to improve path planning and coordination in multirobot systems.
Photo: Intel

Robots have a tough job making their way in the world. Life throws up obstacles, and it takes a lot of computing power to avoid them. At the IEEE International Solid-State Circuits Conference last month in San Francisco, engineers presented some ideas for lightening that computational burden. That’s a particularly good thing if you’re a compact robot, with a small battery pack and a big job to do.

Search-and-Rescue SoC

Engineers at Intel are experimenting with robot-specific accelerators as part of a collaborative multirobot system. “Robots will play a prominent role either by assisting humans or replacing humans where they’re inefficient or where they’re in dangerous situations,” Vinayak Honkote, an Intel research scientist based in Bangalore, told engineers. “The inherent complexity involved in some of these applications means they’ll need multiple robots.”

Intel’s mobile robots carry a 16-square-millimeter system-on-chip that integrates everything needed for autonomy.
Photo: Intel

For its mobile robots, the company developed a 16-square-millimeter system-on-chip (SoC) that integrates everything needed for autonomy and consumes just 37 milliwatts on average. Besides real-time sensing and processing elements, the SoC includes two purpose-built accelerators. One handles the robot’s path planning, a potentially tricky endeavor as the bots scout their way to find humans in search-and-rescue scenarios. For general-purpose processors, path planning can be so compute- and memory-intensive that it is typically off-loaded to the cloud, the Intel engineers say. So they designed a system to perform that function within a small mobile robot’s power and memory budget. They did the same for another potential bottleneck: motion control.
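Intel hasn’t published the accelerators’ internals, but motion control is, at heart, a feedback loop run at a fixed rate. The sketch below is a purely illustrative Python version of one common approach, a PID heading controller for a differential-drive robot; the class, gains, and units are all hypothetical and are not Intel’s design.

```python
import math

# Illustrative PID heading controller (hypothetical; not Intel's design).
# Given a target heading and the robot's current heading, it outputs an
# angular-velocity command for the wheel controllers.
class HeadingPID:
    def __init__(self, kp=2.0, ki=0.0, kd=0.1):  # gains are made up
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, current, dt):
        # Wrap the heading error into (-pi, pi] so the robot turns the short way.
        error = math.atan2(math.sin(target - current), math.cos(target - current))
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = HeadingPID()
print(pid.step(math.pi / 2, 0.0, 0.02))  # command to turn 90 degrees left
```

The appeal of a dedicated hardware block is that a loop like this can run continuously for far less energy than it would cost on a general-purpose core.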

Will it work for drones, too? Not yet, according to another Intel engineer, Sriram Muthukumar. Expanding path planning and motion control from two dimensions to three will take some work, but it is something they’re hoping to tackle.

Wave Computer

The University of Minnesota’s ambitions for its new tech are broader than planning paths for robots, but its new in-memory “wavefront” computing chip is potentially a good match for the job. The chip is built around a 40 x 40 array that makes use of an unusual type of logic in which values are encoded in how long it takes a signal to pass through its gates. The elements of the array represent the vertices in a graph and the edges that connect them. By programming those elements, the graph can simulate the terrain a robot must traverse, including hills, valleys, and impassable obstacles. A wave of voltage starting at the edges sweeps across the array in a matter of nanoseconds, in the process executing the A* algorithm, which determines the shortest, lowest-energy path through the simulated terrain to a target, explained Luke Everson, a University of Minnesota Ph.D. student in the laboratory of Chris Kim.
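The chip computes this in parallel, in hardware; a sequential software analogue of the same wavefront expansion is sketched below. It amounts to A* with a zero heuristic (that is, Dijkstra’s algorithm) spreading outward from the start cell over a weighted grid. The terrain costs and coordinates here are illustrative, not taken from the Minnesota design.

```python
import heapq

# Sequential analogue of the chip's wavefront expansion: cell weights
# stand in for terrain cost, and None marks an impassable obstacle.
def wavefront_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    parent = {}
    frontier = [(0, start)]                  # the expanding "wave"
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                         # stale entry; skip
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nd = d + grid[nr][nc]        # edge cost = terrain cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    parent[(nr, nc)] = (r, c)
                    heapq.heappush(frontier, (nd, (nr, nc)))
    # Walk back from goal to start to recover the lowest-cost path.
    path, node = [goal], goal
    while node != start:
        node = parent[node]
        path.append(node)
    return path[::-1]

terrain = [[1, 1,    5],
           [1, None, 5],
           [1, 1,    1]]                     # 5 = hill, None = obstacle
print(wavefront_path(terrain, (0, 0), (0, 2)))  # -> [(0, 0), (0, 1), (0, 2)]
```

On the chip, every array element propagates the wave simultaneously, which is why the answer arrives in nanoseconds rather than after this loop’s many iterations.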

Time-based computing’s advantages include “a very compact area and low power consumption,” Everson said. Compared with solutions powered by CPUs and GPUs, the array is about 1 million times as energy efficient, making it a good fit for mobile robots. Future work could include a way to make wavefront pathfinding work in three dimensions.

The SLAMmer

Finding a good path through the world works only if you can map the terrain and figure out where you are in it. That’s the job of simultaneous localization and mapping (SLAM) technology, which continuously updates this information at a pace that ensures your robot doesn’t crash into anything. It starts with sensors like lidar and inertial measurement units. But for flying drones and other weight-challenged bots, cameras are best, argues Ziyun Li, a doctoral candidate working with Dennis Sylvester, David Blaauw, and Hun-Seok Kim at the University of Michigan.

The problem is that visual-only SLAM is a computational beast. At 60 frames per second of stereoscopic VGA-quality video, you’ve got about 1 gigabit per second streaming through. From that, you have to identify landmarks, see where those landmarks go as the robot moves, figure out the robot’s position and orientation from the way those landmarks moved, then find new landmarks as the old ones disappear from view. And you’ve got only milliseconds to do it, on a power budget of less than a watt. 
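That 1-gigabit figure is easy to sanity-check. The arithmetic below assumes 24-bit color frames, an assumption on my part; 8-bit grayscale would cut the rate to roughly a third.

```python
# Back-of-the-envelope check of the visual-SLAM input data rate.
width, height  = 640, 480   # VGA resolution
cameras        = 2          # stereoscopic pair
fps            = 60
bits_per_pixel = 24         # assuming full-color frames

bps = width * height * cameras * fps * bits_per_pixel
print(f"{bps / 1e9:.2f} Gbit/s")  # -> 0.88 Gbit/s, i.e. about 1 gigabit per second
```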

The Michigan team managed to pull it off using a purpose-built chip at 80 frames per second while consuming only 240 milliwatts. The chip was made up of three key parts. One was a convolutional neural network that picked out features to use as landmarks from images as they streamed in. Another was an accelerator that matched the landmarks from frame to frame. The third refined the robot’s trajectory over multiple frames. Tested on a standard automotive 1-kilometer drive recording, it was 97.9 percent accurate.
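The article doesn’t spell out how the matching accelerator works, but frame-to-frame matching of this kind is commonly done by comparing binary feature descriptors by Hamming distance, as in ORB-style pipelines. The sketch below is a toy software version under that assumption; the descriptors are made-up 8-bit values where real ones are typically 256 bits, and on the Michigan chip this stage is a dedicated hardware block.

```python
# Toy frame-to-frame landmark matcher using Hamming distance on binary
# descriptors (an assumption; not necessarily the Michigan chip's method).
def hamming(a, b):
    return bin(a ^ b).count("1")

def match_landmarks(prev_desc, curr_desc, max_dist=10):
    matches = []
    for i, d_prev in enumerate(prev_desc):
        # Find the current-frame descriptor closest to this landmark.
        j, dist = min(((j, hamming(d_prev, d)) for j, d in enumerate(curr_desc)),
                      key=lambda t: t[1])
        if dist <= max_dist:              # reject weak matches
            matches.append((i, j))
    return matches

prev_desc = [0b10110010, 0b01100111]  # toy 8-bit descriptors
curr_desc = [0b01100110, 0b10110011]
print(match_landmarks(prev_desc, curr_desc))  # -> [(0, 1), (1, 0)]
```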
