The Accelerator Wall: A New Problem for a Post-Moore’s Law World

Specialized chips and circuits may not save the computer industry after all


Samuel K. Moore is IEEE Spectrum’s semiconductor editor.

Illustration: A person running up against a brick wall. iStockphoto

Accelerators are already everywhere: The world’s Bitcoin is mined by chips designed to speed the cryptocurrency’s key algorithm, nearly every digital device that makes a sound uses a hardwired audio decoder, and dozens of startups are chasing speedy silicon that could make deep-learning AI omnipresent. This kind of specialization, in which common algorithms once run as software on CPUs are sped up by recreating them in hardware, has been seen as a way to keep computing from stagnating after Moore’s Law peters out in one or two more chip generations.

But it won’t work. At least, it won’t work for very long. That’s the conclusion that Princeton University associate professor of electrical engineering David Wentzlaff and his doctoral student Adi Fuchs come to in research to be presented at the IEEE International Symposium on High-Performance Computer Architecture this month. Chip specialization, they calculate, can’t produce the kinds of gains that Moore’s Law could. Progress on accelerators, in other words, will hit a wall just like shrinking transistors will, and it will happen sooner than expected.

To prove their point, Fuchs and Wentzlaff had to figure out how much of recent performance gains comes from chip specialization and how much comes from Moore’s Law. That meant examining more than 1,000 chip data sheets and teasing out what part of the chips’ generation-to-generation improvement was due to better algorithms and their clever implementation as circuits. In other words, they were looking to quantify human ingenuity.

So they did what engineers do: They made it into a dimensionless quantity. Chip specialization return, as they called it, answers the question: “How much did a chip’s compute capabilities improve under a fixed physical budget” of transistors?
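To make the idea concrete, here is a rough back-of-the-envelope sketch (the function, names, and numbers are illustrative assumptions, not the paper’s actual methodology): any performance gain explained purely by a bigger transistor budget cancels out, leaving only the part attributable to specialization.

```python
# Rough sketch of the "chip specialization return" idea, with hypothetical
# numbers. The paper's real analysis normalizes across 1,000+ chip data sheets.
def specialization_return(old_perf, new_perf, old_transistors, new_transistors):
    """Performance gain per unit of transistor budget, so that gains
    coming simply from having more transistors cancel out."""
    perf_gain = new_perf / old_perf
    budget_gain = new_transistors / old_transistors
    return perf_gain / budget_gain

# Example: a video-decoder ASIC gets 4x faster, but the newer process node
# also packs 3x as many transistors into the same silicon area.
print(specialization_return(1.0, 4.0, 1.0, 3.0))  # ~1.33x from specialization alone
```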

Using this metric, they evaluated video decoding on an application-specific integrated circuit (ASIC), gaming frame rates on a GPU, convolutional neural networks on an FPGA, and Bitcoin mining on an ASIC. The results were not heartening: Gains in specialized chips depend heavily on there continuing to be more and better transistors available per square millimeter of silicon. In other words, without Moore’s Law, chip specialization’s powers are limited.

So if specialization isn’t the answer, what is? Wentzlaff suggests that the industry learn to compute using things that will keep scaling even after logic stops. For example, the number of bits of flash memory available per square centimeter continues to increase independently of Moore’s Law, because the industry has moved to a 3-D technology that lets it stack 256 or more layers of cells. Fuchs and Wentzlaff have already begun exploring that idea, developing a computer architecture that speeds computation by having the processor look up previously computed results stored in memory instead of recomputing them.
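The lookup-instead-of-recompute idea is, at heart, memoization. Below is a minimal software sketch of that principle only; the Princeton architecture does this in hardware backed by dense non-volatile memory, not a Python dictionary, and the quantization step here is purely an illustrative assumption.

```python
import math

# Minimal sketch of looking up previous computations instead of redoing them.
_table = {}

def memoized_sin(x, precision=3):
    # Quantize the input so nearby values hit the same stored entry, the kind
    # of trade-off a hardware lookup table makes to keep the table finite.
    key = round(x, precision)
    if key not in _table:
        _table[key] = math.sin(key)   # compute once, store the result
    return _table[key]

print(memoized_sin(1.0472))  # first call: computed and stored
print(memoized_sin(1.0472))  # second call: looked up, not recomputed
```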

The end of Moore’s Law is “not the end of the world,” says Wentzlaff. “But we need to be prepared for it.”
