The Accelerator Wall: A New Problem for a Post-Moore’s Law World

Specialized chips and circuits may not save the computer industry after all

Illustration: A person running up against a brick wall. iStockphoto

Accelerators are already everywhere: The world’s Bitcoin is mined by chips designed to speed the cryptocurrency’s key algorithm, nearly every digital device that makes a sound relies on hardwired audio decoders, and dozens of startups are chasing speedy silicon that could make deep-learning AI omnipresent. This kind of specialization, in which common algorithms once run as software on CPUs are made faster by recreating them in hardware, has been seen as a way to keep computing from stagnating after Moore’s Law peters out in one or two more chip generations.

But it won’t work. At least, it won’t work for very long. That’s the conclusion that Princeton University associate professor of electrical engineering David Wentzlaff and his doctoral student Adi Fuchs come to in research to be presented at the IEEE International Symposium on High-Performance Computer Architecture this month. Chip specialization, they calculate, can’t produce the kinds of gains that Moore’s Law could. Progress on accelerators, in other words, will hit a wall just like shrinking transistors will, and it will happen sooner than expected.

To prove their point, Fuchs and Wentzlaff had to figure out how much of recent performance gains comes from chip specialization and how much comes from Moore’s Law. That meant examining more than 1,000 chip data sheets and teasing out what part of their improvement from generation to generation was due to better algorithms and their clever implementation as circuits. In other words, they were looking to quantify human ingenuity.

So they did what engineers do: They made it into a dimensionless quantity. Chip specialization return, as they called it, answers the question: “How much did a chip’s compute capabilities improve under a fixed physical budget” of transistors?
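To make the idea concrete, here is one way such a metric could be computed. This is a simplified sketch for illustration only; the function name, its signature, and the example numbers are assumptions, not the authors’ actual formulation.

# Illustrative sketch only: not the authors' actual formula.
# Assumption: compare two generations of an accelerator and divide the
# raw throughput gain by the growth in the transistor budget, so that
# improvement that merely rides on Moore's Law cancels out and what
# remains reflects specialization (better algorithms and circuits).

def specialization_return(old_throughput, new_throughput,
                          old_transistors, new_transistors):
    """Dimensionless gain attributable to specialization rather than
    to a bigger transistor budget (hypothetical metric)."""
    throughput_gain = new_throughput / old_throughput
    budget_growth = new_transistors / old_transistors
    return throughput_gain / budget_growth

# Example: a decoder that runs 4x faster but uses 3x the transistors
# shows a specialization return of about 1.33.
print(specialization_return(1.0, 4.0, 1_000_000_000, 3_000_000_000))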

Using this metric, they evaluated video decoding on an application-specific integrated circuit (ASIC), gaming frame rate on a GPU, convolutional neural networks on an FPGA, and Bitcoin mining on an ASIC. The results were not heartening: Gains in specialized chips depend heavily on the continued availability of more and better transistors per square millimeter of silicon. In other words, without Moore’s Law, chip specialization’s powers are limited.

So if specialization isn’t the answer, what is? Wentzlaff suggests that the industry learn to compute using technologies that will keep scaling even after logic transistors stop. For example, the number of bits of flash memory available per square centimeter continues to increase independently of Moore’s Law, because the industry has moved to a 3D technology that can stack 256 or more layers of cells. Fuchs and Wentzlaff have already begun exploring that approach, developing a computer architecture that speeds computation by having the processor look up previous computations stored in memory instead of recomputing them.
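The lookup-instead-of-recompute idea is easiest to see in software terms. The sketch below is a minimal Python analogy that assumes nothing about the actual hardware design: a cache of previously computed results stands in for the dense memory that would hold them on chip, and a table lookup replaces a recomputation.

from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_transform(x: int) -> int:
    # Stand-in for a costly computation whose result is worth storing.
    return sum(i * i for i in range(x))

expensive_transform(100_000)  # computed once, result stored
expensive_transform(100_000)  # answered from the lookup table, no recomputation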

The end of Moore’s Law is “not the end of the world,” says Wentzlaff. “But we need to be prepared for it.”


The First Million-Transistor Chip: the Engineers’ Story

Intel’s i860 RISC chip was a graphics powerhouse

Intel’s million-transistor chip development team

In San Francisco on Feb. 27, 1989, Intel Corp., Santa Clara, Calif., startled the world of high technology by presenting the first ever 1-million-transistor microprocessor, which was also the company’s first such chip to use a reduced instruction set.

The number of transistors alone marks a huge leap upward: Intel’s previous microprocessor, the 80386, has only 275,000 of them. But this long-deferred move into the booming market in reduced-instruction-set computing (RISC) was more of a shock, in part because it broke with Intel’s tradition of compatibility with earlier processors—and not least because after three well-guarded years in development the chip came as a complete surprise. Now designated the i860, it entered development in 1986 about the same time as the 80486, the yet-to-be-introduced successor to Intel’s highly regarded 80286 and 80386. The two chips have about the same area and use the same 1-micrometer CMOS technology then under development at the company’s systems production and manufacturing plant in Hillsboro, Ore. But with the i860, then code-named the N10, the company planned a revolution.
