A group of some of the most noted device engineers on the planet, including several IEEE Fellows and this year’s IEEE Medal of Honor recipient, is proposing a new way of judging the progress of semiconductor technology. Today’s measure, the technology node, began to go off the rails almost two decades ago. Since then, the gap between what a technology node is called and the size of the devices it can make has only grown. After all, there is nothing in a 7-nanometer chip that is actually that small. This mismatch is much more of a problem than you might think, argues one of the group, Stanford University professor H.-S. Philip Wong.
“The broader research community has a feeling that [device] technology is over,” he says. “Nothing could be further from the truth.”
But perception has proved pretty important. Nodes acted as markers along the industry roadmap that the chipmakers, their suppliers, their customers, and their future engineers could all aim for. Now the big processor makers have roadmaps that put them at a 1-nanometer node within a decade. (One nanometer is about the width of five atoms of silicon.) And that apparent endpoint is deterring students and new innovators from joining the effort to keep up the advancement of computing, says Wong, who is also vice president of corporate research at TSMC. “We felt that the number of students has dwindled at exactly the time we really need them.”
Node names made sense during the heyday of Moore’s Law, when the industry was basically shrinking the whole transistor at each node. The name actually referred to the most crucial dimension, the length of the transistor’s gate, which controls the flow of current through the device. But around 2000, physics started to get in the way of simply shrinking transistors. Engineers came up with a host of other ways to increase density and performance without necessarily changing the gate length, but the industry kept the convenient shorthand of ever-smaller node numbers. And here we are now at the 7-nanometer node, with transistors whose gates are about 15 nanometers long.
Wong and his collaborators decided they needed to come up with something better. “We’re at this juncture where the path to the future doesn’t seem to be very clear. 2D scaling is coming to an end, but we all know that there is a future,” says Wong. “So how do we quantify that?”
Their answer is a set of three numbers, which, unlike today’s system, increase as technology progresses rather than decreasing. Taken together, the three metrics describe a technology’s impact on a computing system as a whole.
The first, DL, represents the density of logic transistors per square millimeter that a technology can produce. Crucially, because it counts all the transistors in a vertical volume, it captures 3D techniques now in development, such as the 3D carbon nanotube chips that Wong helped pioneer.
The second number, DM, captures the density of memory cells in a computer’s main memory. Right now main memory consists of DRAM, but the metric won’t change if some new type of memory takes over in the future.
The final metric, DC, might be the biggest departure from the traditional process node measurement. DC is the density of the interconnects linking the processor to main memory. Overcoming the energy and latency penalty of transferring data between the processor and memory has been a focus of recent research, and increasing the density of interconnects is one way to achieve that.
In a paper introducing the new density metric this week in Proceedings of the IEEE, the group writes that a system using today’s leading-edge technologies for all three metrics would pack 38 million logic transistors/mm², 383 million DRAM cells/mm², and 12 thousand interconnects/mm². (They propose writing that as [38M, 383M, 12K].)
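To make the shorthand concrete, here is a minimal sketch (not from the paper; the function names are illustrative) of how the three densities could be bundled and printed in the proposed [DL, DM, DC] form:

```python
# Sketch of the proposed three-part density metric: logic transistor
# density (DL), main-memory cell density (DM), and processor-to-memory
# interconnect density (DC), all per square millimeter.

def format_density(value: float) -> str:
    """Format a per-mm^2 density with M/K suffixes, e.g. 38_000_000 -> '38M'."""
    if value >= 1_000_000:
        return f"{value / 1_000_000:g}M"
    if value >= 1_000:
        return f"{value / 1_000:g}K"
    return f"{value:g}"

def metric_label(dl: float, dm: float, dc: float) -> str:
    """Combine the three densities into the [DL, DM, DC] shorthand."""
    return "[" + ", ".join(format_density(v) for v in (dl, dm, dc)) + "]"

# The leading-edge values quoted in the article:
print(metric_label(38e6, 383e6, 12e3))  # -> [38M, 383M, 12K]
```

Note that all three numbers grow as technology improves, so the label gets bigger over time rather than shrinking toward an apparent endpoint the way node names do.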
Wong hopes the new metric will give practicing engineers and future engineers “the vision that there is a lot of room for improvement going forward.”