In May, Intel announced the most dramatic change to the architecture of the transistor since the device was invented. The company will henceforth build its transistors in three dimensions, a shift that—if all goes well—should add at least a half dozen years to the life of Moore’s Law, the biennial doubling in transistor density that has driven the chip industry for decades.
But Intel’s big announcement was notable for another reason: It signaled the start of a growing schism among chipmakers. Despite all the great advantages of going 3-D, a simpler alternative design is also nearing production. Although it’s not yet clear which device architecture will win out, what is certain is that the complementary metal-oxide-semiconductor (CMOS) field-effect transistor (FET)—the centerpiece of computer processors since the 1980s—will get an entirely new look. And the change is more than cosmetic; these designs will help open up a new world of low-power mobile electronics with fantastic capabilities.
There’s a simple reason everyone’s contemplating a redesign: The smaller you make a CMOS transistor, the more current it leaks when it’s switched off. This leakage arises from the device’s geometry. A standard CMOS transistor has four parts: a source, a drain, a channel that connects the two, and a gate on top to control the channel. When the gate is turned on, it creates a conductive path that allows electrons or holes to move from the source to the drain. When the gate is switched off, this conductive path is supposed to disappear. But as engineers have shrunk the distance between the source and drain, the gate’s control over the transistor channel has gotten weaker. Current sneaks through the part of the channel that’s farthest from the gate and also through the underlying silicon substrate. The only way to cut down on leaks is to find a way to remove all that excess silicon.
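The weakening gate control described above can be put in rough quantitative terms with the textbook subthreshold-conduction relation (a standard device-physics approximation, not something specific to any one chipmaker's process):

```latex
I_{\text{off}} \;\propto\; \exp\!\left(\frac{V_{GS} - V_{th}}{n \, kT/q}\right)
```

Here $kT/q$ is the thermal voltage, about 26 millivolts at room temperature, $V_{th}$ is the threshold voltage, and $n \geq 1$ is an ideality factor that measures how completely the gate, rather than the drain and the substrate, controls the channel. When the source and drain crowd closer together, the gate's grip loosens, $n$ rises, and the current below threshold falls off more gradually with gate voltage, which is exactly the leakage the new device geometries are designed to suppress.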
Over the past few decades, two very different solutions to this problem have emerged. One approach is to make the silicon channel of the traditional planar transistor as thin as possible, by eliminating the silicon substrate and instead building the channel on top of insulating material. The other scheme is to turn this channel on its side, popping it out of the transistor plane to create a 3-D device. Each approach comes with its own set of merits and manufacturing challenges, and chipmakers are now working out the best way to catch up with Intel’s leap forward. The next few years will see dramatic upheaval in an already fast-moving industry.
Change is nothing new to CMOS transistors, but the pace has been accelerating. When the first CMOS devices entered mass production in the 1980s, the path to further miniaturization seemed straightforward. Back in 1974, engineers at the IBM T. J. Watson Research Center in Yorktown Heights, N.Y., led by Robert Dennard, had already sketched out the ideal progression. The team described how steadily reducing gate length, gate insulator thickness, and other feature dimensions could simultaneously improve switching speed, power consumption, and transistor density.
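Dennard's constant-field recipe can be summarized in a few lines (the scale factor $\kappa$ is the standard textbook notation, not the article's):

```latex
\begin{aligned}
\text{dimensions } (L,\, W,\, t_{ox}) &\;\to\; 1/\kappa \\
\text{supply voltage } V &\;\to\; 1/\kappa \\
\text{gate delay} &\;\propto\; 1/\kappa \\
\text{power per transistor} &\;\propto\; 1/\kappa^{2} \\
\text{transistor density} &\;\propto\; \kappa^{2} \\
\text{power density} &\;\to\; \text{constant}
\end{aligned}
```

As long as these rules held, each generation of chips got faster and denser without running any hotter, because the quadratic drop in per-transistor power exactly offset the quadratic rise in density.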
But this set of rules, known as Dennard’s scaling law, hasn’t been followed for some time. During the 1990s boom in personal computing, the demand for faster microprocessors drove down transistor gate length faster than Dennard’s law called for. Shrinking transistors boosted speeds, but engineers found that as they did so, they couldn’t reduce the voltage across the devices to improve power consumption. So much current leaked when the transistor was off that a strong voltage, applied at the drain to pull charge carriers through the channel, was needed to switch the device as quickly as possible and limit the power lost in each transition.
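The bind engineers found themselves in can be sketched numerically. The toy model below uses hypothetical parameter values chosen only to show the trends (`dynamic_power` and `subthreshold_leakage` are illustrative helpers, not from the article): switching power falls with the square of the supply voltage, but keeping transistors fast at a lower supply voltage means lowering the threshold voltage too, and off-state leakage grows exponentially as the threshold drops.

```python
import math

# Toy model of the scaling bind (illustrative numbers, not real device data).

def dynamic_power(c_load, v_dd, freq, activity=0.1):
    """Switching power: P = a * C * Vdd^2 * f. Lowering Vdd helps quadratically."""
    return activity * c_load * v_dd**2 * freq

def subthreshold_leakage(i0, v_th, n=1.5, thermal_v=0.026):
    """Off-state current: I_off ~ I0 * exp(-Vth / (n * kT/q)).
    Cutting the threshold voltage raises leakage exponentially."""
    return i0 * math.exp(-v_th / (n * thermal_v))

# Halving Vdd would cut switching power by 4x...
p_full = dynamic_power(c_load=1e-15, v_dd=1.0, freq=1e9)
p_half = dynamic_power(c_load=1e-15, v_dd=0.5, freq=1e9)

# ...but keeping the device fast would mean halving Vth as well,
# which multiplies the off-state leakage by two orders of magnitude.
leak_high_vth = subthreshold_leakage(i0=1e-6, v_th=0.4)
leak_low_vth = subthreshold_leakage(i0=1e-6, v_th=0.2)

print(p_full / p_half)               # quadratic win on switching power
print(leak_low_vth / leak_high_vth)  # exponential loss to leakage
```

The exponential term is the reason scaling the voltage stalled: a quadratic saving on the dynamic side was being traded for an exponential penalty on the leakage side.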