Can Chips Keep Their Cool?

Increasing speed but not power consumption is just one challenge for the next eight years


Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”

A friend of mine had a graphic design firm years ago with the name Pick Two. It came from an old saying in the design world that went like this: “Fast, good, cheap: Pick two.” But for more than 40 years, thanks to Moore’s Law—the rule of thumb that the number of transistors on a chip more or less doubles every two years—chip designers haven’t had to pick two.

Yet many engineers find it hard to contemplate the future of semiconductors without seeing a bunch of stumbling blocks on the way to making better, faster, cheaper chips. So if you ask the experts where chips will be 8 or 10 years from now, you’ll get a lot of different answers.

Back in February, my colleague, Associate Editor Rachel Courtland, went to the annual International Solid-State Circuits Conference, or ISSCC, in San Francisco, hoping to find some reality amid the hope and despair. I thought we’d get her into the studio to find out what she learned there. Rachel, welcome to the podcast.

Rachel Courtland: Thanks, Steven.

Steven Cherry: So, people have been saying for years that Moore’s Law is going to come to an end, but scientists and engineers keep finding new ways to keep it going. On the third hand, we’re approaching the size of individual atoms, so clearly this can’t go on forever—or can it? You can see why we civilians get confused. Help us out here.

Rachel Courtland: Well, there’s reason to be confused. People have been predicting the end of Moore’s Law for almost as long as it’s existed, and part of the confusion is that Moore’s Law is an economic prediction as much as it is a prediction based on physics. Nowadays, though, I think it’s fair to talk seriously about the end of Moore’s Law, because chips are getting far more difficult and expensive to produce at smaller feature sizes. And at the same time, we really are approaching fundamental physical limits: Transistors are getting so tiny that current leaks across them. So I’m not going to make any promises. Maybe Moore’s Law will continue for another 30 years, but a lot of people think 10 years is a pretty good prediction for how long it might last.

Steven Cherry: I understand there’s a new book that tries to predict the future here. It’s called Chips 2020, and you spoke with its author at the conference. Tell us about it.

Rachel Courtland: That’s right. It’s edited by a retired electrical engineer named Bernd Hoefflinger, who got a whole bunch of experts together to write chapters on interesting new technologies that could emerge by 2020, and on all the things that have to happen between now and then for those technologies to emerge.

Steven Cherry: So, is Hoefflinger an optimist or a pessimist?

Rachel Courtland: I’m not sure. So during our conversation, he spent a lot of time talking about all these technologies that have been developed in past decades but have never really emerged into circuits—things that he finds really exciting. But it’s still not clear whether they’re going to be implemented by chipmakers:

Bernd Hoefflinger [audio excerpt from interview]: We will have 6 billion users pretty soon who all want to have communication at the video level. Those data streams cannot be handled with an extrapolation of the present progress in the energy efficiency of servers and data centers. They expect 1000 times more computations per second within a decade. If we were to try to accomplish this with today’s technology, we would eat up the world’s total electric power within five years. Total electric power!

Steven Cherry: So, this is a whole new thing to worry about? We need processors to not only double in performance every couple of years, but they have to do it without increasing their power consumption? Has this been part of Moore’s Law all along and we just haven’t talked about it much?

Rachel Courtland: Yeah. It’s sort of a corollary to Moore’s Law called Dennard scaling, a set of rules about the benefits you get as transistors get smaller. Typically, if you shrink a transistor, it gets faster and it consumes less power. But that’s not so much the case anymore.
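To make that concrete, here is a minimal sketch of the classic, idealized Dennard scaling rules. The scale factor and the textbook relations below are the standard idealization, not figures from the interview:

```python
# A minimal sketch of ideal Dennard scaling (the textbook idealization,
# not figures from the interview). Shrinking every dimension and the
# supply voltage by a factor k classically gave faster transistors at
# constant power density -- the free lunch that has now eroded.

def dennard_scaling(k):
    """Return the classic scaling factors for a linear shrink by k."""
    return {
        "gate length":          1 / k,     # all dimensions shrink by k
        "supply voltage":       1 / k,     # V scales with dimensions
        "gate capacitance":     1 / k,     # C ~ area / oxide thickness
        "switching delay":      1 / k,     # so clock frequency rises by k
        "power per transistor": 1 / k**2,  # P ~ C * V^2 * f
        "power density":        1.0,       # k^2 more transistors cancel it
    }

# One process generation is roughly a 1.4x linear shrink (2x in area).
for quantity, factor in dennard_scaling(1.4).items():
    print(f"{quantity:22s} x{factor:.2f}")
```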

Steven Cherry: Okay. So what did Chips 2020 have to say about power consumption?

Rachel Courtland: Well, Hoefflinger says one of the first things we need to do is set a good target for where we want power consumption to be, and the target he settled on is the energy it takes a simple multiplier to perform a single computation. He says if we can get the energy of a very simple multiplier down from 1 picojoule—where it is right now—to about a femtojoule, we’ll reach the energy efficiency we need to meet the expectations for the coming decade. He likes to put that energy in terms of biology:

Bernd Hoefflinger: One femtojoule is 10 times lower than the firing energy of a synapse, be it cat or mouse or human.

Rachel Courtland: He says the synapse is basically the biological equivalent of a multiplier. And the reason the multiplier makes a good benchmark for this energy-efficiency target is that it’s very important for things like video and for mobile computing. And if you can get the energy down to the femtojoule level that Hoefflinger is talking about, you can enable all sorts of interesting applications, like biomorphic computing and very, very smart, low-energy chips that are very good at pattern recognition, like recognizing faces or identifying troubling patterns in cardiac signals.
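The arithmetic behind that target is simple but worth spelling out: Going from 1 picojoule to 1 femtojoule per multiply is a factor-of-1000 improvement, and at 1 femtojoule per operation a 1-watt power budget buys 10^15 multiplies per second. A quick sanity check in Python, using only the figures quoted above:

```python
PICO, FEMTO = 1e-12, 1e-15

energy_today  = 1 * PICO    # ~1 pJ per multiply today, per the interview
energy_target = 1 * FEMTO   # Hoefflinger's ~1 fJ target

print(f"required improvement: {energy_today / energy_target:.0f}x")        # 1000x
print(f"multiplies per second on a 1 W budget: {1.0 / energy_target:.0e}") # 1e+15
```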

Steven Cherry: I guess tiny sensors, too. If they need hardly any power, we can keep them going for years.

Rachel Courtland: That’s right. So no more batteries. Hopefully, we can just scavenge energy and power them.

Steven Cherry: So that’s the goal. How do we get there?

Rachel Courtland: Well, one thing Hoefflinger mentioned was more technology that exploits the third dimension on the chip. We’ve covered a lot of 3-D circuit activity at IEEE Spectrum lately: There’s 3-D packaging, where vertical wires connect one chip in a stack to another, which is increasingly being implemented. And then there are three-dimensional transistors, which basically take a gate and wrap it around the channel. Intel introduced that technology last year, and it’s in production this year. Hoefflinger says we have to go even further:

Bernd Hoefflinger: We are going short on the third dimension.... There’s an incredible amount of worthwhile and very helpful activity on 3-D interconnects of chips. We have no activity that I can see on three-dimensional integration at the transistor level. And let’s be fair—the trigate, or the FinFET, is being called a three-dimensional device—that’s correct. But what we have to shoot for is one level further, and that is 3-D merged transistors.

Steven Cherry: Okay, wait—so what’s a 3-D merged transistor?

Rachel Courtland: Well, it’s basically two transistors combined into one structure. If you want to make a simple inverter, which is basically a p-doped transistor paired with an n-doped transistor, ordinarily both of those devices have gates of their own. But in a merged transistor, they share a gate: You have a PMOS transistor on one side, an NMOS transistor on the other side, and a single gate between them. Hoefflinger says that they’re occasionally referred to as “hamburger inverters.”

Steven Cherry: So the gate is the meat here.

Rachel Courtland: The gate is the meat.
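Logically, the merged device is just a standard CMOS inverter; the novelty is geometric, stacking the two transistors vertically around one shared gate. A trivial behavioral model (purely illustrative; the 3-D geometry itself can’t be captured in code) pins down what the shared gate does:

```python
def merged_inverter(gate_high: bool) -> str:
    """Behavioral model of a CMOS inverter whose PMOS and NMOS share
    one gate. The same gate voltage drives both devices, and they
    conduct in complement, so the output is the input's inverse.
    (The vertical "hamburger" stacking itself is not modeled.)
    """
    pmos_on = not gate_high    # PMOS conducts when the gate is low
    nmos_on = gate_high        # NMOS conducts when the gate is high
    if pmos_on:
        return "VDD (logic 1)"   # output pulled up through the PMOS
    if nmos_on:
        return "GND (logic 0)"   # output pulled down through the NMOS

for level in (False, True):
    print("gate", "high" if level else "low", "->", merged_inverter(level))
```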

Steven Cherry: So is that going to be enough?

Rachel Courtland: No. So, he also says we should change the way we perform calculations:

Bernd Hoefflinger: Why don’t we detect in a multiplier the most significant bit, the leading one, first? In other words, when each and every one of us learns in the classroom to multiply two numbers in the decimal system, what do we do? We start with the least significant digits first. But what do we want to know when we multiply one figure, like an interest rate, by the money that I have available? I want to know the order of magnitude of my result. So basically, I should look at the most significant bits, the leading ones, first—or the leading tens, first.

Rachel Courtland: So the reason this works so well is that it reduces the complexity normally involved in multiplication. Instead of doing all these really complicated things to produce a very precise result, you basically boil the process down to a series of additions: You start with the most significant part first and continue on from there. Hoefflinger says this will significantly reduce the number of transistors you need in a multiplier, which will save power and also save space. Intel, he says, has started to experiment with this; at ISSCC, they mentioned they are implementing it in some of their chips. But the work isn’t very far along. And Hoefflinger says that if it were ever fully implemented, we could cut the energy consumption of chips by an order of magnitude or two.
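The interview doesn’t specify Intel’s circuit, but a minimal software sketch conveys the idea: Treat multiplication as a sum of shifted copies of one operand, taken from the other operand’s most significant set bit downward, so stopping early still yields a result with the right order of magnitude. The function name and the early-termination parameter are illustrative, not from the source:

```python
def msb_first_multiply(a, b, max_terms=None):
    """Multiply a * b by summing shifted copies of a, starting at b's
    most significant set bit. Truncating after max_terms additions
    gives an approximation whose order of magnitude is already right,
    which is the property Hoefflinger highlights.
    """
    result = terms = 0
    for bit in range(b.bit_length() - 1, -1, -1):   # MSB down to LSB
        if b & (1 << bit):
            result += a << bit                      # one addition per set bit
            terms += 1
            if max_terms is not None and terms >= max_terms:
                break                               # early, approximate answer
    return result

exact  = msb_first_multiply(1234, 5678)              # 7006652, the full product
approx = msb_first_multiply(1234, 5678, max_terms=2) # 6317080, right magnitude
print(exact == 1234 * 5678, approx)
```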

Steven Cherry: But that’s still not going to get circuits down to 1 femtojoule.

Rachel Courtland: No. So Hoefflinger has a third suggestion, which is that we basically need to change the way circuits are architected to make them more like the circuitry used in communications. He suggests building some sort of error-correction circuitry into chips, and the idea is that this will help us operate chips at a lower voltage:

Bernd Hoefflinger: We want to process both the signal and its complement—that is what differential signaling means. It is being used in the communication between chips, but it is not being used in logic anymore. At one time, mankind produced the most effective computers, with differential emitter-coupled logic, in the days of bipolar technology. So it’s been around for decades, and it needs to be reconsidered.

Rachel Courtland: Right now, one of the things preventing us from continuing on an energy-scaling path is that as we lower the operating voltage of chips, we wind up with transistors that simply don’t work, because there’s so much variability in manufacturing now that we can’t guarantee that 100 percent of our transistors will work, or even close to that.
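A toy model shows why carrying both a signal and its complement buys back noise margin at low supply voltage. This is purely illustrative Python, not anything from the book: A single-ended receiver compares one wire against a fixed threshold, while a differential receiver compares the two wires against each other, so noise common to both wires cancels out.

```python
import random

VDD = 0.4  # a low, near-threshold supply voltage (illustrative value)

def receive_single_ended(bit, noise):
    """One wire, compared against a fixed VDD/2 threshold."""
    v = (VDD if bit else 0.0) + noise
    return v > VDD / 2

def receive_differential(bit, noise):
    """Two wires carry the signal and its complement. The same
    common-mode noise shifts both, so comparing them cancels it."""
    v_true = (VDD if bit else 0.0) + noise
    v_comp = (0.0 if bit else VDD) + noise
    return v_true > v_comp

trials = 10_000
errors_se = errors_diff = 0
for _ in range(trials):
    bit = random.random() < 0.5
    noise = random.gauss(0.0, 0.15)   # variability comparable to VDD/2
    errors_se   += receive_single_ended(bit, noise) != bit
    errors_diff += receive_differential(bit, noise) != bit

print(f"single-ended errors: {errors_se}/{trials}")   # many flipped bits
print(f"differential errors: {errors_diff}/{trials}") # none: noise cancels
```

The model only covers common-mode noise; real low-voltage logic also needs the error-correction circuitry Hoefflinger mentions, since not all variability affects both wires equally.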

Steven Cherry: So it sounds like none of these strategies is entirely new?

Rachel Courtland: No, each of these ideas has its roots in the 1970s and the 1980s, but they’ve been set aside for quite a while.

Steven Cherry: And that’s—why?

Rachel Courtland: Hoefflinger says it’s because of the streamlined assembly line that’s been developed to make chips:

Bernd Hoefflinger: It is a result of process standardization. We want to port manufacturing technology overnight from here to Taiwan to Malaysia to Dubai; the result of that is no deviation in processing routines. It’s the power of standard libraries, which is a fantastic base to generate new chips. But over a decade or over two decades, it has, of course, basically some inertia in it.

Steven Cherry: All right. So we’re just eight years away from 2020 now. Are we going to make it?

Rachel Courtland: Well, it’s hard to say. Hoefflinger says there’s more attention to energy efficiency now, so these kinds of technologies are getting a second look. But he’s retired now, and I think he’s a bit more philosophical about it.

Steven Cherry: So, it’s a battle between the philosophers and the pessimists, and in eight years we’ll find out who’s right. But I don’t think it will take us that long to get you back in the studio, Rachel. Thanks for joining us today.

Rachel Courtland: Thanks, Steven.

Steven Cherry: We’ve been speaking with Rachel Courtland about whether chip designers and semiconductor engineers can keep Moore’s Law going for another eight years.

For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.

Announcer: “Techwise Conversations” is sponsored by National Instruments.

This interview was recorded 21 May 2012.
Audio engineer: Francesco Ferorelli
Read more “Techwise Conversations” or follow us on Twitter.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.
