Is There a Moore’s Law for Energy Efficiency?
The energy needed to perform computations is falling exponentially, a trend that predates the transistor
Rachel Courtland: Hello, this is Rachel Courtland for IEEE Spectrum radio.
The world’s first general-purpose computer, the ENIAC, weighed 30 tons and needed about 150 kilowatts to perform its calculations—enough electricity to power about 100 000 smartphones—yet it had just a fraction of the speed of an iPod Nano. As powerful as computers have gotten since the days of the ENIAC, they’ve also come to use a lot less energy—and thank goodness, or else computing as we know it would be impossible. As The Atlantic recently pointed out, if a MacBook Air had the efficiency of a machine from 20 years ago, it would be able to last only a few seconds on battery power.
My guest today is Jonathan Koomey, who is a consulting professor in the department of civil and environmental engineering at Stanford University. Earlier this year, Koomey and a few colleagues published a paper looking at the Moore’s Law–like exponential progress in the energy efficiency of computers over the past six decades. He joins us by phone from his home in Burlingame, California. Dr. Koomey, welcome.
Jonathan Koomey: Glad to be here.
Rachel Courtland: How did you get interested in the subject of electrical-efficiency trends in computers?
Jonathan Koomey: Back around the year 2000, there were a couple of guys running around talking about how all computers and related equipment used 13 percent of U.S. electricity, and that the total was going to grow to half of all U.S. electricity within 10 years. Now that turned out to be bunk, but out of those spurious claims grew interesting research focused on the energy use and efficiency of different kinds of computing equipment. At the time there was also growth in data centers, and people started to wonder just how much electricity those big computing facilities were using, so I developed a research program around the electricity used by these different kinds of computing devices. As that research developed, I started to wonder whether there were any long-term regularities in those trends. And as I began to explore them, my colleagues and I discovered this relationship going back more than 60 years: the energy efficiency of computation doubling roughly every year and a half.
Rachel Courtland: And were you surprised by what you found?
Jonathan Koomey: When people hear about Moore’s Law, they know it as the doubling of the number of components on a chip every two years, starting in 1975. But this trend actually precedes Moore’s Law; it is a fundamental characteristic of information technology that uses electrons for switching. That was the big surprise: these trends started at the beginning of the electronic computer age, not with microprocessors.
Rachel Courtland: And why do you think this trend works so well from the time of vacuum tubes on? There are a lot of very different technologies that are all thrown together into this one trend line.
Jonathan Koomey: What we figured out was that the things you do to improve performance almost invariably improve computations per kilowatt-hour as well. For microprocessors, that means shrinking the transistors: there are fewer electrons in each transistor, a shorter distance from the source to the drain, and a smaller distance between transistors, so communication goes faster. But it’s also true for tube computers. Shrinking the tubes, reducing their capacitance, and reducing the current going through them all do both things for you: they reduce energy use, and they also improve performance. So those things are intimately connected.
Rachel Courtland: I read somewhere that you did make some measurements of your own. Is that right?
Jonathan Koomey: Yes. I made measurements first by going up to Lawrence Berkeley lab, where I used to work, and crawling under the desks of my colleagues there, who patiently allowed me to do that. I also visited the Microsoft computer archives, which keep many different kinds of old computers, not just those running Microsoft operating systems, in a temperature- and humidity-controlled vault. All of those machines are in working order, with the original disks, so I got to measure a number of computers there. And I visited the home of one of the local computer collectors, Erik Klein, who has a whole garage of old computers. I got to measure some of his as well, including the first personal computer, the Altair 8800.
Rachel Courtland: So in order to make these power consumption measurements, you have to run the computer at full capacity. Is that right? Did you have a particular program that you were using to challenge these computers?
Jonathan Koomey: In general our goal was to stress the computer so that it was operating at its full capacity, because our measure of efficiency is the total number of computations the machine can perform in an hour at full capacity, divided by the energy it uses in that hour. What we ran depended on what access we had to the software on the computer. So there wasn’t a standard piece of software that we used, which obviously would have been preferable, but because each of these computers was so different, we had to improvise in a lot of ways.
Rachel Courtland: Some people have gone as far as to call this trend a new law, Koomey’s Law. Do you think it deserves its own name?
Jonathan Koomey: That’s not for me to decide. I think it’s an important trend, and I think that it has big implications for the development of computers going forward.
Rachel Courtland: What do you think this trend can tell us about the future of computing?
Jonathan Koomey: If these trends continue (and my friends at Intel say they have at least another 10 years of these performance and efficiency improvements ahead), they bode well for more sophisticated, lower-power mobile computing, sensors, and controls. Doubling every year and a half means that over the course of 10 years, the efficiency of computers goes up by a factor of 100. So we will be able to deploy more and more mobile sensors that use less and less power to collect information and bring it back to some wireless base station. The people doing the most interesting work now are focused on ultralow-power computing that was never before possible. Josh Smith, who’s at Intel and also the University of Washington, is working with mobile sensors that have no power source. No battery. They use the power from stray television and radio signals to power themselves, and that means you’re free from the constraint of having to have a power source. That’s an extreme example. There are, of course, other ways to scavenge energy from the local environment. You can use photovoltaic cells to convert light to electricity; you can convert motion to energy, like the old watches that you would shake and they would then operate for hours at a time. You can also use various technologies to convert heat to electricity. So we’re going to get a whole lot more clever about how we scavenge small amounts of energy, which will allow us to put cheap, small sensors everywhere that connect wirelessly.
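[The factor-of-100 figure Koomey cites follows directly from compounding: a doubling every 1.5 years gives 2^(10/1.5), or roughly 100×, over a decade. A minimal sketch of that arithmetic:]

```python
# Efficiency gain implied by a doubling every 1.5 years over 10 years.
doubling_period_years = 1.5
horizon_years = 10

# Number of doublings in the horizon, then the compounded gain.
doublings = horizon_years / doubling_period_years   # ~6.67 doublings
gain = 2 ** doublings                               # ~102x, i.e. "a factor of 100"
print(f"Efficiency gain over {horizon_years} years: {gain:.0f}x")
```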
Rachel Courtland: At the same time, we seem to be facing the end of Moore’s Law, a point where transistors can’t be scaled down any further. What implications does that have for this trend?
Jonathan Koomey: People have been talking about the end of Moore’s Law almost since Moore came up with it back in the ’60s and ’70s, and we’ve always been clever at finding ways around the constraints we’ve encountered. But there are physical limits. Back in 1985, the physicist Richard Feynman calculated the theoretical limit of efficiency improvements compared to the computing technology of the day, and he found a factor of 10 to the 11th, a 100-billion-fold improvement that one could imagine over that technology. Since then, using the data from our paper, we’ve improved by a factor of about 40 000. That leaves significant headroom for future improvements, but because of the power of exponential growth, it still gives us only about 32 years before we hit Feynman’s theoretical limit. So we will come to an end of our particular methods for scaling transistors, but there will potentially be other ways to do computing. When Feynman did his calculation, he said, I’m postulating a transistor that uses three atoms. If we’re clever enough to use individual nuclei to do our computing, some kind of quantum computing or some other clever variation on it, then of course the theoretical limits will be pushed further back. So I think we will need to get to some radical shift in the way we deliver computing, because of the physical limits that you’ve identified.
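[The 32-year figure can be reproduced from the numbers quoted in the interview: of Feynman’s 10^11 headroom, a factor of about 40 000 has already been used, and the remainder runs out after log2 of what’s left, times 1.5 years per doubling. A short sketch, assuming those quoted figures:]

```python
import math

# Figures as quoted in the interview.
feynman_headroom = 1e11   # Feynman's estimated limit vs. mid-1980s technology
achieved = 4e4            # improvement realized since then, per Koomey's data
doubling_period_years = 1.5

remaining = feynman_headroom / achieved   # ~2.5 million-fold still available
doublings_left = math.log2(remaining)     # ~21.3 doublings
years_left = doublings_left * doubling_period_years
print(f"Years until Feynman's limit: {years_left:.0f}")  # ~32 years
```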
Rachel Courtland: So is there some device that you’re really looking forward to having?
Jonathan Koomey: Anything that is so low power that it doesn’t require external electricity, from a battery or some other source, opens all sorts of possibilities for transforming the way we think about the world, collect data about it, and make decisions.
Rachel Courtland: It sounds about as far removed from the ENIAC as I could imagine.
Jonathan Koomey: We’ve come a long way, and ENIAC was a very limited-purpose device, right? They were using it to compute firing tables and some other calculations for the military. But now we’re talking about computing devices that are cheap and ultralow power and can be dispersed everywhere throughout society. That will just transform everything about how we think about the world and operate in it.
Rachel Courtland: Great. Well, thank you very much for joining us, Dr. Koomey.
Jonathan Koomey: My pleasure.
Rachel Courtland: We’ve been speaking with Jonathan Koomey of Stanford University about energy efficiency in computing. For IEEE Spectrum radio, I’m Rachel Courtland.
This interview was recorded 21 November 2011.
Audio engineer: Francesco Ferorelli.
NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.