Moore’s Law Might Be Slowing Down, But Not Energy Efficiency

Miniaturization may be tough, but there's still room to drive down power consumption in modern computers

Illustration: Serge Bloch


No one can say exactly when the era of Moore’s Law will come to a close. Nevertheless, semiconductor experts like us can’t resist speculating about that day because it will mark the end of an extraordinary period of history, with uncertain implications for one of the world’s most important industries.

Here’s what we do know. The last 15 years have seen a big falloff in how much performance improves with each new generation of cutting-edge chips. So is the end nigh? Not exactly, because even though the fundamental physics is working against us, it appears we’ll have a reprieve when it comes to energy efficiency.

There are many ways to gauge a computer’s efficiency, but one of the most easily calculated metrics is peak-output efficiency, which measures the efficiency of a processor when it’s running at its fastest.

Peak-output efficiency is typically quoted as the number of computations that can be performed per kilowatt-hour of electricity consumed. And according to a peer-reviewed paper published in 2011 in the IEEE Annals of the History of Computing, it doubled like clockwork every year and a half or so for more than five decades.

This trend started well before the first microprocessor, way back in the mid-1940s. But it began to come to an end around 2000. Growth in both peak-output efficiency and performance started to slow, weighed down by the physical limitations of shrinking transistors. Chipmakers turned to architectural changes—such as putting multiple computing cores in a single microprocessor—but they weren’t able to maintain historical growth rates.

These days, we’ve found, it takes about 2.7 years for peak-output efficiency to double. That’s a substantial slowdown. Historically, a decade of doubling boosted efficiency by a factor of a hundred; at current rates, it would take 18 years to see a hundredfold gain.
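The doubling-time comparison above follows directly from the exponential-growth arithmetic. A minimal sketch (the function name is ours, for illustration only):

```python
import math

def years_for_gain(doubling_time_years, gain):
    """Years needed to multiply efficiency by `gain`, given a doubling time."""
    return math.log2(gain) * doubling_time_years

# Heyday of Moore's Law: peak-output efficiency doubled every ~1.5 years.
print(round(years_for_gain(1.5, 100)))  # ~10 years for a hundredfold gain

# Current rate: doubling every ~2.7 years.
print(round(years_for_gain(2.7, 100)))  # ~18 years for a hundredfold gain
```

A hundredfold gain requires log2(100) ≈ 6.6 doublings, so stretching the doubling time from 1.5 to 2.7 years stretches the wait from about a decade to about 18 years.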

Fortunately, the news isn’t all bad. Our computing needs have changed. For years after Moore’s landmark 1965 paper, computers were expensive, relatively rare, and regularly pushed to their computing peak. Now that they’re ubiquitous and cheap, the emphasis in chip design has shifted from fast CPUs in stationary machines to ultralow-power processing in mobile appliances, such as laptops, cellphones, and tablets.

Today, most computers run at peak output only a small fraction of the time (a couple of exceptions being high-performance supercomputers and Bitcoin miners). Mobile devices such as smartphones and notebook computers generally operate at their computational peak less than 1 percent of the time, according to common industry measurements. Enterprise data servers spend less than 10 percent of the year operating at their peak. Even computers used to provide cloud-based Internet services operate at full blast less than half the time.

In this new regime, a good power-management design is one that minimizes how much energy a device consumes when it’s idle or off. And the better indicator of energy efficiency is how much electricity a computer consumes on average—not when it’s operating at full blast.

We’ve recently defined a measure of efficiency that’s more in sync with how chips are used nowadays, which we call “typical-use efficiency.” Like peak-output efficiency, it’s measured in computations per kilowatt-hour. This time, however, it’s calculated by dividing the number of computations performed over the course of a year by the total electricity consumed—a weighted sum of the energy a processor and its supporting circuitry use in different modes over that same period. For example, a laptop might operate at peak power when its user is playing a game, but this only happens a tiny fraction of the time. Other common activities, such as word processing or video playback, might consume a tenth as much electricity, since only a fraction of the chip is needed for these functions, and smart power management can actively shut off circuitry between keystrokes and video frames.
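To make the weighted-sum definition concrete, here is a sketch of the typical-use efficiency calculation for a hypothetical laptop. All numbers are invented for illustration; they are not AMD data, and the real accounting would include supporting circuitry and finer-grained power modes.

```python
HOURS_PER_YEAR = 8760

# Hypothetical usage profile:
# (fraction of the year, power draw in watts, computations per hour)
modes = [
    (0.01, 45.0, 3.6e12),  # peak: e.g., gaming
    (0.19, 4.5,  2.0e11),  # light use: word processing, video playback
    (0.80, 0.5,  0.0),     # idle or asleep
]

# Computations performed over the year, summed across modes.
total_computations = sum(frac * HOURS_PER_YEAR * cph for frac, _, cph in modes)

# Usage-weighted energy consumed over the year, in kilowatt-hours.
total_kwh = sum(frac * HOURS_PER_YEAR * watts / 1000 for frac, watts, _ in modes)

typical_use_efficiency = total_computations / total_kwh  # computations per kWh
print(f"{typical_use_efficiency:.3g} computations per kWh")
```

Note how the idle mode dominates the hours but contributes little energy, which is why driving down idle power moves typical-use efficiency so strongly even when peak power is unchanged.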

Encouragingly, typical-use efficiency seems to be going strong, based on tests performed since 2008 on Advanced Micro Devices’ chip line. Through 2020, by our calculations for an AMD initiative, typical-use efficiency will double every 1.5 years or so, putting it back to the same rate seen during the heyday of Moore’s Law.

[Chart: growth in computing efficiency over time. Data sources: AMD; Koomey et al. (2011)]

These gains come from aggressive improvements to circuit design, component integration, and software, as well as power-management schemes that put unused circuits into low-power states whenever possible. The integration of specialized accelerators, such as graphics processing units and signal processors that can perform certain computations more efficiently, has also helped keep average power consumption down.

Of course, as with any exponential trend, this one will eventually end, and circuit designers will have become victims of their own success. As idle power approaches zero, it will constitute a smaller and smaller fraction of the energy consumed by a computer. In a decade or so, energy use will once again be dominated by the power consumed when a computer is active. And that active power will still be hostage to the physics behind the slowdown in Moore’s Law.

Over the next few decades, we’ll have to rethink the fundamental design of computers if we want to keep computing moving forward at historical rates. In the meantime, steady improvements in everyday energy efficiency will give us a bit more time to find our way.

This article originally appeared in print as “Efficiency’s Brief Reprieve.”

About the Authors

Jonathan Koomey is a research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University. IEEE Fellow Samuel Naffziger is an Advanced Micro Devices corporate fellow. They began collaborating on computing efficiency in 2014, as part of 25x20, an AMD energy-efficiency initiative that is targeting a 25X improvement in PC efficiency by 2020.
