Across the globe, data centers are stacked to the ceiling with shelf after shelf of humming servers. The sheer number of CPUs running at once takes a toll on data centers’ energy bills, but the real culprit driving up the facilities’ energy costs may actually be their thermostats.
Like humans, computers work best in a small temperature window roughly between 18 °C and 27 °C, with the sweet spot being about 24 °C. Data centers are on track to require an estimated 848 terawatt-hours by 2030, and up to 40 percent of that total will go toward cooling alone.
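The scale of that projection is worth spelling out. A quick back-of-the-envelope calculation (illustrative only, using the figures cited above) shows what 40 percent of the projected total would amount to:

```python
# Rough arithmetic on the projection above: if data centers draw an
# estimated 848 terawatt-hours in 2030 and up to 40 percent of that
# goes to cooling, cooling alone would account for roughly 339 TWh.
total_twh = 848
cooling_share = 0.40

cooling_twh = total_twh * cooling_share
print(cooling_twh)  # ≈ 339 TWh
```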
Small improvements in energy consumption can be eked out by improving server operation efficiency. However, some experts believe that drastically changing how data centers are kept cool—or even warm—may be the solution.
A paper published on 18 October in Cell Reports Physical Science and another presented at the 2022 International Electron Devices Meeting conference present two very different visions: one in which data centers are kept at a sweat-inducing 41 °C, and another in which they’re cooled down to an inhospitable 100 kelvins (roughly −173 °C), respectively.
The Case for Cryogenic Cooling
Arnout Beckers and Alexander Grill are coauthors on the cryogenic cooling paper. Beckers is an engineer and Grill is a researcher, both at Belgian nanoelectronics and digital technologies company Imec. They explain that cryogenically cooling a data center would not mean turning the whole building into an ice cube. Instead, the idea relies on immersing server systems in extremely cold, electrically nonconductive liquids, such as liquid nitrogen.
“The main difference is in the cooling with liquids instead of air,” Beckers and Grill write in a joint email response. “Liquid-immersion cooling is already a trend coming to data centers, but with liquids above ambient temperature.”
At these extremely cold temperatures, computing systems can see gains in efficiency as electrical resistance drops and transistors switch more efficiently. Yet Beckers and Grill say that cooler isn’t always better. For example, cooling these classical servers down to the temperatures needed for quantum computers (1 kelvin, or −272 °C) wouldn’t make the computers hyperefficient.
Beckers, Grill, and their coauthors argue that by bringing servers’ temperatures down through cryogenic cooling, data centers could see a 16-fold increase in computational performance—partially offset by a 4-fold increase in the energy used to power the cooling system.
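To see why a 16-fold performance gain can outweigh a 4-fold cooling-energy increase, it helps to look at energy per unit of computation rather than total power. The sketch below uses hypothetical, illustrative numbers (the equal compute/cooling baseline is an assumption, not a figure from the paper); only the 16× and 4× multipliers come from the article:

```python
# Back-of-the-envelope model of the cryogenic trade-off: total facility
# energy divided by computational throughput. All baseline numbers are
# illustrative placeholders, not values from the paper.

def energy_per_unit_compute(compute_power, cooling_power, performance):
    """Total power drawn, divided by computational throughput."""
    return (compute_power + cooling_power) / performance

# Hypothetical baseline: one unit each of compute and cooling power,
# producing one unit of throughput.
baseline = energy_per_unit_compute(compute_power=1.0,
                                   cooling_power=1.0,
                                   performance=1.0)

# Cryogenic case, applying the paper's headline multipliers:
# 16x performance, 4x cooling energy; compute power held fixed
# here purely for illustration.
cryo = energy_per_unit_compute(compute_power=1.0,
                               cooling_power=4.0,
                               performance=16.0)

print(baseline, cryo)  # cryo comes out well below baseline
```

Under these assumptions the cryogenic center spends far less energy per computation, which is the "net benefit" Beckers and Grill describe, even though its cooling bill dominates its power draw.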
“In a cold data center, most of the energy will go to the cooling, and only a small fraction will be for compute. The aim is to lower the compute energy as much as possible to maximize the net benefit,” Beckers and Grill write.
The Case for Warm Data Centers
Rakshith Saligram is a graduate student in electrical and computer engineering at the Georgia Institute of Technology whose research focuses on cryogenic computing. He says that while work toward cryogenic server cooling has gained traction in recent years, it still faces many practical challenges, including the prohibitive cost of overhauling cooling systems and the introduction of new points of failure.
With those challenges in mind, perhaps a warmer data center is the solution. This is the argument that Shengwei Wang, director of the Research Institute for Smart Energy at the Hong Kong Polytechnic University, and his coauthors make in their Cell Reports Physical Science paper on the global energy saving potential of warming up data centers.
Wang and his colleagues evaluated ongoing research on raising the temperature of data centers and found that allowing temperatures to reach 41 °C could result in an energy savings of 56 percent globally. Unlike cryogenic cooling, which aims to reduce energy costs by improving compute efficiency, warm data centers would instead reduce energy costs by reducing the overall use of “chiller cooling,” such as air conditioning, in favor of “free cooling” from ambient external air.
Essentially, raising the internal temperature of the data center creates a smaller difference between internal and external temperatures, and thus requires less active cooling to maintain the internal temperature.
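The free-cooling argument can be illustrated with a toy model: whenever the outside air is cooler than the data center’s temperature setpoint, ambient air can carry heat away and the chillers can idle. The hourly temperatures and the simple threshold rule below are hypothetical simplifications for illustration, not data or methodology from Wang’s paper:

```python
# Toy model of "free cooling": chillers are only needed during hours when
# the ambient air is at or above the facility's setpoint. Real chiller
# behavior is far more complex; this only illustrates the threshold idea.

def chiller_hours(setpoint_c, hourly_ambient_c):
    """Count the hours that require active (chiller) cooling."""
    return sum(1 for t in hourly_ambient_c if t >= setpoint_c)

# A hypothetical 24-hour ambient temperature profile for a warm day (°C).
ambient = [16, 15, 14, 14, 15, 18, 22, 26, 29, 31, 32, 33,
           33, 32, 31, 29, 27, 25, 23, 21, 20, 19, 18, 17]

hours_cool_setpoint = chiller_hours(24, ambient)  # traditional setpoint
hours_warm_setpoint = chiller_hours(41, ambient)  # warm-data-center setpoint

print(hours_cool_setpoint)  # chillers run for much of the day
print(hours_warm_setpoint)  # ambient air never exceeds 41 °C: all free cooling
```

Raising the setpoint from 24 °C to 41 °C turns every hour of this hypothetical day into a free-cooling hour, which is the mechanism behind the energy savings Wang’s team projects.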
Even though servers have traditionally been kept cool, Wang says that advances in materials and server technology mean that’s no longer strictly necessary. In their paper, Wang and his colleagues cite server-performance guidelines from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) that already recommend operating temperatures up to 45 °C for newer classes of data processors.
However, several things still need to go right for this warm data center future to come to fruition, such as the widespread adoption of microprocessors and transistors that can handle warmer temperatures. Even then, these systems may face physical obstacles, says Benjamin Lee, a professor of electrical and systems engineering at the University of Pennsylvania who has previously written about data centers for IEEE Spectrum.
“Microprocessors and their transistors could be designed to operate at higher temperatures, but their performance may suffer,” Lee says. “Current leakage increases with temperature, which means a microprocessor operating at higher temperatures will use more power to compute the same answer. Transistors could be tuned to control leakage better, but those solutions may harm transistor performance, causing the microprocessor to compute more slowly.”
Ultimately, Lee says that both warm and cryogenically cooled data centers could have their own benefits and downsides. The winner will come down to the balance of cost and performance, as well as each data center’s threshold for risk.
“Warm data centers represent an incremental optimization and improvement beyond the current state of the art [while] cryogenic cooled data centers represent a more expensive, speculative solution,” Lee says. “Warm data centers reduce the costs of cooling without much impacting performance. Cryogenic data centers significantly increase the costs of cooling with the goal of improving performance by even more.”