Servers consume a lot of energy in data centers, but it’s easy to forget their carbon footprints begin before they’re ever placed on racks inside air-conditioned mega-warehouses. After all, it takes energy to extract minerals and manufacture them into things like processors, motherboards and memory modules.
These “embodied” carbon emissions are a target of research by computer scientists at Carnegie Mellon University, Microsoft, and the University of Washington, who created and tested prototype servers they call GreenSKUs designed to run in the Azure cloud service environment. (SKU refers to a stock keeping unit, which is how the hardware world sometimes refers to physical products.) In a paper presented at the Annual International Symposium on Computer Architecture in June and July, the researchers describe a method for identifying older components that can be re-used in servers without affecting operations.
That’s important because many components are currently taken out of commission while they still have useful life in them, says Akshitha Sriraman, a computer science professor at Carnegie Mellon who was involved in the research. “Very frequently, if one component goes bad or is not efficient, the entire server goes waste,” she says.
To optimize the refurbished servers further, the researchers are looking to software. In a paper presented on 3 November at the HotInfra conference, the researchers discussed their ongoing efforts to add a software layer that plans which compute tasks to run on the GreenSKUs versus standard Azure servers based on performance needs.
The reused components are DDR4 and DDR5 memory modules, as well as solid-state drives, all salvaged from previously used servers that are no longer in operation. In addition to these reused parts, the GreenSKUs rely on a more energy-efficient processor, eliminating some of the emissions tied to running the server.
The Next Frontier for Reducing Carbon Emissions
Reductions in the carbon emissions involved in cloud computing are increasingly essential: cloud computing could represent 20 percent of global emissions by 2030, according to the Association for Computing Machinery’s Technology Policy Council. The same report found that the industry currently accounts for 3 percent of the energy consumed globally each year. But the path to reducing emissions caused by running server farms is relatively clear. Cloud companies are already invested in increasing energy efficiency and relying more on renewable energy sources.
When accounting for Microsoft Azure’s performance requirements and the energy required to run cloud server operations, the methodology netted an 8 percent reduction in total embodied and operational carbon emissions.
At scale, the research team calculated that their reuse and reduction technique could lead to a 0.1 to 0.2 percent reduction in global carbon emissions. As Sriraman points out, that’s a small percentage but a huge number, comparable to all of the emissions from smartphone use in the United States.
Backwards Compatibility Makes Reuse Possible
Microsoft currently replaces Azure servers every 3 to 5 years to maintain efficiency, the researchers wrote, and reusable components aren’t salvaged to run in other servers. To make reuse possible, the researchers decided to take advantage of technology advances that allow for backward compatibility.
Compute Express Link (CXL) controllers, which connect processors to memory and storage drives, can now accommodate both of the two most recent generations of DRAM modules, DDR4 and DDR5.
To help data-center engineers retrofit servers, the researchers put together a framework for identifying used components that won’t cause unacceptable losses in performance, or consume so much energy to operate that the benefit of reusing them is lost.
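The selection idea can be captured in a few lines. This is a hypothetical sketch, not the researchers’ actual framework: the component names, thresholds, and carbon figures below are all invented for illustration. The core test is that a reused part must stay within a performance budget and that its embodied-carbon savings must outweigh any extra operational carbon.

```python
# Hypothetical sketch of the selection criterion: a reused component is
# worth keeping only if its performance penalty stays within budget AND
# its net carbon savings (embodied savings minus extra operational
# emissions over its lifetime) is positive. All numbers are made up.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    perf_penalty_pct: float      # slowdown vs. a brand-new part
    embodied_savings_kg: float   # kg CO2e avoided by reusing
    extra_operational_kg: float  # kg CO2e added over the part's lifetime

def worth_reusing(c: Component, max_penalty_pct: float = 5.0) -> bool:
    net_savings = c.embodied_savings_kg - c.extra_operational_kg
    return c.perf_penalty_pct <= max_penalty_pct and net_savings > 0

candidates = [
    Component("DDR4 module via CXL", 3.0, 40.0, 10.0),
    Component("older SSD", 8.0, 25.0, 5.0),
]
reusable = [c.name for c in candidates if worth_reusing(c)]
# Only the DDR4 module passes; the SSD exceeds the performance budget.
```

In practice the real framework evaluates whole combinations of components against Azure’s workload requirements, but the same accept/reject logic applies per part.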
Overcoming Operational Shortcomings
The researchers say the components they reused inevitably came with tradeoffs, which they had to mitigate with workarounds.
The older-generation RAM introduced higher latency and lower memory bandwidth, which the researchers addressed with a memory-pooling technique from a system called Pond. The older SSDs also came with lower bandwidth, along with slower read/write speeds; a configuration called RAID striping mitigated these problems.
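The striping idea is simple enough to sketch: a request is split into fixed-size chunks that are distributed round-robin across several drives, so N slower SSDs can serve the chunks in parallel and their bandwidth adds up. This is a minimal toy model of RAID-0-style striping, not Azure’s storage stack; the stripe size and device count are arbitrary.

```python
# Toy model of RAID-0-style striping: data is split into fixed-size
# stripes dealt round-robin across devices, so reads and writes can
# proceed on all devices in parallel.
STRIPE_SIZE = 4  # bytes per stripe (tiny, for illustration)

def stripe_write(data: bytes, num_devices: int) -> list[bytes]:
    """Split `data` round-robin into per-device buffers."""
    devices = [bytearray() for _ in range(num_devices)]
    for i in range(0, len(data), STRIPE_SIZE):
        devices[(i // STRIPE_SIZE) % num_devices] += data[i:i + STRIPE_SIZE]
    return [bytes(d) for d in devices]

def stripe_read(devices: list[bytes]) -> bytes:
    """Reassemble the original data by interleaving stripes."""
    out = bytearray()
    offsets = [0] * len(devices)
    i = 0
    while any(off < len(dev) for off, dev in zip(offsets, devices)):
        j = i % len(devices)
        out += devices[j][offsets[j]:offsets[j] + STRIPE_SIZE]
        offsets[j] += STRIPE_SIZE
        i += 1
    return bytes(out)

data = b"embodied-carbon-example-data"
parts = stripe_write(data, 3)
assert stripe_read(parts) == data
```

With three drives each delivering a third of the bandwidth of a new SSD, the aggregate throughput of a striped read roughly matches the newer part, which is why striping compensates for the older drives’ slower speeds.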
“You’re saving in terms of embodied emissions,” Sriraman says, “but not compromising too much.”
The AMD Bergamo processors the researchers chose for their energy efficiency have a lower clock frequency and a smaller last-level cache than the chips currently deployed in Azure. There was no workaround for this shortfall, so the researchers folded it into their larger framework for identifying combinations of components that meet the performance needs of a given server.
The prototypes show that you don’t always need the “latest and greatest” to meet the requirements of a cloud service, Sriraman says.
“A lot of these applications do perfectly fine,” she says.