HP's Water-Cooled Supercomputer is Designed for the Hydrophobic

HP’s Apollo 8000 supercomputer does direct water cooling for efficiency but keeps water out of server trays to minimize the risk of drips

Anybody who has spilled a beverage on a smartphone or laptop knows that water and computers don’t mix. But Hewlett-Packard has designed a way to water-cool its servers with minimal risk of water leaking onto electrical components.

On Monday, HP introduced a supercomputer, called the Apollo 8000, that uses water cooling to improve energy efficiency. Engineers designed the system so that the active components are cooled by water circulating through the server racks, yet the water never enters the server enclosures.

Liquids are far denser than air and can absorb much more heat per unit volume, which makes them a more efficient medium for carrying heat away from computing gear. Energy efficiency is increasingly important to supercomputer operators, who can pay millions of dollars in yearly energy bills and sometimes struggle to get enough power to their facilities.
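
To put rough numbers on that first point, here is a back-of-the-envelope comparison of how much heat water and air can soak up per unit volume. The property values are textbook room-temperature figures, not anything HP has published:

```python
# Rough comparison of how much heat water and air can carry per unit volume,
# using typical room-temperature property values (illustrative, not HP's figures).

water_density = 1000.0        # kg/m^3
water_specific_heat = 4186.0  # J/(kg*K)

air_density = 1.2             # kg/m^3
air_specific_heat = 1005.0    # J/(kg*K)

# Volumetric heat capacity: energy absorbed per cubic meter per kelvin of warming.
water_vol_heat = water_density * water_specific_heat  # ~4.2 MJ/(m^3*K)
air_vol_heat = air_density * air_specific_heat        # ~1.2 kJ/(m^3*K)

print(f"Water absorbs roughly {water_vol_heat / air_vol_heat:,.0f}x more heat "
      "per unit volume than air for the same temperature rise.")
```

That ratio, on the order of a few thousand to one, is why moving a modest flow of water can replace an enormous volume of chilled air.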

There are a number of water-cooled systems already on the market. IBM, for example, circulates water through copper pipes [pdf] that run through server boards to cool its SuperMUC supercomputer. But not everyone is convinced by the idea of creating direct contact between water pipes and servers. Google, for one, uses water cooling in some of its data centers by circulating water along the outside of its server racks, rather than directly into server trays.

Like those systems, HP circulates cooled water into its server racks, but the company developed a multi-step heat-removal system that keeps the water separate from the servers. On top of each server is a package that holds a series of heat pipes: small copper pipes filled with a working fluid in both liquid and vapor form. These heat pipes wick heat away from the servers to a heat sink.

As the servers run, the liquid inside the heat pipes turns into a gas, which flows toward a narrow stainless-steel plate at the edge of each server tray. These plates absorb the heat from the processors and other components. To remove the heat from the plates, each rack has “water walls,” or tubes circulating cooled water, explains John Gromala, senior director for hyperscale product management at HP.

“The heat pipes take the heat from the top of the processor and move it to the side where the thermal bus bar is,” he says. “The cooling from the water causes a phase change (from gas to liquid) inside the pipes.”
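
The phase change is what makes heat pipes such effective carriers: evaporating the working fluid at the hot end and condensing it at the cold end moves far more energy per gram than merely warming a liquid would. A minimal sketch of that comparison, assuming water as the working fluid and round illustrative numbers (the article doesn’t say which fluid HP uses):

```python
# Illustrative comparison: heat absorbed by boiling a working fluid vs. by
# merely warming it. Assumes water; real heat pipes may use other fluids.

latent_heat_vaporization = 2.26e6  # J/kg, water at atmospheric pressure
specific_heat_liquid = 4186.0      # J/(kg*K), liquid water
temperature_rise = 10.0            # K, an assumed coolant temperature rise

heat_per_kg_phase_change = latent_heat_vaporization
heat_per_kg_warming = specific_heat_liquid * temperature_rise

print(f"Boiling 1 kg of water absorbs {heat_per_kg_phase_change / 1e3:.0f} kJ, "
      f"versus {heat_per_kg_warming / 1e3:.0f} kJ for warming it by {temperature_rise:.0f} K, "
      f"about {heat_per_kg_phase_change / heat_per_kg_warming:.0f}x more.")
```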

The first installation of this system is at the National Renewable Energy Laboratory, which is running a petaflop-scale data center without the need for compressor-based air conditioners. Instead, the building has pumps that circulate the water through the server racks, and the waste heat is used to warm the building. It’s a very efficient arrangement, achieving a power usage effectiveness (PUE) of 1.06, compared with 1.8 for the average data center.
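
For context, power usage effectiveness is simply the ratio of total facility energy to the energy that reaches the IT equipment, so the gap between 1.06 and 1.8 translates directly into overhead. A quick sketch; the helper function and the 1-megawatt load are hypothetical, used only for illustration:

```python
# PUE = total facility energy / IT equipment energy.
# Everything above 1.0 is overhead: cooling, power conversion, lighting.

def overhead_per_mw_of_it_load(pue, it_load_mw=1.0):
    """Hypothetical helper: non-IT power drawn for each megawatt of IT load."""
    return (pue - 1.0) * it_load_mw

for pue in (1.06, 1.8):
    print(f"PUE {pue}: {overhead_per_mw_of_it_load(pue):.2f} MW of overhead per MW of IT load")

# PUE 1.06: 0.06 MW of overhead per MW of IT load
# PUE 1.8:  0.80 MW of overhead per MW of IT load
```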

The supercomputer also saves energy by running on 480-volt direct current rather than using a traditional power distribution unit. “Because we have so many fewer transitions (between AC and DC), it can be much more efficient,” says Gromala. “Any conversions required are done in the system.”
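
The intuition behind “fewer transitions” is that each AC/DC or DC/DC conversion stage wastes a few percent of the power, and those losses multiply along the chain. A toy comparison with made-up per-stage efficiencies, not HP’s measurements:

```python
# End-to-end power delivery efficiency is the product of the per-stage efficiencies.
# The per-stage figures below are made-up illustrative values, not HP's numbers.

def end_to_end_efficiency(stage_efficiencies):
    total = 1.0
    for stage in stage_efficiencies:
        total *= stage
    return total

# A conventional chain: UPS, PDU transformer, server power supply, board regulators.
conventional = end_to_end_efficiency([0.95, 0.97, 0.92, 0.93])

# A shorter chain with fewer AC/DC transitions, converting inside the system.
fewer_stages = end_to_end_efficiency([0.97, 0.95])

print(f"Conventional chain: {conventional:.1%} of input power reaches the electronics")
print(f"Shorter chain:      {fewer_stages:.1%} of input power reaches the electronics")
```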

HP also introduced an air-cooled high-performance computer with a power distribution unit and networking hardware designed for the whole rack, rather than for individual components within it. Compared with a blade-server configuration, HP says, it performs more computations while using 60 percent less space and 40 percent less power.

As researchers pack more computing power into smaller spaces, energy-related issues will become more acute. Some even worry that the cost of energy will make power a bottleneck to the future of supercomputing. Liquid cooling, whether it’s circulating water through server racks or immersing servers in fluids, won’t solve all those problems, but it is one way to keep computing costs, and the environmental footprint of data centers, in check.
