Schools Pool to Stay Cool

Massachusetts research universities set up joint computing center to save on power costs

21 March 2012—When one of the world’s most ambitious university computer centers opens later this year, it will be a “green” facility—green in its environmental cred and green in its bottom line, too.

The US $95 million, 8400-square-meter Massachusetts Green High-Performance Computing Center (MGHPCC), located in Holyoke, Mass., will be on a par with the data centers that house the world’s fastest supercomputers, say its proponents. The center has pooled the computing resources of five of the top research universities in the northeastern United States: Boston University, Harvard University, MIT, Northeastern University, and the University of Massachusetts.

Labs, researchers, and computer centers within each member university will be invited to install their racks of servers and clusters. The motivation to do so, the center’s director says, comes down to economies of scale: Build it bigger, smarter, and cheaper, and they will come.

Located on a canal fed by the nearby Connecticut River, the center, built with funding from the state of Massachusetts, EMC Corp., and Cisco Systems, will draw most of its electricity from local hydropower stations. Its electricity provider is also completing a 4.5-megawatt photovoltaic farm—the largest in New England—and a portion of that power will go to the computing center.

In total, says MGHPCC interim executive director John Goodhue, at least 80 percent of the center’s electricity sources will be carbon-free. But it takes more than a low carbon footprint to get a genomics research group at Boston University or a statistics lab at MIT to move its computing hardware 150 kilometers away to an economically depressed former mill town in Western Massachusetts. Holyoke has some of the cheapest electricity in the Northeast, and cutting-edge sustainable computer-center design will enable further cost savings.

As an example, Goodhue says, consider a hypothetical $100 000 server rack today. It has 288 processor cores and cranks through 2 to 3 teraflops at peak performance. At maximum load, that rack uses electricity at a rate of about 15 kilowatts. In Cambridge, Mass., where electricity costs 14 cents per kilowatt-hour, the rack would cost $16 000 to power for a year.

MGHPCC, with its substantially lower power rates, would first cut the rack’s operating electricity bill nearly in half, says Goodhue. It would then save further on cooling. (Cooling accounts for a substantial portion of a high-end computer’s power budget these days, Goodhue says.) So the hypothetical rack’s real bottom line for the year—electricity plus cooling—would run $32 000 in Cambridge but only $13 000 at MGHPCC. “Start adding up the racks, and suddenly you’re talking about money,” Goodhue says.
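
To see how those rounded figures fit together, here is a back-of-the-envelope sketch in Python. Only the 15-kilowatt rack draw, the 14-cent Cambridge rate, and the rounded annual totals come from the article; the load factor, the Holyoke rate, and the PUE values below are illustrative assumptions chosen to land near those totals.

```python
# Back-of-the-envelope rack operating cost, following the article's example.
# Assumed values (load factor, Holyoke rate, PUE) are not article figures;
# they are chosen so the results roughly match the article's rounded totals.

HOURS_PER_YEAR = 24 * 365


def annual_cost(rack_kw, rate_per_kwh, pue, load_factor=0.87):
    """Yearly bill for one rack: IT energy times rate, scaled by PUE for cooling and overhead."""
    it_kwh = rack_kw * HOURS_PER_YEAR * load_factor
    return it_kwh * rate_per_kwh * pue


# Cambridge: 14 cents/kWh (from the article), assume PUE ~2 (cooling doubles the bill).
cambridge = annual_cost(rack_kw=15, rate_per_kwh=0.14, pue=2.0)

# Holyoke: assumed ~8 cents/kWh and PUE ~1.4 (both assumptions, not article figures).
holyoke = annual_cost(rack_kw=15, rate_per_kwh=0.08, pue=1.4)

print(f"Cambridge: ~${cambridge:,.0f}/yr, Holyoke: ~${holyoke:,.0f}/yr")
# Prints roughly $32,000 versus $13,000, in line with the article's rounded estimates.
```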

Moreover, at all five member universities, real estate is precious, says Goodhue. “Space is scarce and expensive,” he says. “You could build a costly 90 000-square-foot [8400-square-meter] office building in Cambridge.” Or by relocating 680 racks of university server farms and data centers to Holyoke, member universities could free up that same amount of coveted space in existing on-campus buildings. 

MGHPCC was ultimately able to relocate the computing hub of major Massachusetts universities a 2-hour drive west of Boston because it can use otherwise dormant fiber-optic lines laid down during another era in the Internet’s history, Goodhue says.

“Each university will have a gigabit [per second] or more,” Goodhue says. “The vast majority of the fiber was put in the ground around 2000 as part of the dot-com boom.” MGHPCC can easily use the dark fiber, Goodhue says, as it happens to terminate in Cambridge—home of MIT and Harvard—and Chicopee, a town that’s a few kilometers from the new computer center.

“From the point of view of the user of the system, if you’re at Boston University, your machine will look like it’s on the BU campus network,” he says. “The same as if the machine were down the hall.”

Of course, many research computers in university labs and departments are indeed just down the hall. But, says Curtis Hillegas, director of research computing at Princeton University, clusters powerful enough to do cutting-edge research pose cooling nightmares and can require more resources than a makeshift down-the-hall setup affords.

This problem is discussed in terms of a computer center’s power usage effectiveness (PUE) number, the ratio of the facility’s total power draw to the power that actually reaches its computing equipment. A facility that needs as much energy to keep its computers cool as it does to run them has a PUE of 2. The ideal facility, one that requires no cooling or any power beyond what its circuits consume, has a PUE of 1.

“Our old data center had a PUE of about 2.2,” Hillegas says. “An academic building—a closet or converted server room—has a PUE of around 3.”

By contrast, Hillegas says, Princeton’s new High-Performance Computing Research Center runs at a PUE of between 1.3 and 1.5. Google boasted that its data centers scored between 1.11 and 1.14 in 2011. (Especially for a company like Google, which burns through 100 MW of power or more for just one data center, shaving fractions off its PUE can mean saving millions of dollars in electricity costs per year.)
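
To make the PUE arithmetic concrete, the short sketch below computes a PUE from facility and IT power draws and estimates what shaving a few hundredths off it is worth at a Google-scale site. The 100-megawatt IT load echoes the article’s example; the 7-cent electricity rate and flat utilization are assumptions.

```python
# PUE is total facility power divided by the power delivered to the IT equipment.
# The 100 MW IT load echoes the article's Google example; the 7-cent rate and
# flat year-round utilization are assumptions, not article figures.

HOURS_PER_YEAR = 24 * 365


def pue(total_facility_kw, it_kw):
    """Power usage effectiveness: 1.0 is ideal, 2.0 means overhead doubles the bill."""
    return total_facility_kw / it_kw


def annual_savings(it_kw, pue_before, pue_after, rate_per_kwh=0.07):
    """Yearly savings from lowering PUE at a fixed IT load."""
    delta_kw = it_kw * (pue_before - pue_after)
    return delta_kw * HOURS_PER_YEAR * rate_per_kwh


print(pue(total_facility_kw=220, it_kw=100))            # 2.2, like the old Princeton center
print(f"${annual_savings(100_000, 1.14, 1.11):,.0f}")   # ~$1.8 million per year at a 100 MW IT load
```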

Because MGHPCC is still under construction, the industry consortium The Green Grid hasn’t rated its PUE yet. But, Goodhue says, the center’s design simulations show it would weigh in with a leading-edge PUE in the range of the Google and Princeton data centers.

Goodhue says two main factors at the center contribute to its competitive estimated PUE. First, continuous monitoring of weather and wind data, combined with networks of sensors inside the center that track temperature throughout the building, means the mechanical cooling system needs to run only 20 percent of the time, according to Goodhue. During the rest of the time—especially over three chilly New England seasons plus cooler summer nights—outside air can cool the computers just as well as air-conditioning.
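
In practice, that decision is the job of an airside-economizer controller. The sketch below is a deliberately simplified illustration of the idea; the temperature and humidity thresholds are assumptions, not MGHPCC’s actual setpoints.

```python
# Highly simplified airside-economizer logic: the thresholds and the sensor
# inputs here are illustrative assumptions, not MGHPCC's control parameters.

FREE_COOLING_MAX_TEMP_C = 24.0    # assumed outside-air temperature limit
FREE_COOLING_MAX_HUMIDITY = 0.80  # assumed relative-humidity limit


def cooling_mode(outside_temp_c, outside_rel_humidity):
    """Decide between free (outside-air) cooling and mechanical chillers."""
    if outside_temp_c <= FREE_COOLING_MAX_TEMP_C and outside_rel_humidity <= FREE_COOLING_MAX_HUMIDITY:
        return "free-cooling"   # dampers open, fans only
    return "mechanical"         # chillers / air-conditioning engaged


# In a New England climate, readings like these favor outside air most of the year.
print(cooling_mode(outside_temp_c=10.0, outside_rel_humidity=0.55))  # -> "free-cooling"
print(cooling_mode(outside_temp_c=31.0, outside_rel_humidity=0.70))  # -> "mechanical"
```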

The other factor, Goodhue says, comes from the cooling vents themselves. Older data centers just pointed their air-conditioning units into the computer room, spraying cold air like a garden hose shooting water into a swimming pool.

But with a hot-aisle containment system, the vented hot air and piped-in cooler air all flow over the server racks themselves. The rest of the room never feels any breeze. Hot-aisle containment is a popular idea in sustainable data-center design today; MGHPCC is hardly alone on that front. But the next-generation cooling technique is just one more draw for a university photonics lab or structural engineering department that could never afford to build such large-scale, cost-saving infrastructure on its own.

The heat that MGHPCC’s systems will tame won’t come from computer clusters of its own; the center doesn’t own any. MGHPCC provides the facilities for powering, cooling, communicating with, and backing up computer clusters, but it does not provide the computers themselves. Those come from each member university’s faculty, departments, and research groups.

The electrical, computer, and social engineering problems of clustering the many computing resources of five distinct and often-competing institutions under one roof are another challenge altogether, and the subject of the next story in this two-part series.

About the Author

Mark Anderson, an IEEE Spectrum contributing editor, was once a VAX lab assistant at the Carleton College computer center. In January 2012, he reported for us on a data-mining analysis of Scrabble and Words with Friends.

 
