What Do You Do With an Old Supercomputer? Crush It Like a Rusty Car!

Some of the most sophisticated machines ever built end their life as scrap

2 min read
Photo: Randi Klett

Several new supercomputers are slated to go online within the next two years, including the highly anticipated 2017 Summit system, which will be located at Oak Ridge National Laboratory in Tennessee. But when Summit fires up, Titan, the number two ranked U.S. supercomputer, which currently resides at Oak Ridge, will be pushed out—and then crushed, like a rusty jalopy.   

That’s what happens to these massive, multimillion-dollar U.S. national laboratory machines when they reach the ends of their lives.

Douglas Doerfler, a former computer architect at Sandia National Laboratories in Albuquerque, New Mexico, witnessed the demise of one of the top supercomputers of its time, Red Storm. Red Storm “literally got crushed, like you would an old car,” Doerfler recalls.

Most of the computer can’t be reused. Chips that ran classified operations, such as national security problems, must be completely wiped and then physically demolished. 

Doerfler is now working on Lawrence Berkeley National Laboratory’s next supercomputer, a Cray XC40 called Cori. Even though Cori won’t be completed until mid-2016, a termination date has already been set, Doerfler says. Engineers must know the retirement plan so they can prepare for the transition between machines.

Such a transition is happening to Los Alamos National Laboratory’s Cielo, which is being phased out to make way for the new Trinity system this summer, says Jim Lujan, the New Mexico lab’s high performance computing program manager.

The life of a supercomputer is surprisingly short: about three to five years, roughly how long you might keep your laptop. This is due to the high turnover rate in processor technology, the pursuit of exascale computing speeds (over a billion billion calculations per second), and the taxing demands of ever more complicated calculations and algorithms. What’s more, the older the machine gets, the harder it is to find replacements for failed components.
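To put that "billion billion" figure in perspective, here is a back-of-the-envelope calculation (the laptop speed is an illustrative assumption, not a figure from the article): how long a machine at each speed would take to grind through 10^18 calculations.

```python
# Illustrative arithmetic only; the laptop figure is an assumed round number.
EXASCALE = 10**18        # exascale: one billion billion (10^18) calculations per second
LAPTOP = 10**11          # rough assumption: a laptop at ~100 billion calculations per second

workload = 10**18        # a job requiring 10^18 calculations

exascale_seconds = workload / EXASCALE   # 1 second on an exascale machine
laptop_seconds = workload / LAPTOP       # 10 million seconds on the laptop
laptop_days = laptop_seconds / 86_400    # roughly 116 days

print(f"Exascale machine: {exascale_seconds:.0f} s")
print(f"Assumed laptop:   {laptop_days:.0f} days")
```

The gap, seven orders of magnitude under these assumptions, is why labs keep replacing machines that are only a few years old.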

“As reliability gets worse, the amount of repairs and the cost of ownership increases,” says Lujan. It makes more economic sense to buy a new machine and recycle as much of the old supercomputer as possible.

When Cielo’s time comes, all of its memory will be erased, and a certified Department of Energy recycler will securely destroy the equipment. The recycler recovers most of the metals—gold, copper, aluminum—and returns the proceeds to the lab. The lab will also try to find a new home for the parts of the machine that never handled classified information, such as the power and cooling infrastructure and the portions of the computer that ran only unclassified operations.

For example, parts of Cielo’s predecessor, Roadrunner, are now used at Carnegie Mellon University in Pittsburgh and at the New Mexico Consortium research center. Depending on the architecture, about 10 percent of a supercomputer can be reused. For Cielo, a few of its 90 cabinets can be put to work elsewhere at Los Alamos or at other institutions, says Lujan. The rest will end up as scrap.

According to Lujan, while the life of a supercomputer is short, the current retirement process is cost-effective. 

“You have to put a price tag on science,” says Cori’s designer, Doerfler. “The machine needs to evolve in order to keep up with these calculations… they all eventually become obsolete.”


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

11 min read
A plate of spaghetti made from code
Illustration: Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed: time-to-market pressures almost guarantee that their software contains more bugs than it otherwise would.
