The Supercomputer Goes Personal

The data

This story was corrected on 28 April 2009.


Video gamers and graphic artists have funded a revolution in high-performance computing. The same massively parallel chips that figure out exactly the right color for each pixel in Street Fighter IV are now being used to calculate protein folding, predict climate change, and crack the encryption of hitherto-secure Web sites.

In October 2007, astrophysicist Gaurav Khanna, of the University of Massachusetts Dartmouth, lashed together eight Sony PlayStation 3s to continue the black-hole simulations he had previously run on rented time on traditional supercomputers funded by the National Science Foundation. He got about 1.2 teraflops; the hardware, bought at retail from, say, Best Buy, would cost about US $3200 in PS3s. Twenty years earlier, you would have needed more than 500 Cray X-MPs, at a cost of at least $78 billion, to break the teraflops barrier.
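The price/performance gap is worth spelling out. The following back-of-the-envelope sketch uses only the figures quoted above; the Cray numbers are the article's estimates, not independently sourced:

```python
# Back-of-the-envelope price/performance, using the figures quoted in the text.
# All numbers come from the article; nothing here is independently sourced.

ps3_cluster_cost = 3200.0    # eight PS3s at retail, in US dollars
ps3_cluster_gflops = 1200.0  # about 1.2 teraflops

cray_fleet_cost = 78e9       # "at least $78 billion" for 500-plus Cray X-MPs
cray_fleet_gflops = 1000.0   # the 1-teraflops barrier

ps3_dollars_per_gflops = ps3_cluster_cost / ps3_cluster_gflops
cray_dollars_per_gflops = cray_fleet_cost / cray_fleet_gflops

print(f"PS3 cluster: ${ps3_dollars_per_gflops:.2f} per gigaflops")
print(f"Cray X-MPs:  ${cray_dollars_per_gflops:,.0f} per gigaflops")
print(f"Ratio: roughly {cray_dollars_per_gflops / ps3_dollars_per_gflops:,.0f} to 1")
```

By these figures the PS3 cluster works out to under $3 per gigaflops, against tens of millions of dollars per gigaflops for the 1980s Cray fleet.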

Today, graphics titan Nvidia advertises its new workstation, the Tesla, as a “personal supercomputer.” It clusters four Nvidia C1060 processing boards, each of which unites 240 graphics cores to process instructions at nearly 1 teraflops. We calculate it as about 17 percent more cost-effective than Khanna’s PS3 solution, and a lot more elegant. Of course, neither is perfect. For one thing, the Tesla and the PS3 do single-precision floating-point calculations using four 8-bit bytes. [Correction: According to Andrew Humber of Nvidia, the company added double-precision support within its Tesla 10 series graphics-processing units.] IBM’s Roadrunner, by contrast, uses 64-bit floating-point numbers. But is the greater accuracy worth an additional $117 722 400?
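The single- versus double-precision distinction is easy to demonstrate. A 32-bit IEEE 754 float carries a 24-bit significand, so above 2^24 it can no longer represent every whole number; Python's standard `struct` module can round-trip a value through 32 bits to show the loss (a minimal sketch, not tied to any particular GPU):

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip x through a 32-bit IEEE 754 float, discarding extra precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Python's native floats are 64-bit, so this addition is exact:
print(16777216.0 + 1.0)              # 16777217.0

# In 32-bit precision, 2**24 + 1 is not representable and rounds back down:
print(as_float32(16777216.0 + 1.0))  # 16777216.0
```

In a long scientific simulation, millions of such silently dropped increments accumulate, which is why double precision matters to the supercomputing crowd.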

About the Author

PAUL WALLICH is a contributing editor based in Vermont whose workshop is filled with past and current do-it-yourself projects, several of which he has written about in recent months.

