Exascale supercomputers: Can't get there from here?

Today Darpa released a report I've been hearing about for months concerning whether and how we could make the next big leap in supercomputing: exascale computing, a 1000x increase over today's machines. Darpa was particularly interested in whether it could be done by 2015.

With regard to whether it could be done by 2015, the answer, according to my read of the executive summary, is a qualified no.

In its own words, here's what the study was after:

The objectives given the study were to understand the course of mainstream computing technology, and determine whether or not it would allow a 1,000X increase in the computational capabilities of computing systems by the 2015 time frame. If current technology trends were deemed as not capable of permitting such increases, then the study was also charged with identifying where were the major challenges, and in what areas may additional targeted research lay the groundwork for overcoming them.

The study was led by Peter Kogge, an IEEE Fellow and professor of computer science and engineering at the University of Notre Dame. (We'll be talking to him next week about the study for further coverage in IEEE Spectrum.) And it had contributions from some of the profession's leading lights, including Stanford's William Dally, HP's Stanley Williams, Micron's Dean Klein, Stanford's Kunle Olukotun, Georgia Tech's Rao Tummala, Intel's Jim Held, and Katherine Yelick (whom I include in this list not because I know who she is, but because she lectured about the "Berkeley Dwarfs").

Darpa's helpers seem to have reached the conclusion that current technology trends will not allow for exascale computing. That's summed up pretty neatly in this graph, which clearly shows that the trend line in computer performance undershoots exascale in 2015 by an appreciable amount:

[Chart: projected supercomputer performance (Gflops) versus the exascale target]

The group found four areas where "current technology trends are simply insufficient" to get to exascale. The first, and the one they deemed most pervasive, is energy and power. The Darpa group was unable to come up with any combination of mature technologies that could deliver exascale performance at a reasonable power level:

[Chart: projected energy efficiency (Gflops per watt)]

The key, they found, is the power needed not to compute but to move data. Data has to travel across interconnects, and they found that even with some really cool emerging technology it still costs 1-3 picojoules for a bit to cross just one interconnect level (say, from chip to board or from board to rack). At exascale data rates, that adds up to 10-30 MW (the equivalent of 167 000 to 500 000 60-watt light bulbs) per level. Eeesh.
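Here's a minimal back-of-envelope sketch, in Python, of how a per-bit figure like that scales up. The assumption that an exaflop machine moves roughly one byte of data per operation across each interconnect level is mine, not the report's:

```python
# Back-of-envelope estimate of interconnect power at exascale.
# Assumption (mine, not the report's): an exaflop machine moves roughly
# one byte of data per operation across a single interconnect level.

EXAFLOP = 1e18        # operations per second
BITS_PER_OP = 8       # ~1 byte of traffic per operation (assumed)
PJ = 1e-12            # joules per picojoule

for energy_pj in (1, 3):                      # 1-3 pJ per bit per level
    bits_per_second = EXAFLOP * BITS_PER_OP
    watts = bits_per_second * energy_pj * PJ
    print(f"{energy_pj} pJ/bit -> {watts / 1e6:.0f} MW per interconnect level")
```

Under those assumptions the answer lands right in that 10-30 MW ballpark for every level the data has to cross, which is why the study's authors flagged data movement, not arithmetic, as the power killer.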

The other three problems are memory and storage (how to handle on the order of a billion 1 GB DRAM chips), concurrency and locality (how to write a program that can handle a billion threads at once), and resiliency (how to prevent and recover from crashes).
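To see where those billion-unit figures come from, here's a quick sketch under my own assumptions (roughly an exabyte of main memory and cores running at about a gigahertz); the report's actual configurations vary:

```python
# Rough arithmetic behind the memory and concurrency figures.
# Assumptions (mine): ~1 exabyte of main memory built from 1 GB DRAM parts,
# and cores that each retire about one operation per nanosecond.

EXABYTE = 2**60       # bytes
GIGABYTE = 2**30      # bytes

dram_chips = EXABYTE // GIGABYTE
print(f"1 GB DRAM chips for an exabyte: {dram_chips:,}")        # ~1 billion

EXAFLOP = 1e18        # operations per second
OPS_PER_THREAD = 1e9  # ~1 GHz, one operation per cycle (assumed)
threads = EXAFLOP / OPS_PER_THREAD
print(f"Threads needed to sustain an exaflop: {threads:,.0f}")  # ~1 billion
```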

These are equally interesting, but the power problem is, I think, what much of today's computing work really boils down to. Solve that, and things will look a lot sunnier for everything from high-performance computing to embedded sensors.

The full (297-page) Darpa Exascale Computing report is here.

(In the November issue of IEEE Spectrum, watch for a cool simulation that Sandia computer architects did to show another bump in the road to future supercomputers. Their simulations show that as the multicore phenomenon advances in the processor industry, some very important applications will start performing worse.)
