The June 2011 ranking of the top 500 supercomputers was unveiled at a conference in Hamburg, Germany today, and a Japanese machine is topping the charts for the first time in seven years. If historical trends hold true, it could keep the top spot for some time. (You can find all the specs for the number one computer—the Fujitsu K Computer—here).
The rankings are produced by Top500 and are based on computing speed, measured in the number of calculations that a computer can execute in a second when running a standardized program. In the case of the K Computer, that number is 8.16 quadrillion calculations, or 8.16 petaflops. It's more than three times as fast as the machine that topped the last list, released in November 2010, a Chinese supercomputer manufactured by NUDT that computes at a speed of 2.57 petaflops.
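For a concrete sense of the gap, the ratio of the two headline speeds works out to a bit more than three. This is just arithmetic on the figures quoted above (the previous number one being the NUDT machine):

```python
# Headline speeds of the two most recent #1 machines, in petaflops
# (1 petaflop = 10**15 floating-point calculations per second).
k_computer = 8.16   # Fujitsu K Computer, June 2011 list
prev_top   = 2.57   # NUDT machine, November 2010 list

speedup = k_computer / prev_top
print(f"K Computer is {speedup:.2f}x faster")  # roughly 3.18x
```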
That's a big jump: more than twice the average increase and the second biggest since the Top500 list began in 1993, according to my analysis. The largest jump, a factor-of-five increase between November 2001 and June 2002, was also made by a Japanese machine. That supercomputer, the Earth-Simulator, held the number one spot for two and a half years. The chart below, from Top500.org, shows how the speeds of the number 1 and number 500 machines have changed over time.
I called Jack Dongarra, one of the curators of the list and the developer of the software program used to measure computing speed, to talk about some of the historical trends of supercomputing. The first thing I was interested in was the staircase-like shape of the curve representing the speed of the fastest supercomputer (shown in red above). As Dongarra puts it, "things will be more or less stable and flat, and then there will be a big bump up." Why the fits and starts? He attributes it to the way that technology develops—breakthroughs followed by minor improvements—and the fact that new sources of funding open up over time.
If the pattern continues, Japan's big jump should secure the K Computer's hold on the Top500 crown for the next two years or so, just as the Earth-Simulator's did. But Dongarra thinks that things might be different this time around. The United States has three machines in the works (at the University of Illinois, at Lawrence Livermore National Laboratory, and at Oak Ridge National Laboratory), each designed for speeds of at least 10 petaflops and slated for construction in 2012. "All are contingent on funding—each machine will cost maybe $100 to $200 million—that hasn't been put in place fully," he says.
In spite of the historical bumps, it's clear that the fastest supercomputer is getting faster at an exponential rate (which is why it looks almost like a straight line on the log-scale graph). "This is Moore's Law being applied," Dongarra says.
Actually, it's faster than that. IEEE Medal of Honor winner Gordon Moore's eponymous law states that the number of transistors you can squeeze into the same area of silicon will double every two years, but the speed of the fastest supercomputers has doubled about every 14 months. The reason is that not only are processors getting faster, but supercomputers are also using more and more of them, which they can do because the interconnects that allow processors to communicate amongst themselves are improving, according to Dongarra.
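A back-of-the-envelope calculation shows how much those two doubling periods diverge over time. The 24- and 14-month figures are the ones cited above:

```python
def growth_factor(months, doubling_period):
    """Factor by which speed grows over `months`, given a doubling period."""
    return 2 ** (months / doubling_period)

decade = 120  # months

moore_pace = growth_factor(decade, 24)   # transistor counts: 2^5 = 32x
top500_pace = growth_factor(decade, 14)  # fastest supercomputer: ~380x

print(f"Moore's law over a decade:  {moore_pace:.0f}x")
print(f"Top500 trend over a decade: {top500_pace:.0f}x")
```

A factor of roughly 380 versus 32 over ten years: that extra speed is what the growing processor counts buy.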
Interestingly, this same fast rate of doubling applies to the number 500 computer as well, suggesting that advances are trickling down. The two curves—for the number 1 and for the number 500 computers—are parallel on the graph, offset by about seven years.
This means that, while Japan's K Computer is blisteringly fast today, by the end of the decade there's a good chance it won't make the Top500 at all. (Of course, that assumes supercomputers can breach the exaflop barrier, a task some supercomputer specialists see as extremely impractical if not impossible.)
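That trajectory can be sketched with the numbers already given: a roughly 14-month doubling period and a roughly seven-year lag between the number 1 and number 500 curves. Both figures are the ones above, and this is only a rough extrapolation:

```python
DOUBLING_MONTHS = 14   # observed doubling period for Top500 speeds
OFFSET_YEARS = 7       # the #500 curve trails the #1 curve by about this much

# Over the seven-year offset, the list undergoes this many doublings...
doublings = OFFSET_YEARS * 12 / DOUBLING_MONTHS   # 6.0
# ...so the #1 machine runs roughly 2^6 = 64x faster than the #500 machine,
# and today's #1 speed becomes the list's cutoff around 2018.
ratio = 2 ** doublings

print(f"Implied #1-to-#500 speed ratio: {ratio:.0f}x")
```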