IBM’s Sequoia Crowned King of Supercomputers

With more than 16 petaflops, it soundly beats the previous champion


The Sequoia supercomputer, a system built by IBM for the U.S. Department of Energy’s Lawrence Livermore National Laboratory, in California, is now the most powerful supercomputer on Earth, according to rankings released today. It led the list, which ranks the world’s supercomputers according to a standard software benchmark, delivering 16.32 petaflops (a petaflop is a thousand trillion floating point operations per second) using 1,572,864 processor cores. It marks the first time since November 2009 that a U.S. supercomputer has topped the charts.
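As a quick sanity check on those figures (the per-core number below is derived from the article's totals, not stated in it), dividing the benchmark result by the core count gives the sustained performance of each core:

```python
# Back-of-the-envelope check of Sequoia's reported benchmark numbers.
total_flops = 16.32e15   # 16.32 petaflops; 1 petaflop = 10**15 flops
cores = 1_572_864        # processor cores used in the run

per_core = total_flops / cores  # sustained flops per core
print(f"{per_core / 1e9:.1f} gigaflops per core")  # prints "10.4 gigaflops per core"
```

In other words, each core sustained a little over 10 gigaflops on the benchmark.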

The IBM machine made use of the company’s BlueGene/Q computing system, which features 18-core processors based on the PowerPC architecture. Overall, IBM systems had a good showing, accounting for 47.5 percent of the computing power in the top 500 list, easily outpacing its next nearest competitor, Hewlett-Packard.

Sequoia’s nearest competitor, Fujitsu’s K computer, topped the charts during 2011. It managed 10.51 petaflops using 705,024 cores. It was followed by another U.S. system, the Mira supercomputer, also an IBM machine, which pulled 8.1 petaflops with 786,432 cores.

European computers had a good showing, with two German machines and the first Italian top 10 system on the list, as well as France grabbing the number 9 spot with its homebrew Bull supercomputer.

Meanwhile, China’s Tianhe-1A took number five, and the Nebulae system, in Shenzhen, came in at number 10.


Rank / Site / Computer

1. Lawrence Livermore National Laboratory, United States
   Sequoia - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom
2. RIKEN Advanced Institute for Computational Science (AICS), Japan
   K computer, SPARC64 VIIIfx 2.0GHz, Tofu interconnect
3. DOE/SC/Argonne National Laboratory, United States
   Mira - BlueGene/Q, Power BQC 16C 1.60GHz, Custom
4. Leibniz Rechenzentrum, Germany
   SuperMUC - iDataPlex DX360M4, Xeon E5-2680 8C 2.70GHz, Infiniband FDR
5. National Supercomputing Center in Tianjin, China
   Tianhe-1A - NUDT YH MPP, Xeon X5670 6C 2.93 GHz, NVIDIA 2050
6. DOE/SC/Oak Ridge National Laboratory, United States
   Jaguar - Cray XK6, Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA 2090
7. CINECA, Italy
   Fermi - BlueGene/Q, Power BQC 16C 1.60GHz, Custom
8. Forschungszentrum Juelich (FZJ), Germany
   JuQUEEN - BlueGene/Q, Power BQC 16C 1.60GHz, Custom
9. CEA/TGCC-GENCI, France
   Curie thin nodes - Bullx B510, Xeon E5-2680 8C 2.700GHz, Infiniband QDR
10. National Supercomputing Centre in Shenzhen (NSCS), China
   Nebulae - Dawning TC3600 Blade System, Xeon X5650 6C 2.66GHz, Infiniband QDR, NVIDIA 2050



Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

[Illustration: a plate of spaghetti made from code. Credit: Shira Inbar]

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures would almost guarantee that their software contained more bugs than it otherwise would.
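One common source of those hidden flaws is shared mutable state, which is exactly what functional programming discourages. A minimal sketch of the contrast (the function names and data here are illustrative, not from the article):

```python
# A shortcut: the function mutates the list the caller passed in.
def apply_discount_mut(prices, discount):
    for i in range(len(prices)):
        prices[i] -= discount  # silently changes the caller's data
    return prices

# The functional alternative: same inputs always give the same output,
# and the caller's list is left untouched.
def apply_discount_pure(prices, discount):
    return [p - discount for p in prices]

original = [10.0, 20.0]
discounted = apply_discount_pure(original, 1.0)
print(original)    # prints [10.0, 20.0] -- unchanged
print(discounted)  # prints [9.0, 19.0]
```

The mutating version works fine in a demo, but once other code holds a reference to the same list, its behavior becomes a surprise waiting for the maintenance phase.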
