Sequoia Supercomputer Tops Graph500 Rankings Again

Sequoia unmoved from top spot on Graph500's most powerful supercomputers


Top500 is the best-known list that ranks the supercomputers of the world, but what if there was another way to rate and review these mighty machines? On Tuesday, the new Graph500 rankings were revealed at the International Supercomputing Conference in Leipzig, Germany, and Lawrence Livermore National Laboratory's Sequoia supercomputer was shown to have prevailed at No. 1—unmoved from its spot since the list was last released in November 2012. Topping Graph500 means that Sequoia is still the world's most efficient at processing extremely vast (petabyte and exabyte-size) data sets. It also means that some computer scientists could view this system—and not China's Tianhe-2, which capped the recently released Top500 list—as the world's most powerful supercomputer.

For two decades, the Top500 list has ranked the world's top supercomputers by how many floating-point operations each machine can process per second—basically judging them in terms of their raw number-crunching power. But computer scientists have been increasingly putting these machines to work analyzing massive data sets instead of executing more traditional modeling and simulation tasks. This changed the supercomputing game. At Sandia National Laboratories in Albuquerque, N.M., then senior researcher Richard Murphy, along with several colleagues, noticed this trend and used it as the basis for compiling a complementary list they started in November 2010: the Graph500.

Murphy and his cohorts believe that high-performance computers should be appraised not only by measuring how fast a machine can solve equations, but also by how quickly it can sift through its entire memory. 

In Top500, supercomputers are ranked based on Linpack, a software package of calculation speed tests developed in 1974. For the Graph500 benchmark, each supercomputer is given a massive set of data to crunch, called a graph. A graph consists of several interconnected sets of data, with vertices and edges, similar to what you might imagine a map of your Facebook network would look like. Each user would be represented by a vertex, and each connection between two users by an edge. Starting with one vertex, a supercomputer is charged with discovering all other vertices in the graph by following each connection (or edge). How fast it can accomplish this task determines how high it ranks on Graph500.

This also explains why the units of measure in the two lists are different: In the Top500 list, judges rate entrants in petaflops (quadrillions of floating-point operations per second). In Graph500, "gigateps" (billions of traversed edges per second) are the established currency.
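The task described above is essentially a breadth-first search, and the traversed-edges-per-second figure falls out of timing it. The sketch below is only an illustration of the idea, not the Graph500 reference code: the function name bfs_teps and the toy graph are invented for this example, and the official benchmark's rules for which edges count differ in detail.

```python
import time
from collections import deque

def bfs_teps(adj, source):
    """Breadth-first search from `source`.

    `adj` maps each vertex to a list of its neighbors (users and their
    connections, in the Facebook analogy). Returns the discovered vertices,
    the number of edges examined, and a rough TEPS rate.
    """
    parent = {source: source}      # every discovered vertex remembers who found it
    edges_traversed = 0
    start = time.perf_counter()
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            edges_traversed += 1   # each edge followed counts as "traversed"
            if w not in parent:    # first time we reach w: discover it
                parent[w] = v
                queue.append(w)
    elapsed = time.perf_counter() - start
    return parent, edges_traversed, edges_traversed / elapsed

# A toy "social network": 4 users, each connected to two others.
graph = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2],
}
parents, edges, teps = bfs_teps(graph, 0)
print(len(parents), edges)  # 4 vertices discovered, 8 directed edges examined
```

A machine like Sequoia runs this kind of traversal on graphs with trillions of edges distributed across more than a million cores, which is where the "giga" in gigateps comes from.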

Sequoia, Graph500's No. 1 machine, traversed 15 363 billion edges per second, unchanged from November 2012. This round, IBM's BlueGene/Q systems clearly dominate the Graph500. Eight out of the top 10 supercomputers on the list are BlueGene/Q models. Another interesting point is that while China's Tianhe-2 outpaces its competition in the Top500 list by far—executing almost twice as many petaflops as the next contender—it places sixth in Graph500.

The list is called Graph500, but the 500 is "aspirational," Murphy told IEEE Spectrum in 2010. The list has yet to live up to its name and attract 500 machines, though this summer's list saw a slight uptick in entrants for a final tally of 142 supercomputers.

The top of the June 2013 Graph500 list (the five machines at 1 427 gigateps are tied):

Rank | System | Installation Site | Gigateps
1 | Sequoia (IBM - BlueGene/Q) | Lawrence Livermore National Laboratory (USA) | 15 363
2 | (IBM - BlueGene/Q) | Argonne National Laboratory (USA) | 14 328
3 | (IBM - BlueGene/Q) | Forschungszentrum Juelich (Germany) | 5 848
4 | K computer (Fujitsu - Custom supercomputer) | RIKEN Advanced Institute for Computational Science (Japan) | 5 524.12
5 | Fermi (IBM - BlueGene/Q) | CINECA (Italy) | 2 567
6 | Tianhe-2 | Changsha (China) | 2 061.48
7 (tie) | Turing (IBM - BlueGene/Q) | | 1 427
7 (tie) | Blue Joule (IBM - BlueGene/Q) | Science and Technology Facilities Council - Daresbury Laboratory (UK) | 1 427
7 (tie) | DIRAC (IBM - BlueGene/Q) | University of Edinburgh (UK) | 1 427
7 (tie) | Zumbrota (IBM - BlueGene/Q) | EDF R&D (France) | 1 427
7 (tie) | Avoca (IBM - BlueGene/Q) | Victorian Life Sciences Computation Initiative (Australia) | 1 427

Photo: IBM


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises


You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures would almost guarantee that their software would contain more bugs than it otherwise would.
