A U.S. Machine Recaptures the Supercomputing Crown

Oak Ridge’s Summit is now the world's top-ranked supercomputer


Photo: Oak Ridge National Laboratory

Today, U.S. supercomputer advocates are cheering. For the first time since 2012, a U.S. machine, Oak Ridge National Laboratory’s newly installed Summit supercomputer, has been ranked No. 1 in performance, capturing the world crown back from China in the twice-yearly TOP500 assessment of supercomputers announced at the ISC High Performance conference in Frankfurt.

The top five positions, ranked using the traditional High-Performance Linpack (HPL) benchmark, went like this:

TOP500 Ranking of June 2018

Rank | Computer | Location | Performance (petaflops)
#1 | Summit | Oak Ridge National Laboratory, U.S.A. | 122.3
#2 | Sunway TaihuLight | National Supercomputing Center in Wuxi, China | 93.0
#3 | Sierra | Lawrence Livermore National Laboratory, U.S.A. | 71.6
#4 | Tianhe-2A | National Supercomputing Center in Guangzhou, China | 61.4
#5 | AI Bridging Cloud Infrastructure | National Institute of Advanced Industrial Science and Technology, Japan | 19.9

Technology advocates in the United States who have been miffed by China’s dominance in recent years will be even more pleased with the results of the High Performance Conjugate Gradients (HPCG) benchmark, an alternative measure now also tracked in these semiannual rankings.

Using HPCG, the top five supercomputers in the world are:

HPCG Ranking of June 2018

Rank | Computer | Location | Performance (petaflops)
#1 | Summit | Oak Ridge National Laboratory, U.S.A. | 2.93
#2 | Sierra | Lawrence Livermore National Laboratory, U.S.A. | 1.80
#3 | K computer | RIKEN Advanced Institute for Computational Science, Japan | 0.60
#4 | Trinity | Los Alamos National Laboratory, U.S.A. | 0.55
#5 | Piz Daint | Swiss National Supercomputing Centre, Switzerland | 0.49

Wait, wait. China doesn’t even make the top five here? And why are the numbers for floating-point operations per second so much lower?

That’s because the HPL benchmark is biased toward peak processor speed and processor count, and because it tests a computer’s ability to solve dense-matrix problems, which aren’t representative of the many “sparse” problems encountered in real-world applications. HPCG was devised to remedy these shortcomings.
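
To make the distinction concrete, here is a minimal Python sketch, my own illustration rather than the actual benchmark codes, contrasting the two kinds of kernels: a dense solve of the sort HPL times, and a sparse conjugate-gradient solve of the sort HPCG times. It assumes NumPy and SciPy are available.

```python
# Illustrative only: toy stand-ins for the kernels that HPL and HPCG
# exercise, not the benchmarks themselves.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 2000
b = np.random.rand(n)

# HPL-style work: solve a dense linear system. The O(n^3) arithmetic keeps
# the floating-point units busy, so peak flops dominate the runtime.
A_dense = np.random.rand(n, n) + n * np.eye(n)   # diagonally dominant, well conditioned
x_dense = np.linalg.solve(A_dense, b)

# HPCG-style work: conjugate gradients on a sparse matrix. Each iteration is
# mostly sparse matrix-vector products, so memory bandwidth, not peak flops,
# tends to be the bottleneck.
A_sparse = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
x_sparse, info = cg(A_sparse, b, atol=1e-8)

print("dense residual :", np.linalg.norm(A_dense @ x_dense - b))
print("sparse residual:", np.linalg.norm(A_sparse @ x_sparse - b), "cg status:", info)
```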

The TOP500 organization notes that while HPL is relevant to many supercomputing applications, “HPCG is designed to exercise computational and data access patterns that more closely match a different and broad set of important applications, and to give incentive to computer system designers to invest in capabilities that will have impact on the collective performance of these applications.”

Translation: There’s more than one way to skin a supercomputer, and as the latest rankings demonstrate, the results can be very different depending on your metric for judging them.

Does this mean that the older HPL benchmark now carries less weight among experts, a change in attitude that Chinese supercomputer designers perhaps missed when they were at the drawing board?

Perhaps so, but it’s still a meaningful benchmark, according to Jack Wells, director of science for Oak Ridge’s Leadership Computing Facility. He points out that Titan, another supercomputer at Oak Ridge, which achieved 17.6 petaflops on the HPL benchmark, has run real applications at greater than 20 petaflops. And applications considered for the prestigious Gordon Bell Prize, meaning ones of special significance, ran at “double-digit petaflop levels,” he says.


True, this benchmark, or any benchmark for that matter, makes it easier to assess supercomputer performance. And it gives funding agencies, national laboratories, and the industrial partners they hire to build these giant machines clear goals to aim for. But to me, this exercise seems to be getting harder and harder.

The TOP500 organization now produces three rankings: the traditional one based on the HPL benchmark, an alternative based on the HPCG benchmark, and the Green500 list, which rewards energy efficiency, measured in flops per watt of electricity used. By the last of these measures, Japanese supercomputers dominate, holding the top three positions.
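
For a sense of what the Green500 metric measures, here is a back-of-the-envelope calculation using made-up round numbers rather than the figures for any actual machine:

```python
# Hypothetical machine: 100 petaflops of sustained performance at 10 megawatts.
performance_flops = 100e15   # 100 petaflops (illustrative, not a real system)
power_watts = 10e6           # 10 megawatts (illustrative)

efficiency = performance_flops / power_watts
print(f"{efficiency / 1e9:.1f} gigaflops per watt")   # prints 10.0
```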

So which ranking method is really the most meaningful?

That question is difficult to answer now and will only become more difficult as time goes by. That could make it especially problematic to celebrate the construction of the first “exascale” supercomputer, one capable of 10^18 floating-point operations per second, or 1,000 petaflops, something that has long been anticipated for the early 2020s. After all, the benchmark used to judge that milestone should be one that reflects either the most valuable or the most popular sorts of applications being run. What those will be some years down the road is unclear.

One very real possibility is that over the next few years, supercomputers will increasingly be used to run artificial neural networks, whose inference computations can often be carried out at relatively low precision. At lower precision, the GPUs in these machines can perform many more operations per second than they can when high precision is required. Indeed, viewed in that light, Summit had “broken the exascale barrier” even before it was completed, according to Oak Ridge’s director, Thomas Zacharia.
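
The arithmetic behind such a claim looks roughly like the sketch below; the peak and speedup figures here are assumptions chosen for illustration, not Summit’s published specifications.

```python
# Illustrative arithmetic only; both numbers below are assumptions, not
# official Summit specifications.
fp64_peak_flops = 200e15   # assumed double-precision peak: 200 petaflops
fp16_speedup = 8           # assumed ratio of half- to double-precision throughput

fp16_peak_ops = fp64_peak_flops * fp16_speedup
print(f"low-precision peak: {fp16_peak_ops / 1e18:.1f} exaops")   # prints 1.6
```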

Funny how we all missed that.
