Today, U.S. supercomputer advocates are cheering, because for the first time since 2012, a U.S. supercomputer—Oak Ridge National Laboratory’s newly installed Summit supercomputer—has been ranked No. 1 in performance, capturing the world crown back from China in the twice-yearly TOP500 assessment of supercomputers, which was announced at the ISC High Performance conference in Frankfurt.
The top five positions, ranked using the traditional High Performance LINPACK (HPL) benchmark, went like this:
| Rank | System | Site | Petaflops |
|---|---|---|---|
| 1 | Summit | Oak Ridge National Laboratory, U.S.A. | 122.3 |
| 2 | Sunway TaihuLight | National Supercomputing Center in Wuxi, China | 93.0 |
| 3 | Sierra | Lawrence Livermore National Laboratory, U.S.A. | 71.6 |
| 4 | Tianhe-2A | National Supercomputing Center in Guangzhou, China | 61.4 |
| 5 | AI Bridging Cloud Infrastructure | National Institute of Advanced Industrial Science and Technology, Japan | 19.9 |
Technology advocates in the United States who have been miffed by China’s dominance in recent years will be even more pleased with the High Performance Conjugate Gradients (HPCG) results, an alternative computing benchmark now also being considered in these semiannual rankings.
Using HPCG, the top five supercomputers in the world are:
| Rank | System | Site | Petaflops |
|---|---|---|---|
| 1 | Summit | Oak Ridge National Laboratory, U.S.A. | 2.93 |
| 2 | Sierra | Lawrence Livermore National Laboratory, U.S.A. | 1.80 |
| 3 | K computer | Riken Advanced Institute for Computational Science, Japan | 0.60 |
| 4 | Trinity | Los Alamos National Laboratory, U.S.A. | 0.55 |
| 5 | Piz Daint | Swiss National Supercomputing Centre, Switzerland | 0.49 |
Wait, wait. China doesn’t even make the top five here? And why are the numbers for floating-point operations per second so much lower?
That’s because the HPL benchmark is biased toward peak processor speed and processor count. And the HPL benchmark tests the computer’s ability to solve so-called dense-matrix calculations, which aren’t representative of many “sparse” real-world problems. HPCG was devised to remedy these shortcomings.
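The dense-versus-sparse distinction can be made concrete with a toy sketch (illustrative only — this is neither benchmark, and the sizes are made up). A dense system, like those HPL solves, has a useful value in every matrix entry, so raw arithmetic dominates; a sparse operator of the kind HPCG targets, such as a tridiagonal stencil, has only a few nonzeros per row, so performance hinges on memory access rather than peak flops:

```python
import numpy as np

n = 200
rng = np.random.default_rng(0)

# Dense system, as in HPL: every entry matters, so the work is
# dominated by floating-point arithmetic on cache-friendly data.
A_dense = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant, well conditioned
x = np.linalg.solve(A_dense, np.ones(n))

# Sparse operator, as in many real simulations: a tridiagonal matrix
# has about 3n nonzeros out of n*n entries, so time is spent chasing
# scattered data through memory, not doing arithmetic.
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A_sparse = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
nonzeros = np.count_nonzero(A_sparse)

print(nonzeros, n * n)  # 598 40000: the vast majority of entries are zero
```

In a real sparse solver the zeros would never be stored at all (e.g., in compressed-row format), which is precisely why a machine tuned for dense peak flops can look far less impressive on HPCG.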
The TOP500 organization notes that while HPL is relevant to many supercomputing applications, “HPCG is designed to exercise computational and data access patterns that more closely match a different and broad set of important applications, and to give incentive to computer system designers to invest in capabilities that will have impact on the collective performance of these applications.”
Translation: There’s more than one way to skin a supercomputer, and as the latest rankings demonstrate, the results can be very different depending on your metric for judging them.
Does this mean that the older HPL benchmark is now less important among experts, a change in attitude that Chinese supercomputer designers perhaps missed when they were at the drawing board?
Perhaps so, but it’s still a meaningful benchmark according to Jack Wells, director of science for Oak Ridge’s Leadership Computing Facility. He points out that Titan, a different supercomputer at Oak Ridge, which achieved 17.6 petaflops on the HPL benchmark, has run real applications at greater than 20 petaflops. And applications that were considered for the prestigious Gordon Bell Prize—meaning ones of special significance—ran at “double-digit petaflop levels,” he says.
True, this benchmark, or any benchmark for that matter, makes it easier to assess supercomputer performance. And it makes it possible for funding agencies, national laboratories, and the industrial partners they hire to build these giant machines to set clear goals. But to me, this exercise seems to be getting harder and harder.
The TOP500 organization now produces three rankings: the traditional one based on the HPL benchmark, an alternative based on the HPCG benchmark, and the “Green500” list, which rewards efficiency—the number of flops per watt of electricity used. By the last of these measures, Japanese supercomputers dominate, holding the top three positions.
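The efficiency metric itself is simple arithmetic: divide a machine's sustained benchmark score by the power it drew during the run. A quick sketch with made-up numbers (not any real system's figures) shows how a Green500-style gigaflops-per-watt value is derived:

```python
# Illustrative arithmetic only, with hypothetical numbers.
petaflops = 100.0   # assumed sustained HPL score, in petaflops
megawatts = 8.0     # assumed power draw during the run, in megawatts

# petaflops -> flops (1e15), megawatts -> watts (1e6), result -> gigaflops (1e9)
gflops_per_watt = (petaflops * 1e15) / (megawatts * 1e6) / 1e9
print(gflops_per_watt)  # 12.5
```

The point of the metric is that two machines with identical HPL scores can land far apart on the Green500 list if one needs twice the electricity to get there.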
So which ranking method is really the most meaningful?
That question is difficult to answer now and will become more difficult as time goes by. That could make it especially problematic to celebrate the construction of the first “exascale” supercomputer—something that has long been anticipated for the early 2020s. After all, the benchmark used to judge that milestone should be one that reflects either the most valuable or the most popular sorts of applications being run. What those will be some years down the road is unclear.
One very real possibility is that over the next few years, supercomputers will be increasingly used to compute results for artificial neural networks, ones for which the inference computations can often be run at relatively low precision levels. And when using lower precision, the GPUs in these machines can perform many more operations per second than they can when high precision is required. Indeed, viewed in that light, Summit had “broken the exascale barrier” even before it was completed, according to Oak Ridge’s director, Thomas Zacharia.
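The trade that makes those lower-precision operation counts possible is accuracy. A half-precision (float16) number occupies a quarter the memory of a double (float64), so hardware built for it can move and multiply many more values per second, but it resolves only about three decimal digits. A minimal sketch of what gets lost:

```python
import numpy as np

# Illustrative only: what reduced precision gives up.
# In float64, adding 1e-4 to 1.0 is easily representable.
x64 = np.float64(1.0) + np.float64(1e-4)

# In float16, 1e-4 falls below the spacing between representable
# numbers near 1.0 (machine epsilon ~9.77e-4), so the addition vanishes.
x16 = np.float16(1.0) + np.float16(1e-4)

print(x64)                        # 1.0001
print(x16)                        # 1.0
print(np.finfo(np.float16).eps)   # ~0.000977
```

For many neural-network inference workloads that loss is tolerable, which is why mixed-precision operation counts can soar past what the same hardware achieves in full double precision.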
Funny how we all missed that.
David Schneider is a senior editor at IEEE Spectrum. His beat focuses on computing, and he contributes frequently to Spectrum's Hands On column. He holds a bachelor's degree in geology from Yale, a master's in engineering from UC Berkeley, and a doctorate in geology from Columbia.