A U.S. Machine Recaptures the Supercomputing Crown

Oak Ridge’s Summit is now the world's top-ranked supercomputer

Photo: Oak Ridge National Laboratory

Today, U.S. supercomputer advocates are cheering, because for the first time since 2012, a U.S. supercomputer—Oak Ridge National Laboratory’s newly installed Summit supercomputer—has been ranked No. 1 in performance, capturing the world crown back from China in the twice-yearly TOP500 assessment of supercomputers, which was announced at the ISC High Performance conference in Frankfurt.

The top five positions, ranked using the traditional HPL (High-Performance LINPACK) benchmark, went like this:

TOP500 Ranking of June 2018
Rank | Computer | Location | Performance (petaflops)
#1 | Summit | Oak Ridge National Laboratory, U.S.A. | 122.3
#2 | Sunway TaihuLight | National Supercomputing Center in Wuxi, China | 93.0
#3 | Sierra | Lawrence Livermore National Laboratory, U.S.A. | 71.6
#4 | Tianhe-2A | National Supercomputing Center in Guangzhou, China | 61.4
#5 | AI Bridging Cloud Infrastructure | National Institute of Advanced Industrial Science and Technology, Japan | 19.9
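
For a sense of where those petaflops figures come from: HPL times the solution of one huge, dense system of linear equations and divides a fixed operation count, roughly (2/3)n³ + 2n² for an n-by-n system, by the elapsed time. Here is a minimal Python sketch of that arithmetic; the problem size and runtime are illustrative assumptions chosen to land near Summit's score, not the machine's actual run parameters.

```python
# Minimal sketch of how an HPL (LINPACK) rating is computed: the benchmark
# solves a dense n-by-n linear system and credits a fixed flop count for it.
# The problem size and runtime below are illustrative assumptions, not the
# parameters of any real TOP500 run.

def hpl_petaflops(n: int, seconds: float) -> float:
    """HPL rating, in petaflops, for a dense solve of size n taking `seconds`."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard HPL operation count
    return flops / seconds / 1e15

# Hypothetical run: a 10-million-unknown system solved in 90 minutes.
print(f"{hpl_petaflops(n=10_000_000, seconds=90 * 60):.1f} petaflops")  # ~123.5
```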

Technology advocates in the United States who have been miffed by China’s dominance in recent years will be even more pleased with the High-Performance Conjugate Gradient (HPCG) results, an alternative computing benchmark now also being considered in these semiannual rankings.

Using HPCG, the top five supercomputers in the world are:

HPCG Ranking of June 2018
Rank | Computer | Location | Performance (petaflops)
#1 | Summit | Oak Ridge National Laboratory, U.S.A. | 2.93
#2 | Sierra | Lawrence Livermore National Laboratory, U.S.A. | 1.80
#3 | K computer | Riken Advanced Institute for Computational Science, Japan | 0.60
#4 | Trinity | Los Alamos National Laboratory, U.S.A. | 0.55
#5 | Piz Daint | Swiss National Supercomputing Centre, Switzerland | 0.49

Wait, wait. China doesn’t even make the top five here? And why are the numbers for floating-point operations per second so much lower?

That’s because the HPL benchmark is biased toward peak processor speed and processor count. It tests a computer’s ability to solve dense-matrix calculations, which aren’t representative of the sparse-matrix computations at the heart of many real-world problems. HPCG was devised to remedy these shortcomings.
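
To make the dense-versus-sparse distinction concrete, here is a toy Python sketch (not the actual benchmark codes) contrasting the two workload styles: an HPL-style dense solve next to an HPCG-style conjugate-gradient solve on a sparse matrix standing in for HPCG's stencil-based system.

```python
# Toy illustration of the two workload styles the benchmarks represent.
# This is a sketch for intuition, not the HPL or HPCG reference implementations.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 2000

# HPL-style: factor and solve a dense linear system (flops scale as n^3, with
# regular memory access that favors raw arithmetic throughput).
A_dense = np.random.rand(n, n) + n * np.eye(n)   # well-conditioned dense matrix
b = np.random.rand(n)
x_dense = np.linalg.solve(A_dense, b)

# HPCG-style: conjugate gradient on a sparse matrix (most entries are zero, so
# performance is limited by memory bandwidth and irregular data access).
A_sparse = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
x_sparse, info = cg(A_sparse, b)
assert info == 0   # 0 means the iteration converged
```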

The TOP500 organization notes that while HPL is relevant to many supercomputing applications, “HPCG is designed to exercise computational and data access patterns that more closely match a different and broad set of important applications, and to give incentive to computer system designers to invest in capabilities that will have impact on the collective performance of these applications.”

Translation: There’s more than one way to skin a supercomputer, and as the latest rankings demonstrate, the results can be very different depending on your metric for judging them.

Does this mean that the older HPL benchmark is now less important among experts, a change in attitude that Chinese supercomputer designers perhaps missed when they were at the drawing board?

Perhaps so, but it’s still a meaningful benchmark, according to Jack Wells, director of science for Oak Ridge’s Leadership Computing Facility. He points out that Titan, a different supercomputer at Oak Ridge, which achieved 17.6 petaflops on the HPL benchmark, has run real applications at greater than 20 petaflops. And applications that were considered for the prestigious Gordon Bell Prize—meaning ones of special significance—ran at “double-digit petaflop levels,” he says.

True, this benchmark, or any benchmark for that matter, makes it easier to assess supercomputer performance. And it makes it possible for funding agencies, national laboratories, and the industrial partners they hire to build these giant machines to set clear goals. But to me, this exercise seems to be getting harder and harder.

The TOP500 organization now produces three rankings: the traditional one based on the HPL benchmark, an alternative based on the HPCG benchmark, and the Green500 list, which rewards efficiency, measured in flops per watt of electricity used. By the last of these measures, Japanese supercomputers dominate, holding the top three positions.
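
The Green500 figure of merit is simple division: sustained LINPACK flops over the power drawn during the run. A quick sketch with made-up numbers:

```python
# Green500-style efficiency is sustained flops divided by power draw during the
# HPL run. The figures below are made-up assumptions, not any machine's numbers.
sustained_petaflops = 100.0   # hypothetical HPL result
power_megawatts = 10.0        # hypothetical average power draw during the run

gigaflops_per_watt = (sustained_petaflops * 1e6) / (power_megawatts * 1e6)
print(f"{gigaflops_per_watt:.1f} gigaflops per watt")   # 10.0
```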

So which ranking method is really the most meaningful?

That question is difficult to answer now and will become more difficult as time goes by. That could make it especially problematic to celebrate the construction of the first “exascale” supercomputer—something that has long been anticipated for the early 2020s. After all, the benchmark used to judge that milestone should be one that reflects either the most valuable or the most popular sorts of applications being run. What those will be some years down the road is unclear.

One very real possibility is that over the next few years, supercomputers will increasingly be used to run artificial neural networks, whose inference computations can often be carried out at relatively low precision. At lower precision, the GPUs in these machines can perform correspondingly more operations per second than they can when high precision is required. Indeed, viewed in that light, Summit had “broken the exascale barrier” even before it was completed, according to Oak Ridge’s director, Thomas Zacharia.
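
One way to square a 122-petaflop LINPACK score with talk of breaking the exascale barrier is to tally peak throughput by precision. The sketch below multiplies per-GPU peak rates by Summit's GPU count; the per-GPU figures are NVIDIA's published V100 peaks and the GPU count is Summit's commonly cited total, so treat them as approximate assumptions rather than measured results.

```python
# Back-of-the-envelope tally of how lower precision multiplies peak throughput.
# Per-GPU rates are NVIDIA's published V100 (SXM2) peaks and the GPU count is
# Summit's commonly cited total; treat all of these as approximate assumptions.
V100_PEAK_TERAOPS = {
    "FP64 (the precision HPL measures)": 7.8,
    "FP32": 15.7,
    "FP16 on tensor cores (typical for neural-network work)": 125.0,
}
SUMMIT_GPU_COUNT = 27_648   # 4,608 nodes x 6 GPUs per node

for precision, teraops_per_gpu in V100_PEAK_TERAOPS.items():
    system_exaops = teraops_per_gpu * SUMMIT_GPU_COUNT / 1e6   # tera -> exa
    print(f"{precision}: ~{system_exaops:.2f} exa-operations per second, peak")
```

By that rough accounting, reduced-precision work can run at exa-operation rates on hardware whose double-precision LINPACK result is measured in the low hundreds of petaflops.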

Funny how we all missed that.
