Two Different Top500 Supercomputing Benchmarks Show Two Different Top Supercomputers

In the new TOP500 Supercomputer Rankings, who’s number one depends on which benchmark you use


The 50th TOP500 semi-annual ranking of the world’s supercomputers was announced earlier today. The topmost positions are largely unchanged from those announced last June, with China’s Sunway TaihuLight and Tianhe-2 supercomputers still taking the #1 and #2 positions, and the Swiss Piz Daint supercomputer still at #3. The only change since June, really, to the handful of computers at the very top of the list is that the one U.S. computer to make the top-five cut, Oak Ridge National Laboratory’s Titan, slipped from #4 to #5, edged out by a Japanese supercomputer called Gyoukou.

The top 10 now look like this:

Top500.org’s November 2017 ranking
Position | Name | Country | Teraflops | Power (kW)
1 | Sunway TaihuLight | China | 93,015 | 15,371
2 | Tianhe-2 | China | 33,863 | 17,808
3 | Piz Daint | Switzerland | 19,590 | 2,272
4 | Gyoukou | Japan | 19,136 | 1,350
5 | Titan | United States | 17,590 | 8,209
6 | Sequoia | United States | 17,173 | 7,890
7 | Trinity | United States | 14,137 | 3,844
8 | Cori | United States | 14,015 | 3,939
9 | Oakforest-PACS | Japan | 13,555 | 2,719
10 | K Computer | Japan | 10,510 | 12,660

What’s more interesting to me is not this usual TOP500 ranking but a second ranking that the TOP500 organization has recently begun tracking, based on a different software benchmark called High Performance Conjugate Gradients, or HPCG. This relatively new benchmark is the brainchild of Jack Dongarra, one of the founders of the TOP500 ranking, and Piotr Luszczek (both of the University of Tennessee), along with Michael Heroux of Sandia National Laboratories.

Why was there a need for a new benchmark? The usual ranking is determined by how fast supercomputers can run the LINPACK benchmark (in its modern form called HPL, for High-Performance LINPACK), which times the solution of a large, dense system of linear equations. The LINPACK benchmarks originated in the late 1970s and began being applied to supercomputers in the early 1990s; the first TOP500 list, based on a LINPACK benchmark, came out in 1993. Initially, the LINPACK benchmarks charted how fast computers could run certain FORTRAN code; the newer HPL benchmark measures the execution time of code written in C.
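
To make the measurement concrete, here is a minimal sketch of what HPL-style timing looks like, written in Python with NumPy rather than the benchmark’s actual C-and-MPI code. It times a dense solve of Ax = b and converts HPL’s standard operation count of 2/3·n³ + 2n² into a flop rate; the problem size, function name, and residual check are all illustrative choices, not part of the real benchmark.

```python
import time
import numpy as np

def mini_hpl(n=4000, seed=0):
    """Time a dense solve of Ax = b, the core operation HPL measures.

    HPL credits (2/3)*n**3 + 2*n**2 floating-point operations for the
    LU factorization with partial pivoting plus the triangular solves.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)  # LAPACK under the hood: LU-factorize, then solve
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    gflops = flops / elapsed / 1e9

    # A scaled residual, similar in spirit to HPL's correctness check
    residual = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
    return gflops, residual

if __name__ == "__main__":
    gflops, residual = mini_hpl()
    print(f"~{gflops:.1f} Gflop/s, scaled residual {residual:.2e}")
```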

Experts have long understood that the LINPACK benchmark is biased toward raw processor speed and processor count, and that it misses important constraints such as the bandwidth of the computer’s internal data network. It also tests the computer’s ability to solve so-called dense-matrix calculations, which aren’t representative of the “sparse” problems common in real-world applications. HPCG was devised to remedy these shortcomings.
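
To show what that sparse workload looks like, here is a toy conjugate-gradient solve in Python with SciPy. It builds a two-dimensional Laplacian, a classic sparse matrix with at most five nonzeros per row, so the matrix-vector products inside CG are limited by memory bandwidth rather than arithmetic speed. The real HPCG benchmark runs a preconditioned conjugate-gradient solve on a three-dimensional grid, so the grid size and tolerances below are purely illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Build a sparse 2-D Laplacian on an m-by-m grid. Each row has at most
# five nonzeros, so CG's matrix-vector products stress memory bandwidth,
# the kind of bottleneck HPCG is designed to expose.
m = 100
I = sp.identity(m)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()  # m*m = 10,000 unknowns

b = np.ones(A.shape[0])
x, info = cg(A, b, maxiter=1000)  # info == 0 means CG converged

print("converged" if info == 0 else f"stopped after {info} iterations",
      "| residual:", np.linalg.norm(b - A @ x))
```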

And when you rank the current crop of supercomputers according to the newer HPCG benchmark, the picture looks very different:

Top500.org’s November 2017 ranking using the HPCG benchmark
Position | Name | Country | Teraflops
1 | K Computer | Japan | 603
2 | Tianhe-2 | China | 580
3 | Trinity | United States | 546
4 | Piz Daint | Switzerland | 486
5 | Sunway TaihuLight | China | 481
6 | Oakforest-PACS | Japan | 385
7 | Cori | United States | 355
8 | Sequoia | United States | 330
9 | Titan | United States | 322
10 | Mira | United States | 167

The 10th-ranking computer on the TOP500 list, Fujitsu’s K computer, floats all the way up to #1. And the computer that had been at the top, the Sunway TaihuLight, sinks to the #5 position. Perhaps more important is the drastic difference in performance all of these computers show when you compare results from the two benchmarks.

Take, for example, the Sunway TaihuLight. Its theoretical top speed, known as Rpeak, is 125 petaflops (that’s 125 × 10¹⁵ floating-point operations per second). Judged using the LINPACK benchmark, the computer can manage 93 petaflops, about three-quarters of its theoretical performance. But with the HPCG benchmark, it achieves a mere 481 teraflops. That’s just 0.4 percent of the computer’s theoretical performance. So running many problems on the Sunway TaihuLight is like getting into a Dodge Viper, which can in theory go 200 miles per hour [322 kilometers per hour], and never driving it any faster than a Galapagos tortoise.
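
The arithmetic behind those percentages is easy to verify; this quick check in Python uses only the Rpeak, HPL, and HPCG figures quoted above:

```python
# Sunway TaihuLight figures quoted above, all expressed in teraflops
rpeak = 125_000   # Rpeak: theoretical peak of 125 petaflops
hpl   = 93_015    # measured LINPACK (HPL) performance
hpcg  = 481       # measured HPCG performance

print(f"HPL  efficiency: {hpl / rpeak:.1%}")    # about 74 percent
print(f"HPCG efficiency: {hpcg / rpeak:.2%}")   # about 0.38 percent
```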

So are the LINPACK (HPL) results or the HPCG results more representative of real-world operations? Experts regard them as “bookends” that bracket the range of performance users of these supercomputers can expect to experience. I don’t have statistics to back me up, but I suspect the distribution is skewed closer to the HPCG side of the shelf. If that’s true, maybe the TOP500 organization should be using HPCG for its main ranking. That would be more logical, I suppose, but I expect the organizers would be reluctant to do that, given people’s hunger for big numbers, now squarely in the petaflop range for supercomputers and soon to flirt with exaflops.

Perhaps supercomputers should just be required to have written in small letters at the bottom of their shiny cabinets: “Object manipulations in this supercomputer run slower than they appear.”
