For the first time in 21 years, the United States no longer claims even the bronze medal. With this week’s release of the latest Top500 supercomputer ranking, the three fastest supercomputers in the world are now run by China (which holds both first and second place) and Switzerland. And while the supercomputer horse race is spectacle enough unto itself, a new report on the supercomputer industry highlights the broader trends behind both the latest and the last few years of Top500 rankings.
The report, commissioned last year by Riken, Japan’s national research institution, outlines a worldwide race toward exascale computers: the U.S. sees its R&D spending and supercomputer talent pool shrink, Europe jumps into the breach with increased funding, and China pushes hard to become the new global leader, despite a still-small user and industry base ready to exploit the world’s most powerful supercomputers.
Steve Conway, report co-author and senior vice president of research at Hyperion, says the industry trend in high-performance computing is toward laying groundwork for pervasive AI and big data applications like autonomous cars and machine learning. And unlike more specialized supercomputer applications from years past, the workloads of tomorrow’s supercomputers will likely be mainstream and even consumer-facing applications.
“Ten years ago the rationale for spending on supercomputers was primarily two things: national security and scientific leadership, and I think there are a lot of people who still think that supercomputers are limited to problems like will a proton go left or right,” he says. “But in fact, there’s been strong recognition [of the connections] between supercomputing leadership and industrial leadership.”
“With the rise of big data, high-performance computing has moved to the forefront of research in things like autonomous vehicle design, precision medicine, deep learning, and AI,” Conway says. “And you don’t have to ask supercomputing companies if this is true. Ask Google and Baidu. There’s a reason why Facebook has already bought 26 supercomputers.”
As the 72-page Hyperion report notes, “IDC believes that countries that fail to fund development of these future leadership-class supercomputers run a high risk of falling behind other highly developed countries in scientific innovation, with later harmful consequences for their national economies.” (The authors wrote the report in 2016 as part of the industry research group IDC; this year they spun off to form the research firm Hyperion.)
Conway says that solutions to the problems plaguing HPC systems today will be found in the consumer electronics and industry applications of the future. Operating massively parallel computers with multiple millions of cores may today be a problem facing only the world’s fastest and second-fastest supercomputers (China’s Sunway TaihuLight and Tianhe-2, running on 10.6 million and 3.1 million cores, respectively), but that won’t hold true forever. And because China is the only country tackling the problem now, it is more likely to develop the relevant technology first, technology the world will want when cloud computing with millions of cores approaches the mainstream.
The same logic applies to optimizing the ultra-fast data rates that today’s top HPC systems use and minimizing the megawatts of electricity they consume. And as the world’s supercomputers approach the exascale, that is, the 1-exaflop (1,000-petaflop) mark, new challenges will no doubt arise.
So, for instance, the report says that rapidly shutting down and powering up cores not in use will be one trick supercomputer designers employ to trim their systems’ massive power budgets. High storage density, in the 100-petabyte range, will also become paramount to house the big datasets these supercomputers consume.
“You could build an exascale system today,” Conway says. “But it would take well over 100 megawatts, which nobody’s going to supply, because that’s over a 100 million dollar electricity bill. So it has to get the electricity usage under control. Everybody’s trying to get it in the 20 to 30 megawatts range. And it has to be dense. Much denser than any computing today. It’s got to fit inside some kind of building. You don’t want the building to be 10 miles long. And also the denser the machine, the faster the machine is going to be too.”
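Conway’s figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a round-number industrial electricity rate of about $0.12 per kilowatt-hour and continuous year-round operation; the rate is an assumption for illustration, not a figure from the report.

```python
# Rough sanity check of the exascale power-bill claim.
# Assumed (not from the article): ~$0.12/kWh industrial rate, 24/7 operation.
HOURS_PER_YEAR = 24 * 365      # 8,760 hours
RATE_USD_PER_KWH = 0.12        # assumed electricity rate

def annual_power_cost(megawatts: float) -> float:
    """Annual electricity cost in USD for a constant draw of `megawatts`."""
    kilowatt_hours = megawatts * 1_000 * HOURS_PER_YEAR
    return kilowatt_hours * RATE_USD_PER_KWH

# A 100 MW exascale machine, as Conway describes it:
print(f"100 MW: ${annual_power_cost(100) / 1e6:.0f}M per year")   # → $105M
# The 20-30 MW target range designers are aiming for:
print(f"25 MW:  ${annual_power_cost(25) / 1e6:.0f}M per year")    # → $26M
```

At the assumed rate, 100 MW indeed works out to an electricity bill north of $100 million a year, while hitting the 20-to-30-megawatt target brings it down to a few tens of millions.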
Conway predicts that these and other challenges will be surmounted, and the first exaflop supercomputers will appear on the Top500 list around 2021, while exaflop supercomputing could become commonplace by 2023.
Yet for all the attention paid to supercomputer rankings, he also cautions about reading too much significance into any individual machine’s advertised speed, the basis for its rank on the Top500 list.
“The Top500 list is valuable, because the numbers there are kind of like what’s printed on the carton that the computer comes in,” he says. “It’s a not-to-exceed kind of number. But there are computers, I promise you, that don’t even make that list that can run huge and key problems, say, in the automotive industry, faster than anything on that list.”