The TOP500 ranking is based on contenders’ performance on the LINPACK Benchmarks, which measure how fast a computer can solve large systems of linear equations. While this is a convenient way to rank computer performance, it doesn’t reflect every task supercomputers face. In particular, some must analyze and process huge datasets, so the ability to quickly trace connections between data points matters more to them than raw numerical throughput. That ability is what the newer Graph500 ranking system measures. But the fact remains that computers that hit these benchmarks are lightning-fast—and able to take on ever more complicated modeling and analysis projects.
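In spirit, the LINPACK test times the solution of a dense linear system and converts the elapsed time into a floating-point rate. A minimal sketch of that idea (this is illustrative only, not the tuned, distributed HPL code that TOP500 entrants actually run; the problem size here is tiny by benchmark standards):

```python
import time
import numpy as np

# Illustrative only: real LINPACK/HPL runs use a highly tuned,
# distributed LU solver on matrices filling the machine's memory.
n = 2000  # problem size, chosen small for a quick demo
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)  # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

# Conventional LINPACK operation count: 2/3 * n^3 + 2 * n^2 flops
flops = (2 / 3) * n**3 + 2 * n**2
print(f"{flops / elapsed / 1e9:.2f} gigaflops")
```

A laptop running this sketch lands in the gigaflops range; the machines on the TOP500 list report tens of petaflops, millions of times faster, on the full benchmark.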
The combined speed of all 500 systems—or how fast they’d be if they could all work together—has reached 274 petaflops, up from the 250-petaflop total of the previous TOP500 list in November. This increase (according to the organization’s infographic [pdf]) represents a slowdown in growth compared with the trajectory of recent lists, but the list’s curators still say it’s likely that one such behemoth will break the exaflop barrier by 2020.
The news that no challenger has overtaken the world’s fastest number cruncher in the past six months might disappoint some and call progress toward the exaflop into question. But efforts to produce a 1000-petaflop supercomputer are just ramping up. The Japanese government, for example, has chosen RIKEN, whose supercomputer ranked fourth on the TOP500 list, to develop its own exascale machine by 2020. Mont-Blanc, an EU consortium, is aiming for an exascale computer built from ARM cores, the energy-efficient processors that power smartphones and tablets. China, home of the Tianhe-2, has yet to make any public claims.
Researchers have plenty of reason beyond bragging rights to want exaflop supercomputers.
The most obvious motivation is to handle more data. Next-generation radio telescopes, for instance, may gather more data than current supercomputers can store and process. Scientists also hope to better model physical systems, including Earth’s climate and the human body, and to design new smart materials.
An initiative based in Geneva, the Human Brain Project, is also waiting on exascale computers. With them, researchers hope to model the human brain in enough detail to incorporate and study everything known about how it processes information.
Exascale machines will have to overcome many of the same problems facing the current generation of petascale supercomputers, only more so: excessive power consumption, the difficulty of moving information between parallel lines of computation, and tradeoffs between specialized computation and general-purpose flexibility.
And, of course, they’ll also have to get some 30 times as fast as the current world record holder. So even though there’s currently no progress to report in this famous race, I suggest you keep your seatbelt fastened.