Beyond Tianhe-2

What’s next for super-fast supercomputers?

Photo: Long Hongtao/Xinhua/Corbis

The semi-annual TOP500 ranking of the world’s most powerful supercomputers, announced yesterday, revealed that China’s Tianhe-2 has kept its first-place position. The three-time winner, capable of 33.86 petaflops (33.86 quadrillion floating-point operations per second), remains nearly twice as fast as its nearest competitor, the Titan supercomputer at Oak Ridge National Laboratory.
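As a back-of-the-envelope check on the "nearly twice as fast" claim, a couple of lines of Python will do; Titan's LINPACK score (Rmax) on the same list was about 17.59 petaflops.

```python
# Rough sanity check on the "nearly twice as fast" claim.
tianhe2_pflops = 33.86
titan_pflops = 17.59  # Titan's LINPACK (Rmax) score on the same list

ratio = tianhe2_pflops / titan_pflops
print(f"Tianhe-2 is about {ratio:.2f}x as fast as Titan")  # about 1.92x
```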

The TOP500 ranking is based on contenders’ performance on the LINPACK benchmark, which measures how fast a computer can solve large systems of linear equations. While this is a convenient way to rank computer performance, it doesn’t reflect every task supercomputers might be faced with. In particular, some have to analyze and process huge datasets, so it matters more that they can quickly trace the connections between data points than that they can grind through numerical calculations. Their ability to identify such connections is what the newer Graph500 ranking measures. But the fact remains that computers that top these benchmarks are lightning fast, and able to take on more and more complicated modeling and analysis projects.
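The idea behind a LINPACK-style measurement can be sketched in a few lines: time a dense linear solve and divide the conventional operation count by the elapsed time. This is only an illustration of the principle, not the actual HPL benchmark code; the problem size `n` here is arbitrary.

```python
import time
import numpy as np

n = 1000  # illustrative problem size; real runs use matrices that fill memory
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)  # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

# LINPACK conventionally counts (2/3)n^3 + 2n^2 floating-point operations
flops = (2 / 3) * n**3 + 2 * n**2
print(f"~{flops / elapsed / 1e9:.1f} gigaflops on this machine")
```

The same division—operations performed over time taken—is what turns a timed benchmark run into the petaflops figures the TOP500 list reports.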

The combined speed of all 500 systems (or how fast they’d be if they could all work together) has reached 274 petaflops, up from the 250-petaflop total of the previous TOP500 list in November. This increase (according to the organization’s infographic [pdf]) represents a slowdown in the rate of growth compared with the trajectory of recent lists, but the curators of the TOP500 list still say it’s likely that one such behemoth will break the exaflop barrier by 2020.
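To put the 2020 prediction in perspective, a quick compound-growth calculation shows how fast the top machine would have to improve. Taking the current list as a mid-2014 starting point is an assumption made for illustration.

```python
current_pflops = 33.86  # Tianhe-2's LINPACK score on the current list
target_pflops = 1000.0  # one exaflop = 1,000 petaflops
years = 2020 - 2014     # assumes the current list as a mid-2014 baseline

# constant annual growth factor needed to reach an exaflop by 2020
required_rate = (target_pflops / current_pflops) ** (1 / years)
print(f"the top system would need to grow ~{(required_rate - 1) * 100:.0f}% per year")
```

Sustained growth on that order is roughly in line with the TOP500 list’s historical trend, which is why the curators still consider the 2020 target plausible despite the recent slowdown.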

The news that no challenger has overtaken the world’s fastest number cruncher in the past six months might disappoint some and call progress toward the exaflop into question. But efforts to produce a 1000-petaflop supercomputer are just ramping up. The Japanese government, for example, has chosen RIKEN, whose supercomputer ranked fourth on the TOP500 list, to develop its own exascale machine by 2020. Mont-Blanc, an EU consortium, is aiming for an exascale computer built from ARM cores, the low-power processors found in smartphones and tablets. China, home of the Tianhe-2, has yet to announce its exascale plans publicly.

Researchers have plenty of reason beyond bragging rights to want exaflop supercomputers.

The most obvious motivation is to handle more data. Next-generation radio telescopes, for instance, may gather too much data to store and process using current supercomputers. Scientists also hope to better model physical systems, including Earth’s climate and the human body, and to design new smart materials.

An initiative based in Geneva called the Human Brain Project is also waiting on exascale computers. With them, researchers hope to be able to model the human brain, which will allow them to incorporate and study everything known about how brains process information.

Exascale machines will have to overcome many of the same problems facing the current generation of petascale supercomputers, only more so: excessive power consumption, difficulties transferring information between parallel lines of computation, and having to make tradeoffs between specialized computations and flexibility.

And, of course, they’ll also have to get some 30 times as fast as the current world record holder. So even though there’s no change to report at the top of this famous race, I suggest you keep your seatbelt fastened.

