Why Aren't Supercomputers Getting Faster Like They Used To?

Experts dream of reaching exascale computation rates by 2020, but it may take longer than expected


Currently, the world’s most powerful supercomputers can ramp up to more than a thousand trillion floating-point operations per second, or more than a petaflop. But computing power is not growing as fast as it has in the past. On Monday, the June 2015 edition of the Top500 list of the world’s most powerful supercomputers revealed the beginnings of a plateau in performance growth.

A number of technical and economic factors are interfering with supercomputing improvements. Experts disagree on the cause, but the result could be a slower pace of progress in some scientific fields.

Projections for computing hardware are based on Moore’s Law, which predicts that the number of transistors on an integrated circuit will double about every two years, a pace that has historically translated into exponential growth in performance. Supercomputer power is, for the most part, expected to follow the same curve.
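To make that curve concrete, here is a minimal Python sketch of the doubling Moore’s Law describes; the starting transistor count is illustrative, not any particular chip’s real figure.

```python
# Minimal sketch of the exponential growth implied by Moore's Law:
# a quantity that doubles roughly every two years.

def moores_law_projection(start_value, years, doubling_period_years=2.0):
    """Project a value forward, assuming it doubles every doubling_period_years."""
    return start_value * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    start_transistors = 1e9  # illustrative starting point, not a real chip's count
    for years in (2, 4, 6, 10):
        projected = moores_law_projection(start_transistors, years)
        print(f"After {years:2d} years: about {projected:.1e} transistors")
```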

In the past, that was exactly the case: the aggregate performance of the Top500 list, tallied in petaflops, doubled each year. But in the past few years, each new annual total has been only about 1.5 times as great as the one preceding it.

The development rate began tapering off around 2008. Between 2010 and 2013, aggregate increases ranged between 26 percent and 66 percent. And on this June’s list, there was a mere 17 percent increase from last November.
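For a rough sense of how much that slower pace matters, the sketch below compounds an illustrative aggregate figure at the historical doubling rate and at the more recent 1.5x rate; the starting value is a placeholder, not an actual Top500 total.

```python
# Compare aggregate performance growth at the historical pace (doubling
# each year) versus the more recent pace (about 1.5x each year).
# The starting figure is illustrative, not an actual Top500 total.

def project(start_pflops, annual_factor, years):
    """Return year-by-year projected aggregate performance, in petaflops."""
    values = [start_pflops]
    for _ in range(years):
        values.append(values[-1] * annual_factor)
    return values

if __name__ == "__main__":
    start = 100.0  # illustrative aggregate performance, in petaflops
    doubling = project(start, 2.0, 5)
    slower = project(start, 1.5, 5)
    for year, (fast, slow) in enumerate(zip(doubling, slower)):
        print(f"Year {year}: doubling -> {fast:7.1f} Pflop/s, 1.5x -> {slow:7.1f} Pflop/s")
```

After five years, doubling yields a 32-fold increase, while the 1.5x pace yields only about 7.6-fold.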


What’s behind the trend? One reason, says John Gunnels, IBM’s senior manager of Data Centric Systems, is that the pace of Moore’s Law has slowed. “If you can’t shrink these chips at the rate you were shrinking them before, then you aren’t going to get that doubling of computational power.”

The semiconductor industry seems to be reaching the limits of its ability to shrink chips using conventional chip technology. (IBM researchers are trying to prop up Moore’s Law using silicon-germanium transistor channels in an effort to create a 7-nanometer chip within the next four years.)

The cost of the electricity to power these behemoths has also played a role in slowing supercomputer development. “Can somebody make a computer that has higher performance?” asks Gunnels. “Probably. But it would take a lot more money and power than someone would be willing to supply.”

According to Jack Dongarra, one of the curators of the Top500 list and a faculty member at the University of Tennessee and Oak Ridge National Laboratory, Moore’s Law is not the problem.

“Some people think it’s the end of Moore’s Law, but I don’t think that’s true,” he says. “It comes down to money. It’s not a question of anything else but funding.” Other laboratories could reach the performance of the number one supercomputer, Tianhe-2, if they were willing to pay US $390 million for the same technology, he points out. Dongarra predicts that China’s Tianhe-2 will remain at the top of the supercomputer pyramid for at least two more lists because of the lack of funding for new systems.

Despite the slowdown, many computational scientists expect performance to reach exascale, or more than a billion billion operations per second, by 2020. But the actual trend tells a different story.

“We were aiming for 2020, but it may not be till 2023 when we reach exascale computational rates,” Dongarra says.
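A back-of-the-envelope calculation suggests why the slower pace pushes the target out by roughly three years. The sketch below is illustrative, not a forecast: it assumes a 2015 starting point of roughly 34 petaflops (about the scale of the list’s fastest machine) and solves for the time needed to reach an exaflop, or 1,000 petaflops, at different annual growth rates.

```python
import math

# Back-of-the-envelope estimate: starting from roughly the scale of the
# fastest 2015 machine, how many years until a system reaches an exaflop
# (1,000 petaflops) at a given annual growth factor?

EXAFLOP_IN_PFLOPS = 1000.0

def years_to_exascale(start_pflops, annual_factor):
    """Solve start_pflops * annual_factor**t = 1,000 petaflops for t."""
    return math.log(EXAFLOP_IN_PFLOPS / start_pflops) / math.log(annual_factor)

if __name__ == "__main__":
    start = 34.0  # illustrative: roughly the top system's performance, in petaflops
    for factor in (2.0, 1.5):
        t = years_to_exascale(start, factor)
        print(f"At {factor:.1f}x per year: about {t:.0f} years after 2015")
```

Under these assumptions, doubling each year crosses the exaflop line about five years out, near 2020, while 1.5x growth takes roughly eight years, landing around 2023.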

Though computer scientists in the United States say 2023 is a more feasible timeline for building an exascale supercomputer (and the U.S. government is planning an exascale machine that would cost $200 million), institutions in China and Japan are determined to reach this computational milestone by 2020.

In the meantime, this recent period of apparent stagnation could affect the many fields that rely on supercomputing, such as weather forecasting.

“The National Weather Service uses supercomputers to run physical models to predict what will happen in three to five days,” says Dongarra. “Bigger and faster computers will make those predictions better.” 

This post was corrected on 20 July to better reflect the relationship between transistor density and Moore’s Law.
