Growth in the performance of the world’s fastest supercomputers has started tapering off even as experts look to the next big milestone: an exascale supercomputer capable of performing 1 million trillion floating-point operations per second (1 exaflops). Perhaps that’s why President Barack Obama signed a new executive order to coordinate U.S. efforts in pushing supercomputers beyond today’s limits on semiconductor technology.
Obama’s July 29 order creates a National Strategic Computing Initiative aimed at coordinating the government’s strategy for developing and deploying high-performance computers. One huge strategic goal: “accelerating delivery of a capable exascale computing system that integrates hardware and software capability to deliver approximately 100 times the performance of current 10 petaflop systems across a range of applications representing government needs.” Countries such as China and Japan aim to build their first exascale supercomputers by 2020, whereas the U.S. is currently on track to achieve its first exascale supercomputer by 2023 at the earliest.
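The order’s target works out cleanly in terms of units: 100 times a 10-petaflops system is exactly 1 exaflops. A quick sketch of that arithmetic (illustrative only, using the figures quoted in the article):

```python
# Sanity-check the units behind the executive order's exascale goal.
PETAFLOPS = 10**15  # floating-point operations per second
EXAFLOPS = 10**18

current = 10 * PETAFLOPS   # "current 10 petaflop systems"
target = 100 * current     # "approximately 100 times the performance"

# 100 x 10 petaflops = 1,000 petaflops = 1 exaflops
assert target == 1 * EXAFLOPS
print(f"{target:.0e} flops")  # 1e+18 flops
```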
A second strategic goal involves establishing “a viable path forward for future high-performance computing systems even after the limits of current semiconductor technology are reached.” That 15-year goal refers to a future beyond Moore’s Law, which predicts the number of transistors on integrated circuits will double about every two years. Today’s engineers face an increasingly tough time cramming ever-smaller transistors into the same amount of chip space to boost computing performance. (See IEEE Spectrum’s special report: “50 Years of Moore’s Law.”)
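The doubling cadence of Moore’s Law implies exponential growth, which is why its end looms so large over a 15-year plan. A minimal sketch of that projection (the function name and starting count are hypothetical, not from the order):

```python
# Moore's Law as stated in the article: transistor counts double
# roughly every two years.
def projected_transistors(start_count, years, doubling_period=2.0):
    """Projected transistor count after `years` at Moore's Law pace."""
    return start_count * 2 ** (years / doubling_period)

# Over the order's 15-year horizon, that pace would imply
# 2**7.5, i.e. roughly 181x growth -- the scaling the order
# assumes cannot be taken for granted.
print(round(projected_transistors(1, 15)))  # 181
```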
The challenge of maintaining the expected growth in computing power has become evident in the Top 500 list of the most powerful supercomputers in the world. The latest ranking released in June 2015 revealed the beginning of a plateau in performance growth.
Obama’s executive order lays out three other strategic goals that are no less important than achieving exascale supercomputing and finding a path for computing beyond Moore’s Law. A third goal aims to create lasting partnerships between the public and private sectors to share the fruits of research and development in high-performance computing. Specifically, government agencies would work with computer manufacturers and cloud providers to make high-performance computing more widely available for both public and private researchers.
The fourth strategic goal aims to create “an enduring national HPC ecosystem by employing a holistic approach that addresses relevant factors such as networking technology, workflow, downward scaling, foundational algorithms and software, accessibility, and workforce development.” For example, the government might help develop new programmer tools that make it easier to create applications on new high-performance computing machines. Ideally, such tools would allow programmers to “express programs at more abstract levels and then automatically map them onto specific machines.”
A fifth strategic goal aims to merge two different classes of high-performance computing systems. Historically, such computers have focused on simulations of systems such as flying aircraft, weather and climate, and the interactions of tiny molecules. In the past decade, a second class of high-performance computers has arisen with a focus on collecting and analyzing “Big Data” being collected online and from physical sensors. The National Strategic Computing Initiative would encourage development of hardware and software that combines the best of both classes of computers.
The fact sheet for the National Strategic Computing Initiative lists an additional strategic theme: “Keep the United States at the forefront of HPC capabilities.” If the U.S. government can coordinate well with both academia and industry, it might have a shot at achieving that goal. Horst Simon, deputy director of the Lawrence Berkeley National Laboratory in California, told Science Magazine that the executive order’s creation of a coordinated supercomputing effort within the federal government could make it much easier for the Department of Energy and other individual agencies to make the case for a boost in supercomputing budgets.
Jeremy Hsu has been working as a science and technology journalist in New York City since 2008. He has written on subjects as diverse as supercomputing and wearable electronics for IEEE Spectrum. When he’s not trying to wrap his head around the latest quantum computing news for Spectrum, he also contributes to a variety of publications such as Scientific American, Discover, Popular Science, and others. He is a graduate of New York University’s Science, Health & Environmental Reporting Program.