When Will We Have an Exascale Supercomputer?

2023 if we do it right; tomorrow if we do it crazy

Supercomputer, Superseded: Lawrence Livermore National Laboratory, home to Sequoia (above), will host a much more powerful machine in 2017.
Photo: Lawrence Livermore National Laboratory

The global race to build more powerful supercomputers is focused on the next big milestone: a supercomputer capable of performing 1 million trillion floating-point operations per second (1 exaflops). Such a system will require a big overhaul of how these machines compute, how they move data, and how they’re programmed. It’s a process that might not reach its goal for eight years. But the seeds of future success are being designed into two machines that could arrive in just two years.

China and Japan each seem focused on building an exascale supercomputer by 2020. But the United States probably won’t build its first practical exascale supercomputer until 2023 at the earliest, experts say. To hit that target, engineers will need to do three things. First they’ll need new computer architectures capable of combining tens of thousands of CPUs and graphics-processor-based accelerators. Engineers will also need to deal with the growing energy costs required to move data from a supercomputer’s memory to the processors. Finally, software developers will have to learn how to build programs that can make use of the new architecture.
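To see why data movement looms so large, consider a rough energy budget. The sketch below uses illustrative figures that are assumptions rather than numbers from this article: a 20-megawatt power target of the kind often discussed for exascale systems, roughly 10 picojoules per double-precision operation, and a couple of thousand picojoules to fetch a 64-bit operand from off-chip memory.

```python
# Back-of-the-envelope energy budget for an exascale machine.
# All figures below are illustrative assumptions, not values from the article.

EXAFLOPS = 1e18            # floating-point operations per second
POWER_BUDGET_W = 20e6      # assumed 20-megawatt system power target

# If every joule went to arithmetic, each operation could spend at most:
budget_pj_per_flop = POWER_BUDGET_W / EXAFLOPS * 1e12
print(f"Energy budget per operation: {budget_pj_per_flop:.0f} pJ")   # ~20 pJ

# Assumed (rough, technology-dependent) energy costs:
PJ_PER_FLOP = 10           # one double-precision operation, on-chip
PJ_PER_DRAM_WORD = 2000    # fetching one 64-bit operand from off-chip DRAM

# An algorithm that fetches one operand from main memory per operation
# spends far more energy on the wires than on the arithmetic itself:
ratio = PJ_PER_DRAM_WORD / PJ_PER_FLOP
print(f"Data movement vs. arithmetic: {ratio:.0f}x more energy per operation")
```

The exact numbers matter less than the ratio: unless nearly all operands come from memory sitting close to the compute units, the power bill is set by moving data rather than by the arithmetic, which is why data movement ranks alongside architecture and software as a core exascale challenge.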


The Future of Deep Learning Is Photonic

Computing with light could slash the energy needs of neural networks


This computer rendering depicts the pattern on a photonic chip that the author and his colleagues have devised for performing neural-network calculations using light.

Illustration: Alexander Sludds

Think of the many tasks to which computers are being applied that in the not-so-distant past required human intuition. Computers routinely identify objects in images, transcribe speech, translate between languages, diagnose medical conditions, play complex games, and drive cars.

The technique that has empowered these stunning developments is called deep learning, a term that refers to mathematical models known as artificial neural networks. Deep learning is a subfield of machine learning, a branch of computer science based on fitting complex models to data.
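To make "fitting complex models to data" concrete, here is a minimal sketch, written for this summary rather than taken from the article, of a one-hidden-layer neural network learning the XOR function by gradient descent with NumPy. Full-scale deep learning differs mainly in the size of the models and data and in the specialized hardware used to train them, which is exactly where photonics is meant to help.

```python
import numpy as np

# Minimal illustration of "fitting a model to data": a one-hidden-layer
# neural network learning XOR by gradient descent. (Illustrative sketch,
# not code from the article.)
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

# Model parameters: input-to-hidden and hidden-to-output weights and biases.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5  # learning rate

for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # predictions in (0, 1)

    # Backpropagate the mean-squared-error gradient.
    grad_p = 2 * (p - y) / len(X)
    grad_z2 = grad_p * p * (1 - p)        # through the output sigmoid
    grad_W2 = h.T @ grad_z2
    grad_b2 = grad_z2.sum(axis=0)
    grad_h = grad_z2 @ W2.T
    grad_z1 = grad_h * (1 - h ** 2)       # through the tanh
    grad_W1 = X.T @ grad_z1
    grad_b1 = grad_z1.sum(axis=0)

    # Gradient-descent update: nudge every parameter downhill.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(p, 2))  # outputs should approach [[0], [1], [1], [0]]
```

After a few thousand updates the network's outputs approach the target values 0, 1, 1, 0; training a modern image or language model is the same loop scaled up to billions of parameters and examples, most of it spent on exactly the kind of matrix arithmetic the photonic chip is designed to accelerate.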
