Power Problems Threaten to Strangle Exascale Computing

Three possible solutions: specialized architectures, millivolt switches, and 3-D memory

Stacks Up Nicely: A 3-D memory stack from Tezzaron Semiconductor boosts speed and reduces power draw. Photo: Tezzaron Semiconductor

Exascale Trade-offs: The road to an exaflops supercomputer won’t be smooth. The millivolt switch, for example, would dramatically reduce power draw. But how to make one, and when it would be ready, is anybody’s guess.

For most of the past decade, experts in high-performance computing have had their sights set on exascale computers—supercomputers capable of performing 1 million trillion floating-point operations per second, or 1 exaflops. We’re now at the point where such a machine could be built, experts say, but only at a ridiculous cost.
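
What makes that cost so daunting is largely power. The back-of-the-envelope Python sketch below converts the prose figure into scientific notation and backs out sustained power draw from an assumed energy cost per floating-point operation; the 100-picojoule value is purely illustrative, not a number from the article.

# Back-of-the-envelope sketch: why power dominates the exascale problem.
# The energy-per-operation value below is an illustrative assumption,
# not a figure taken from the article.
EXAFLOPS = 1e6 * 1e12            # 1 million trillion ops/s = 10**18 flops = 1 exaflops
ENERGY_PER_FLOP_J = 100e-12      # assumed ~100 picojoules per operation (hypothetical)

power_watts = EXAFLOPS * ENERGY_PER_FLOP_J
print(f"{power_watts / 1e6:.0f} MW sustained")   # prints "100 MW sustained" at that efficiency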

The Future of Deep Learning Is Photonic

Computing with light could slash the energy needs of neural networks

This computer rendering depicts the pattern on a photonic chip that the author and his colleagues have devised for performing neural-network calculations using light.

Image: Alexander Sludds

Think of the many tasks to which computers are being applied that in the not-so-distant past required human intuition. Computers routinely identify objects in images, transcribe speech, translate between languages, diagnose medical conditions, play complex games, and drive cars.

The technique that has empowered these stunning developments is called deep learning, a term that refers to mathematical models known as artificial neural networks. Deep learning is a subfield of machine learning, a branch of computer science based on fitting complex models to data.
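
For readers who want a concrete picture of what "fitting a complex model to data" looks like, here is a minimal sketch of a tiny artificial neural network trained by gradient descent, using only NumPy. The two-layer architecture, toy dataset, and learning rate are illustrative choices, not anything described in the article.

import numpy as np

# Toy data: two inputs, one binary label (an XOR-like pattern that a
# linear model cannot fit, but a small neural network can).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# A two-layer network: 2 inputs -> 8 tanh hidden units -> 1 sigmoid output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # predicted probabilities
    grad_out = (p - y[:, None]) / len(X)     # gradient of cross-entropy loss
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h**2)    # backpropagate through tanh
    gW1 = X.T @ grad_h; gb1 = grad_h.sum(0)
    for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= 0.5 * grad                  # gradient-descent update

print("training accuracy:", ((p[:, 0] > 0.5) == y).mean())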
