Reconfigurable Optical Networks Will Move Supercomputer Data 100X Faster

Newly designed HPC network cards and software that reshapes topologies on-the-fly will be key to success

Photo-illustration: Shutterstock

Imagine being able to read an entire book in a single second, but only receiving the pages individually over the course of a minute. This is analogous to the woes of a supercomputer.

Supercomputer processors can handle whopping amounts of data per second, but the flow of data between the processor and other computer subsystems is far less efficient, creating a data transfer bottleneck. To address this issue, one group of researchers has devised a system design based on reconfigurable networks, called FLEET, which could potentially speed up the transfer of data 100-fold. The initial design, part of a “DARPA-hard” project, is described in a study published on April 30 in IEEE Internet Computing.

Network interface cards are critical hardware components that link computers to networks, facilitating the transfer of data. However, these components currently lag far behind computer processors in terms of how fast they can handle data.

“Processors and optical networks operate at terabits per second (Tbps), but [current] network interfaces used to transfer data in and out typically operate in gigabit-per-second ranges,” explains Seth Robertson, Chief Research Scientist with Peraton Labs (previously named Perspecta Labs), who has been co-leading the design of FLEET.

Part of his team’s solution is the development of Optical Network Interface Cards (O-NICs), which can be plugged into existing computer hardware. Whereas traditional network interface cards typically have one port, the newly designed O-NICs have two and can support data transfer among many different kinds of computer subcomponents. The O-NICs connect to optical switches, which allow the system to quickly reconfigure the flow of data as needed.
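FLEET's actual switch-control interfaces are not public, but the behavior described above can be sketched with a toy model. At a high level, an optical circuit switch steers light rather than switching packets, so it acts like a rewirable one-to-one mapping between ports; the class name and port assignments below are illustrative assumptions, not details from the project.

```python
class OpticalCircuitSwitch:
    """Toy model: an optical circuit switch as a rewirable port mapping."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.cross_connects = {}  # port -> port, symmetric circuits

    def connect(self, a, b):
        """Tear down any existing circuits on a or b, then link a <-> b."""
        for port in (a, b):
            peer = self.cross_connects.pop(port, None)
            if peer is not None:
                self.cross_connects.pop(peer, None)
        self.cross_connects[a] = b
        self.cross_connects[b] = a

    def peer_of(self, port):
        return self.cross_connects.get(port)

# A GPU's O-NIC port can talk to storage now and be rewired to a
# memory pool a moment later, without touching the endpoints.
switch = OpticalCircuitSwitch(num_ports=8)
switch.connect(0, 4)           # gpu0 <-> storage0
print(switch.peer_of(0))       # 4
switch.connect(0, 5)           # rewire: gpu0 <-> mempool0
print(switch.peer_of(0))       # 5
print(switch.peer_of(4))       # None: old circuit torn down
```

The key property the sketch captures is that rewiring is a switch-side operation: the endpoints keep their single physical connection while the topology changes around them.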

“The connections can be modified before or during execution to match different devices over time,” explains Fred Douglis, a Chief Research Scientist with Peraton Labs and co-Principal Investigator of FLEET. He likens the concept to the peripatetic Grand Staircase in the Harry Potter series’ Hogwarts School. “Imagine Hogwarts staircases if they always appeared just as you needed to walk someplace new,” he says.

To support reconfigurability, the researchers have designed a new software planner that determines the best configuration and adjusts the flow of data accordingly. “On the software side, a planner that can actually make use of this flexibility is essential to realizing the performance improvements we expect,” Douglis emphasizes. “The wide range of topologies can result in many tens of terabits of data in flight at a given moment.”
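The FLEET planner itself is not published, so the following is only a sketch of the general idea: choosing circuit assignments from traffic demand. Since an optical circuit serves one pair of endpoints at a time, a simple greedy matching that favors the heaviest pending transfers illustrates the kind of decision such a planner makes; the function name and demand figures are hypothetical.

```python
def plan_circuits(demands):
    """demands: {(src, dst): bytes_pending}. Returns a list of (src, dst)
    circuits, heaviest demand first, each endpoint used at most once."""
    circuits, used = [], set()
    for (src, dst), _ in sorted(demands.items(),
                                key=lambda kv: kv[1], reverse=True):
        if src not in used and dst not in used:
            circuits.append((src, dst))
            used.update((src, dst))
    return circuits

demands = {
    ("gpu0", "storage"): 8e9,
    ("gpu1", "storage"): 2e9,   # loses out: storage already matched
    ("gpu1", "mempool"): 5e9,
}
print(plan_circuits(demands))
# [('gpu0', 'storage'), ('gpu1', 'mempool')]
```

A production planner would also weigh reconfiguration latency and upcoming phases of the application, but the one-circuit-per-endpoint constraint is the core of the problem.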

The development of FLEET is still in its early stages. The initial design of the O-NICs and software planner was achieved in the first year of what is expected to be a four-year project. But once complete, the team anticipates that the new network interface will reach speeds of 12 Tbps based on the current (fifth) generation of PCIe (an interface standard that connects network interface cards and other high-performance peripherals), and could reach higher speeds with newer generations of PCIe.
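As a back-of-the-envelope check, not a figure from the paper: PCIe 5.0 runs at 32 GT/s per lane with 128b/130b line coding, so a full x16 slot carries roughly half a terabit per second in each direction. Reaching 12 Tbps therefore implies bandwidth pooled across many O-NIC links; how FLEET distributes that aggregate is an assumption here, not a published detail.

```python
GT_PER_SEC = 32          # PCIe 5.0 raw rate per lane
ENCODING = 128 / 130     # 128b/130b line-coding overhead
LANES = 16               # a full-width slot

per_slot_tbps = GT_PER_SEC * LANES * ENCODING / 1000
print(f"{per_slot_tbps:.3f} Tbps per x16 slot, per direction")

links_needed = 12 / per_slot_tbps
print(f"~{links_needed:.0f} such links to aggregate 12 Tbps")
```

The arithmetic mainly shows why newer PCIe generations matter: each generation doubles the per-lane rate, halving the number of links needed for the same aggregate.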

Importantly, Robertson notes that FLEET will depend almost entirely on off-the-shelf components, with the exception of the newly designed O-NICs, meaning FLEET can be easily integrated into existing computer systems.

“Once we can prove [FLEET] meets its performance targets, we'd like to work to standardize its interfaces and see traditional hardware vendors make this highly adaptable networking topology widely available,” says Robertson, noting that his team plans to open-source the software.

This article appears in the June 2021 print issue as “Computing on FLEET.”
