The December 2022 issue of IEEE Spectrum is here!


New Photonics Engine Promises Low-Loss, Energy-Efficient Data Capacity for Hyperscale Data Centers

A research team at Intel has developed a co-packaged optical processor that could bypass a fast-approaching bottleneck in server hardware

3 min read
The team at Intel has developed a photonic engine with the equivalent data capacity of sixteen 100-Gb/s transceivers, or 4 of the latest 12.8-Tb/s generation.
Photo: Intel

At the Optical Networking and Communication Conference in San Francisco, which wrapped up last Thursday, a team of researchers from Intel described a possible solution to a computing problem that keeps data server engineers awake at night: how networks can keep up with our growing demand for data.

The amount of data used globally is growing exponentially. Reports from last year suggest that something like 2.5 quintillion bytes of data are produced each day. All that data has to be routed from its origins—in consumer hard drives, mobile phones, IoT devices, and other processors—through multiple servers as it finds its way to other machines.
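Taken at face value, that round figure implies a striking sustained data rate. A quick back-of-the-envelope check (treating the cited 2.5 quintillion bytes per day as exact, which it is not — it is an estimate):

```python
# Back-of-the-envelope: the ~2.5 quintillion bytes/day figure cited above,
# converted to an average worldwide data rate. Numbers are illustrative only.
BYTES_PER_DAY = 2.5e18       # 2.5 quintillion bytes = 2.5 exabytes
SECONDS_PER_DAY = 86_400

bytes_per_second = BYTES_PER_DAY / SECONDS_PER_DAY
bits_per_second = bytes_per_second * 8

print(f"{BYTES_PER_DAY / 1e18:.1f} EB produced per day")
print(f"~{bits_per_second / 1e12:.0f} Tb/s, averaged around the clock")
```

Even spread evenly across a full day, that is hundreds of terabits every second that must transit some network hardware.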

“The challenge is to get data in and out of the chip,” without losing information or slowing down processing, said Robert Blum, the Director of Marketing and New Business at Intel. Optical systems, like fiber-optic cables, have been in widespread use as an alternative data-transmission medium for decades, but loss still occurs at the inevitable boundaries between materials in a hybrid optoelectronic system.

The team at Intel has developed a photonic engine with the equivalent data capacity of sixteen 100-Gb/s transceivers, or 4 of the latest 12.8-Tb/s generation. The standout feature of the new chip is its co-packaging, a method of close physical integration of the necessary electrical components with faster, low-loss optical ones.
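The capacity figures quoted above are self-consistent, as a quick sanity check shows (using only the numbers from this article):

```python
# Sanity check: sixteen 100-Gb/s transceivers should match one of Intel's
# 1.6-Tb/s photonic engines described later in the article.
transceiver_gbps = 100
engine_tbps = 16 * transceiver_gbps / 1000   # convert Gb/s to Tb/s
print(engine_tbps)  # 1.6
```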


The close integration of the optical components allows Intel’s engine to “break the wall” of the maximum density of pluggable port transceivers on a switch ASIC, according to Blum. More ports on a switch—the specialized processor that routes data traffic—mean higher processing power, but only so many connectors can fit together before overheating becomes a threat.

The photonic engine brings the optical elements right up to the switch. Optical fibers require less space to connect and improve air flow throughout the server without adding to its heat waste. “With this [co-packaging] innovation, higher levels of bit rates are possible because you are no longer limited by electrical data transfer,” said Blum. “Once you get to optical computing, distance is free—2 meters, 200 meters, it doesn’t matter.”

Driving huge amounts of high-speed data over the foot-long copper traces, as is necessary in standard server architectures, is also expensive—especially in terms of energy consumption. “With electrical [computation], as speed goes higher, you need more power; with optical, it is literally lossless at any speed,” said lead device integration engineer Saeed Fathololoumi.

“Power is really the currency on which data centers operate,” added Blum. “They are limited by the amount of power you can supply to them, and you want to use as much of that power as possible to compute.”

The co-packaged photonic engine currently exists as a functional demo back at Intel’s lab. The demonstration at the conference used a P4-programmable Barefoot Tofino 2 switch ASIC capable of speeds reaching 12.8 terabits per second, in combination with Intel's 1.6-Tbps silicon photonics engines. “The optical interface is already the standard industry interface, but in the lab we’re using a switch that can talk to any other switch using optical protocols,” said Blum.
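The demo's numbers fit together neatly: fully serving the 12.8-Tb/s switch ASIC with 1.6-Tb/s photonic engines would take eight of them (this count is implied by the article's figures, not stated in it):

```python
# How many 1.6-Tb/s co-packaged photonic engines would fully serve
# a 12.8-Tb/s Tofino 2 switch ASIC?
switch_tbps = 12.8
engine_tbps = 1.6
engines_needed = switch_tbps / engine_tbps
print(engines_needed)  # 8.0
```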

It’s the first step toward an all-optical input-output scheme, which may offer future data centers a way to cope with the rapidly expanding data demands of the Internet-connected public. For the Intel team, that means working with the rest of the computing industry to define the initial deployments of the new engines. “We’ve proven out the main technical building blocks, the technical hurdles,” said Fathololoumi. “The risk is low now to develop this into a product.”


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

11 min read
A plate of spaghetti made from code
Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures almost guarantee that their software contains more bugs than it otherwise would.
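The “nasty surprises” in question typically come from hidden state, which functional programming avoids. A minimal illustration (the function names here are invented for this sketch, not from the article): a function that mutates shared state returns different results for identical calls, while its pure counterpart cannot.

```python
# Impure: reads and mutates shared state, so the same call can return
# different values over time -- the kind of surprise that bites in maintenance.
counter = {"n": 0}

def next_id():
    counter["n"] += 1
    return counter["n"]

# Pure: the result depends only on the argument, so every call is predictable.
def succ(n):
    return n + 1

print(next_id(), next_id())  # 1 2  -- same call, different results
print(succ(1), succ(1))      # 2 2  -- same call, same result
```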
