Sad but true: About three-quarters of the time, your computer processor is doing nothing more than waiting for data—the cybernetic equivalent of twiddling one's thumbs. It doesn't matter whether you've got the latest processor, surrounded it with high-speed RAM, or lovingly hot-rodded your system with the latest in liquid cooling. Your speed is primarily set not by the processing power you have but by the connections that stand between that processor and the data it needs.
The problem is that data transfer is accomplished by the movement of an electronic signal along old-fashioned copper wires—the same basic phenomenon that a century and a half ago carried news of the U.S. Civil War over telegraph lines. It's time we saw the light—literally—and stopped shackling ourselves to electrons moving along copper conductors.
For decades, engineers have sought to transfer signals from chip to chip with photons. Photons are more than just fast; unlike electrons, they lack an electric charge. That means they can't interfere with one another to produce cross talk, the kind of noise that, like the din of a boisterous party, can turn a conversation into a game of charades. For many years, however, the optoelectronic strategy has been hindered by the problem of getting photons to go where you want them to go. Metal connections can be laid down on semiconductor wafers with exquisite precision, and they can easily be formed into networks that branch out from central lines, the same way capillaries branch out from arteries. It's far harder to accomplish this feat when laying down a system of tiny optical channels.
At IBM, we have now developed a first-of-its-kind optical data-transfer system, or bus, built right onto the circuit board. With it, we will soon unveil computer systems 100 times as fast as anything available today. With that much muscle, scientists will at last be able to visualize wondrous things in detail: how the climate will react to man-made greenhouse gases, how neurons organize to form a brain, how to custom design a drug to treat an individual patient.
Ever since the early days of microprocessors, data has shot back and forth far faster inside the chip than between the chip and external components, such as memory and input/output ports. Data transfers within a microprocessor—for example, between the processing core and on-chip cache memories—have been operating at multigigahertz clock rates for more than a decade. But transfers between the chip and external memories along those copper conduits are typically an order of magnitude slower. This bandwidth gap will continue to widen as processor performance continues to climb and multicore architectures become more elaborate.
Copper can't keep up, because it faces simple physical limits. Shoot an oscillating signal down a long copper line on a printed circuit board and it'll lose about half its strength at 2 gigahertz—and a staggering 98 percent at 10 GHz. Most of that loss stems from two effects. First, the oscillating signal induces stray currents in the board's conductors that suck away energy. Second, induced currents inside the wire itself push electrons to the surface of the metal, reducing the effective cross section of the wire and thus raising resistance. The higher the frequency—that is, the clock rate—of the signal, the greater the losses will be.
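The frequency dependence of those losses can be put in rough numbers. The sketch below is a back-of-envelope illustration, not a measurement from any particular board: the copper resistivity and permeability are standard handbook values, and the loss percentages are the figures quoted above. It computes the skin depth (how thin a shell of the wire actually carries current) and restates the losses in decibels:

```python
import math

def skin_depth_m(freq_hz, resistivity=1.68e-8, mu=4 * math.pi * 1e-7):
    """Depth at which current density falls to 1/e of its surface value.
    Defaults are handbook values: copper resistivity (ohm-m) and the
    permeability of free space (H/m)."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

def loss_db(fraction_lost):
    """Express a fractional power loss as attenuation in decibels."""
    return -10 * math.log10(1 - fraction_lost)

for f in (2e9, 10e9):
    print(f"{f / 1e9:.0f} GHz: skin depth = {skin_depth_m(f) * 1e6:.2f} um")

# The loss figures cited in the text, restated in dB:
print(f"50% loss = {loss_db(0.50):.1f} dB, 98% loss = {loss_db(0.98):.1f} dB")
```

At 2 GHz the current is squeezed into roughly the outer 1.5 micrometers of the conductor, and at 10 GHz into well under a micrometer, which is why resistance climbs with clock rate even though the wire itself hasn't changed.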
To make matters worse, severe resonances occur at a few gigahertz, at which point the signal begins to reflect off metal paths in the vias, the vertical conductors that connect elements of a circuit board. It gets worse still as bit rates approach 10 gigabits per second, when cross talk blurs the signal, even at distances of less than a meter.
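Those reflections arise wherever the line's characteristic impedance changes abruptly, as it does where a trace meets a via. A minimal sketch of the standard reflection-coefficient formula shows the effect; the 50-ohm trace and 35-ohm via values here are illustrative assumptions, not figures from the article:

```python
def reflection_coefficient(z_load, z_line):
    """Fraction of the incident voltage wave reflected at an impedance
    discontinuity: gamma = (Z_load - Z_line) / (Z_load + Z_line)."""
    return (z_load - z_line) / (z_load + z_line)

# Hypothetical example: a 50-ohm board trace hitting a lower-impedance via
gamma = reflection_coefficient(35.0, 50.0)
print(f"about {abs(gamma) * 100:.0f}% of the signal amplitude reflects back")
```

Each such reflection robs the forward-traveling signal of energy and, bouncing between discontinuities, sets up the resonances described above.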
These problems are especially bad when you're yoking together the multichip modules of a massively parallel computer. When one module must link up with another at the other end of a circuit board or, worse still, in a different rack of equipment, the bandwidth bottleneck becomes particularly severe. That's why today's highly parallel machines can reach peak performance only when solving those specialized problems that can be readily divided into many tasks that can be processed independently.
By avoiding all those signal-loss and cross-talk problems, an optical bus would make supercomputers go much faster. It would also make them easier to program, because programmers wouldn't have to take special measures to compensate for such severe communication delays among processors.
Fiber-optic lines first began proliferating in the 1980s in long-distance telecom networks. By the late 1990s, fiber-optic links had found their way into local and storage area networks, interconnecting systems hundreds of meters apart. Over the next decade, the technology kept moving down to ever smaller dimensions as its cost and power needs kept falling and the bandwidth requirements of computer systems kept rising.