Ethernet link speeds of 100 megabits per second or even 1 gigabit per second are typical right now in local area networks, but it's very unlikely that you need that much bandwidth all the time. Studies show that on average, people use their Ethernet links at full throttle less than 5 percent of the time. But the circuitry on the network-interface controller, the chip that connects your computer to the network, is always running at full speed, wasting power. In 2005, all the network-interface controllers in the United States--computers, switches, and routers all have them--burned through 5.3 terawatt-hours of energy, enough to keep 6 million 100-watt lightbulbs shining all year.
"There's no reason to have a 1-gigabit link when there's no traffic on it," says Ken Christensen, a computer science and engineering professor at the University of South Florida, in Tampa. Christensen and Bruce Nordman, a researcher at the Lawrence Berkeley National Laboratory, in California, have devised one of two schemes vying to become a standard that, if put into practice, would save some of the wasted watts. Their seemingly simple solution: adapt the Ethernet link's speed to match a device's needs. If you were checking e-mail, for instance, 100 Mb/s would be enough, but the network controller would shift to 1 Gb/s when downloading a large file. The researchers described the concept, called Adaptive Link Rate, last month in IEEE Transactions on Computers.
At the lower data speeds, the network controller chip's circuits would work at slower clock rates, and some might be turned off, cutting power use. Christensen and Nordman estimate that if the networking devices in homes, offices, and data centers now running at 1 Gb/s switched to 100 Mb/s whenever possible, the change could save more than US $300 million in energy costs.
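The idea can be pictured as a simple rate-selection policy driven by how much traffic is waiting to be sent. The sketch below is illustrative only; the rates, queue thresholds, and hysteresis values are hypothetical and are not taken from Christensen and Nordman's actual design.

```python
# Illustrative sketch of an adaptive-link-rate policy: pick the link
# speed from the transmit-queue depth. Thresholds and rates here are
# hypothetical, chosen only for demonstration.

LOW_RATE = 100_000_000      # 100 Mb/s
HIGH_RATE = 1_000_000_000   # 1 Gb/s
HIGH_THRESHOLD = 32_000     # bytes queued: step up to avoid delay
LOW_THRESHOLD = 2_000       # bytes queued: safe to step back down

class AdaptiveLink:
    def __init__(self):
        self.rate = LOW_RATE    # an idle link starts at the low rate

    def on_queue_change(self, queued_bytes):
        """Choose a link rate. Using two thresholds (hysteresis)
        avoids oscillating when the queue hovers near one value."""
        if self.rate == LOW_RATE and queued_bytes > HIGH_THRESHOLD:
            self.rate = HIGH_RATE   # burst arriving: speed up
        elif self.rate == HIGH_RATE and queued_bytes < LOW_THRESHOLD:
            self.rate = LOW_RATE    # traffic drained: save power
        return self.rate

link = AdaptiveLink()
print(link.on_queue_change(500))      # light e-mail traffic: stays low
print(link.on_queue_change(50_000))   # large download: steps up
print(link.on_queue_change(10_000))   # between thresholds: stays high
print(link.on_queue_change(1_000))    # drained: steps back down
```

The hysteresis is what keeps such a policy from thrashing, which matters because, as the article notes below, each rate switch is expensive today.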
The savings would be even greater if the links switched between 10 Gb/s and 100 Mb/s. Ten-gigabit links--expected to be widespread by 2010--use 10 to 20 W more power than 100 Mb/s links, while 1-Gb/s links use about 4 W more.
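Those figures suggest the scale of the savings per link. Taking the midpoint of the 10-to-20-W range and assuming the link could drop to 100 Mb/s for the 95 percent of the time it sits below full throttle, a back-of-the-envelope estimate:

```python
# Back-of-the-envelope annual savings for one 10 Gb/s link that steps
# down to 100 Mb/s whenever idle. Inputs come from the article's
# figures; the 15 W value is an assumed midpoint of the 10-20 W range.

extra_power_w = 15.0     # extra draw of 10 Gb/s vs. 100 Mb/s (assumed midpoint)
idle_fraction = 0.95     # links run at full throttle < 5% of the time
hours_per_year = 8760

saved_kwh = extra_power_w * idle_fraction * hours_per_year / 1000
print(f"about {saved_kwh:.0f} kWh saved per link per year")
```

Roughly 125 kWh a year per link, under these assumptions, which is why the aggregate numbers across millions of controllers get large quickly.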
But Christensen and Nordman's concept will take some effort to implement. Switching between Ethernet speeds is time-consuming. "When you change link rate today, you have to drop the link and reestablish it, which takes [up to] 2 seconds," says Nordman.
However, rate switching would have to happen in less than a millisecond to be practical. That means researchers will need to come up with a much faster protocol for the two ends of an Ethernet link--say, a PC and a switch--to coordinate their link rates.
The industry is weighing the Adaptive Link Rate scheme against another one, hatched at Intel, which promises to be even more energy efficient. Called low-power idle, it proposes transferring data on an Ethernet link at the highest possible rate and then putting the network controller chip into a sleeplike state. "You're better off sending data faster and getting to sleep quicker, which allows you to save more power over the long haul," says Robert Hays, a strategic planner for networking products at Intel.
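Hays's argument can be checked with a simple energy model: moving the same data fast and then sleeping beats moving it slowly, as long as the sleep state draws little power. All the power numbers below are hypothetical, chosen only to illustrate the trade-off, not measured values for either scheme.

```python
# Energy to move a fixed burst of data within a fixed interval, two ways:
#  - adaptive rate: send at a low rate, link active the whole interval
#  - low-power idle: send at the high rate, then sleep for the remainder
# All power figures are hypothetical, for illustration only.

def energy_joules(bits, interval_s, rate_bps, p_active_w, p_idle_w):
    """Active power while transmitting, idle power for the rest."""
    tx_time = bits / rate_bps
    assert tx_time <= interval_s, "rate too low to finish in the interval"
    return p_active_w * tx_time + p_idle_w * (interval_s - tx_time)

BITS = 100e6       # a 100-megabit burst
INTERVAL = 1.0     # one such burst per second

# Adaptive rate: 100 Mb/s link busy the full second at an assumed 2 W.
adaptive = energy_joules(BITS, INTERVAL, 100e6, p_active_w=2.0, p_idle_w=2.0)

# Low-power idle: 1 Gb/s link at an assumed 4 W for 0.1 s, then 0.3 W asleep.
lpi = energy_joules(BITS, INTERVAL, 1e9, p_active_w=4.0, p_idle_w=0.3)

print(f"adaptive rate: {adaptive:.2f} J, low-power idle: {lpi:.2f} J")
```

With these made-up numbers, racing to sleep uses roughly a third of the energy per second, even though the high-rate circuits draw twice the active power; the comparison flips only if the sleep state cannot be made cheap or the wake-up cost is high.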
The trouble is that turning on a dormant network card quickly is a challenge. Still, for link speeds up to 1 Gb/s, Hays says, turning circuits on and off is easier than switching between rates. An IEEE standards task force recommended the Intel scheme for 1-Gb/s links.
But for faster, 10-Gb/s links, where there is more potential for power savings, it's not yet clear which of the schemes would be easier to implement and would save more power.
No matter what scheme the industry chooses, a complete redesign of the network-interface controller system is needed, says Hugh Barrass, a technical leader at Cisco. "[We] should expect to take two to three generations before equipment gets the most efficient it can be," Barrass says.