Hardware Hangover

Vendors hope companies start replacing obsolete machines, but customers want more bang for bucks already spent


This is part of IEEE Spectrum's special report: What's Wrong—What's Next: 2003 Technology Forecast & Review.


For corporations the world over, the tech bubble of the late 1990s was an orgy of excess, which, like all parties that go on too long and involve far too much consumption, ended in a brutal hangover. Information technology (IT) departments simply bought too many servers, storage devices, and PCs in preparation for Y2K, the introduction of the euro, and an e-commerce bonanza that, like an absinthe-induced hallucination, seemed very real at the time, but vanished following the dot-com crash.

The numbers are sobering. After declining in 2001 following a decade of double-digit growth, worldwide IT spending will gain an anemic 3.4 percent in 2002 over 2001, with revenue totaling US $2.3 trillion, according to Dataquest Inc., a unit of Gartner Inc. (Stamford, Conn.).

Trouble Signs

  • Companies still are digesting hardware bought during the TECH BUBBLE
  • REPLACEMENT of obsolete hardware is being DELAYED until the economy picks up
  • IT departments find it hard to cope with the COMPLEXITY OF SYSTEMS already in place

Wish List

  • Software that will help companies manage IT RESOURCES more cost-effectively
  • CONSOLIDATED, integrated systems that give users access to computing and storage power ON DEMAND
  • Faster INTERCONNECTS to transfer data among computing and storage resources

To Gartner, the IT sector encompasses hardware, software, and services as well as telecommunications equipment and services [for more on this topic, see pp. 26-38]. While spending on software and services has risen over the last couple of years, hardware spending has lagged far behind, declining 13.8 percent in 2001 and 1.3 percent in 2002. With hardware spending off and profit margins on hardware perpetually shrinking, companies like Hewlett-Packard, Sun Microsystems, and EMC Corp. are following IBM Corp. to where the biggest growth opportunities lie: in software and services that help companies consolidate and manage their existing systems.

For some hardware segments, like workstations and PCs, the problem is clearly a consequence of Moore's Law—falling prices [see "Tackling the Chip Glut"]. Unit shipments of workstations remained steady at around 1 456 000 in both 2001 and 2002 and are projected to rise in 2003 to 1 486 000, but revenues are falling from $6.4 billion in 2001 to a projected $5.1 billion in 2003. Unit shipments of desktop PCs also languished. They barely increased in 2002 from 2001 (the first year since 1985 that sales shrank), but look poised to rise from 98 million to 107 million units, or 8.6 percent, in 2003, thanks to strength in Asia. Revenues will be up, too, though just barely—3.1 percent, according to Gartner's projections.

Overall, the IT market is maturing its way to sustainable, albeit unspectacular, growth. The hockey stick growth curves of the 1990s just aren't feasible over the long term. In equipment-saturated markets like the United States and Europe, replacement demand is all hardware vendors can hang their hopes on for 2003, says Axel Pols, manager of market research for Bitkom e.V. (Berlin, Germany), a technology research firm.

New growth opportunity lies with software that helps companies manage the systems they have now. With a consolidated system in hand, they can do more than replace—they can move up to more powerful storage and servers.

Maintenance is a high priority. "The No. 1 problem today is the complexity of the systems we put in," says Tony Scott, chief technology officer of General Motors Corp. (GM, Detroit, Mich.). "When a system fails, it's usually because we failed to understand how the various pieces fit together and what effect a change in one part of the system will have on another."

Perhaps surprisingly, maintaining existing equipment sops up 75 percent of IT budgets, according to Zeus Kerravala, vice president of enterprise infrastructure for Yankee Group, the market research firm in Boston. Sales of new equipment in effect depend on chief information officers' ability to save money elsewhere. It can be done if companies consolidate IT assets to make them cheaper to operate. Otherwise, money spent to "keep the lights on" will keep rising to the point where IT budgets go entirely to support the current infrastructure.

But while companies squeeze every last drop of life from their servers, PCs, and workstations, hoping to stave off the replacement cycle for another few months, they still have an insatiable appetite for storage, right?

Don't bet on it...

Tech Watch

THE iSCSI INTERCONNECT PROTOCOL lets companies leverage the Ethernet and Internet to transfer data to and from storage-area networks, thereby making them affordable to small and medium-sized businesses

A BLADE SERVER is a compact and powerful board of processors, memory, and network connections that would normally take up several boards. Blades enable the consolidation of data centers and are big space and money savers

These days, the disk storage business is looking a lot like the server market. "There has been growth in unit sales of servers, but not much growth in dollars and I think storage is beginning to take on that same characteristic," says Gartner's research vice president Roger W. Cox.

Spending on storage IT infrastructure has been diminishing and that trend won't change any time soon, says Cox. Down about 19 percent in 2001, the numbers for 2002 look almost as dismal, with another decline of 12 or 13 percent likely. Direct attach storage—disks hooked up directly to servers—continues its slide toward oblivion, with a 15 percent decline projected for 2002. For 2003, Cox forecasts a decline of more than 3 percent for the overall storage market, and that could yet prove too optimistic if the world economic malaise lingers. He pegs the compound annual growth rate for the overall storage market as precisely zero between 2001 and 2006.

There is some hope. The market for what Gartner calls fabric-attached storage will grow at a robust annual rate of 17 percent between 2001 and 2006. Fabric-attached storage encompasses both network-attached storage (NAS) and storage-area networks (SANs). NAS embraces servers dedicated exclusively to file sharing, and SANs consist of a subnetwork of shared disk arrays that are made available via high-speed interconnects, or "fabric," to the network at large. The strength of fabric-attached storage schemes is that they free servers from storage management so they can concentrate on running applications and completing computational tasks.

But SAN and NAS aside, nothing is going to spur an increase in storage hardware sales over the next five years, Cox tells IEEE Spectrum. "Storage is becoming an established market. And when you get to be a $20-$25-billion-a-year market, you're hard pressed to say this thing is going to grow 20 percent year after year," he adds.

Managing most efficiently is key

Companies want to get the most out of the systems and equipment they already own, says IEEE Fellow Mark E. Dean, vice president and Fellow of architecture and design at IBM's Storage Systems Group (Tucson, Ariz.).

"Companies should be asking themselves 'what are the software elements needed to dynamically configure—or reconfigure—our machines to match our changing needs?,' " he says. "It's not all that we love to put out that's going to be the key—it's how to efficiently use it and manage it."

GM has asked, and answered, the kinds of questions Dean poses. It has consolidated systems to slash operational costs and spent the savings on new initiatives that increased productivity, improved inventory control, and helped the company trim more fat in other areas of the business, according to GM's Scott. The company's IT budget now amounts to around $3 billion per year, compared to $4 billion in 1996, with most of the savings due to reduced operational costs over the last seven years.

GM's consolidation strategy spreads across the breadth of its multinational enterprise. It combined four e-mail systems into one, for instance, and 14 different CAD systems used by its design groups into one common platform. As a result, product development cycles shrank from four years to 18 months.

The customers of Dell Computer Corp. (Round Rock, Texas) are also consolidating, says Randy Groves, vice president of Dell's Enterprise Systems Group. To wring out more performance from systems already in place, they're combining what was on two or three servers onto one to reduce both maintenance expenses and administrative overhead.

But when stand-alone servers from separate departments within a company are moved to central data centers, those data centers get crowded. "We're butting up against limits of standard form factors," says Groves.

Enter the blade server

So Dell, like its rivals, is beginning to sell blade servers. A blade is essentially a single circuit board that combines the processors, memory, and network connections that used to take up several boards. It can be slotted into a chassis with other blades, so they can share a common power subsystem and common system management software, often in a configuration known as a cluster.

One start-up company looking to exploit the power of blade clusters is Tsunami Research Inc. (St. Louis, Mo.). Its Hive computing software hooks up several cheap blade servers of the sort Dell sells and makes them appear as one machine, a technique known as virtualization.

With a Hive, one administrator can install a security patch for Windows XP while the Hive is running applications on 1000 machines spread over multiple locations. Individual blades, or workers, in the Hive negotiate the order in which to update themselves. A worker will complete the task it is doing, take itself offline, install the patch, test itself, and start working again if all goes well. The next worker will begin the installation and so on, until the whole system is updated.
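The rolling-update sequence described above can be sketched as a short simulation. The `Worker` class and method names here are purely illustrative, not Tsunami's actual software interfaces:

```python
# Illustrative sketch of the rolling update described above: each worker
# finishes its task, takes itself offline, installs the patch, tests itself,
# and rejoins before the next worker begins. Hypothetical names throughout.

class Worker:
    def __init__(self, name):
        self.name = name
        self.patched = False
        self.online = True

    def self_test(self):
        # A real worker would run diagnostics; here, just check the patch took.
        return self.patched

    def apply_patch(self, log):
        log.append(f"{self.name}: finishing current task")
        self.online = False          # take itself offline
        self.patched = True          # install the patch
        assert self.self_test()      # test itself before rejoining
        self.online = True
        log.append(f"{self.name}: back online")


def rolling_update(workers):
    """Update workers one at a time so the hive as a whole keeps serving."""
    log = []
    for w in workers:                # workers negotiate an order; here, list order
        w.apply_patch(log)
        # after each step every worker is back online before the next starts
        assert all(x.online for x in workers)
    return log
```

The point of the sketch is the invariant: at no moment is more than one worker out of service, so the application keeps running throughout the update.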

Tsunami expects commodity components to fail and so makes mission-critical applications robust enough to continue to work through both routine maintenance procedures and disasters. Tsunami Research founder and CEO Bob Lozano also urges prospective customers to build systems "with the most aggressive price performance. Use Intel's Pentiums instead of the more expensive Xeon microprocessors."

Utility computing powers up

Unsung Hero

Mark E. Dean, IEEE Fellow, member of the National Inventors Hall of Fame, an IBM Fellow, and a vice president of IBM's Storage Systems Group, creates future storage systems that will leverage the power of autonomic (self-repairing) computing models and the flexibility of mass-market communication devices


Under Fire

Sun Microsystems CEO Scott McNealy presides over Sun's share price collapse (to US $2.34 from $64 two years ago), says bye-bye to top-level management (and $195 billion in market capitalization) and watches sales swoon 32 percent from a high of over $18 billion two years ago as Dell and other commodity server vendors eat Sun's lunch, dinner, and dessert

As Dell builds server market share with its standardized Intel platforms, and as start-ups like Tsunami design applications to boost the efficiency and reliability of commodity components, big IT vendors seek new revenue streams from software that automates enterprise management and maintenance tasks.

This past fall, IBM spun out its autonomic computing initiative into a separate unit, and Sun Microsystems Inc. (Santa Clara, Calif.) put grid computing front and center with its N1 program [see "Helping Computers Help Themselves," September, pp. 49-53]. For its part, Hewlett-Packard Co. (Palo Alto, Calif.) is helping companies consolidate their computing and storage resources into data and storage centers that are, in essence, the power plants of a utility model of computing.

For its Utility Data Center (UDC), H-P does everything from designing the physical layout of data centers to providing software that turns individual machines into pools of computing power and storage capacity. The idea, according to Nicholas ("Nick") Vanderzweep, director of infrastructure solutions for H-P, is that "you plug into the wall, get access to compute power, networking, and storage, use as much as you need, and then unplug from the wall, and free it all up again for others to use."

Consider a company that has located its Web retail system and its financial system in the same conventional data center. If the Web retail system uses up all its capacity, transactions must wait their turn to be processed. Even though the financial system might be running at only 30 percent of its capacity, the retail system has no automatic way of borrowing from it.

In a UDC, an administrator can use a mouse to drag a server running accounting software away from the financial system and drop it into the Web retail system, where it is automatically reconfigured into a Web server. The UDC software disconnects the server from the accounting system, scrubs it, loads an operating system, wires it into the network, connects it to the SAN, and brings it on line.
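The reprovisioning sequence above can be sketched as a simple orchestration routine. The step order follows the article's description; the data structures and function names are hypothetical, not H-P's actual UDC software:

```python
# Hypothetical sketch of the UDC reprovisioning steps described above:
# disconnect, scrub, load an OS, wire into the network, connect to the SAN,
# and bring the server on line. Illustrative only.

def reassign_server(server, source_pool, target_pool):
    """Move a server from one resource pool to another, step by step."""
    source_pool["servers"].remove(server)       # disconnect it from the old system
    server.clear()                              # scrub its local state
    server["os"] = target_pool["os_image"]      # load a fresh operating system
    server["network"] = target_pool["vlan"]     # wire it into the network
    server["storage"] = target_pool["san_lun"]  # connect it to the SAN
    server["online"] = True                     # bring it on line
    target_pool["servers"].append(server)
    return server
```

A usage example: dragging a box out of a hypothetical "financial" pool and dropping it into a "web" pool amounts to one call, after which the machine boots the web-server image and serves from the web pool's network and SAN assignments.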

Storage networks take off

Once consolidation has been achieved, GM's Scott says, an enterprise can take advantage of "storage-area networks, bigger servers, and more powerful technology across the board."

Indeed, a number of large enterprises, including GM, have already turned to SANs to increase storage utilization by 10 percent or more over conventional direct-attach storage. But while companies can always use more storage, the money to buy more disks simply isn't there, says Cox.

And considering that by 2005, 50 percent of storage vendors' revenue will come from software and services, compared to an 80-20 split today between hardware and software, hardware-centric companies are rejiggering their business models. Cox suggests that just as Lou Gerstner changed IBM from a hardware model to a software services model over the last decade, other companies like H-P, EMC, and Hitachi are making a similar transition.

All in all, the hottest trends in storage are software-centric, with the possible exception of the faster, fatter interconnects being introduced from the chip level up to storage network backplanes [see "Unsnarling the Interconnect Tangle"].

While already something of a fixture in the server domain, virtualization is just now moving into storage. The basic idea is the same: take separate storage pools and, with the help of some virtualization software, make the separate pools appear as one big ocean of storage.
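The pooling idea can be illustrated with a toy block-mapping layer that presents several separate pools as one contiguous logical volume. Everything here, class and field names alike, is a sketch of the concept rather than any vendor's actual product:

```python
# Minimal sketch of block-level storage virtualization as described above:
# separate pools are concatenated into one logical address space, and the
# virtualization layer maps each logical block to (pool, local block).

class VirtualVolume:
    def __init__(self, pools):
        # pools: list of dicts, each standing in for a separate storage device
        self.pools = pools

    @property
    def size(self):
        return sum(p["size"] for p in self.pools)

    def _locate(self, block):
        """Map a logical block number to (pool, local block number)."""
        for pool in self.pools:
            if block < pool["size"]:
                return pool, block
            block -= pool["size"]
        raise IndexError("block beyond end of virtual volume")

    def write(self, block, data):
        pool, local = self._locate(block)
        pool["blocks"][local] = data

    def read(self, block):
        pool, local = self._locate(block)
        return pool["blocks"].get(local)
```

To the user of the volume there is just one ocean of blocks; which physical pool actually holds a given block is the virtualization layer's business.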

These storage environments comprise systems and devices from different vendors all trying to work together. According to Cox, vendors such as EMC, IBM, H-P, and Veritas Software Corp. (Mountain View, Calif.) are developing software that allows users to manage myriad storage assets from a single screen.

Small and medium-sized companies have been reluctant to trade their networks' Ethernet protocols for the Fibre Channel protocols required by the interconnects now used in SANs. However, with the advent of new interconnect protocols such as iSCSI, which allows the small computer system interface (SCSI) disk protocol to run over Ethernet, SANs should become more affordable and easier to implement. The iSCSI standard will be blessed officially by the Internet Engineering Task Force next month; the first products based on the technology are already rolling out.
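The core idea of iSCSI, carrying SCSI commands over an ordinary TCP/Ethernet byte stream instead of Fibre Channel, can be illustrated with a toy framing routine. This is a deliberately simplified length-prefixed framing, not the actual iSCSI PDU layout defined by the IETF standard; only the READ(10) command layout below follows the real SCSI format:

```python
# Toy illustration of the iSCSI idea: a SCSI command descriptor block (CDB)
# is carried as the payload of an ordinary byte stream. The 4-byte length
# prefix here is a simplification, NOT the real iSCSI PDU header.

import struct

def frame_cdb(cdb: bytes) -> bytes:
    """Prefix a SCSI CDB with its length for transport over a byte stream."""
    return struct.pack(">I", len(cdb)) + cdb

def unframe(stream: bytes) -> bytes:
    """Recover the CDB from a framed message."""
    (length,) = struct.unpack(">I", stream[:4])
    return stream[4:4 + length]

# READ(10) CDB: opcode 0x28, flags, 4-byte logical block address,
# group number, 2-byte transfer length, control byte — 10 bytes in all.
read10 = struct.pack(">BBIBHB", 0x28, 0, 1024, 0, 8, 0)
assert unframe(frame_cdb(read10)) == read10
```

Because the transport is plain TCP, the same command can traverse any Ethernet switch or Internet router, which is exactly what makes SANs built this way affordable for smaller shops.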

Home remedy

It's clear that a Bloody Mary of consolidation and two aspirin won't be enough to get companies consuming the hard stuff again. Software, though, is another story.

IBM's Dean suggests that popular consumer technologies such as instant messaging [see "IM Means Business," Spectrum, November, pp. 28-32], PDAs, and the like are beginning to infiltrate the business world, where they will leverage the power of enterprise networks and revolutionize the IT landscape.

"Business won't drive the need for new technology. Entertainment and personal use will," says Dean, who believes that a combination of enterprise software and corporate networks will supercharge mundane information gadgets. "I would be much more productive if, when I walked into my office, my PDA would automatically update the changes to my calendar or business events that happened while I was asleep. That's not the hardware; this is information management, the software."

To Probe Further

For more on next-generation interconnects such as iSCSI, Hypertransport, and PCI Express, read "Unsnarling the Interconnect Tangle"

James H. Morris, dean of the School of Computer Science at Carnegie Mellon University, has an irreverent take on "The Chip Glut"

For more on Hive computing, go to https://www.tsunamiresearch.com/
