What is it about technology and eponymous "laws"? In between discovering the electron and putting 50 million transistors on an integrated circuit, engineers and their predecessors have poured out a torrent of mathematical observations, pithy pronouncements, and even a few enduring self-fulfilling prophecies.
Some are sublime, and a lot more are silly. But you needn't speculate wildly on the burning issue of just how many there are of each: Sturgeon's Law, named for science fiction author Theodore Sturgeon, posits that "90 percent of everything is crud." Silly or sublime? You be the judge.
Mixed among the musings are a few laws that actually are laws, really and truly. For technology, they define how things operate. Ohm's Law relates the voltage across a component to the product of its resistance and the current through it, and Kirchhoff's Laws govern the sums of currents and voltages in a circuit; together they are the bedrock of electrical engineering. But the laws that have reverberated through mainstream culture and even the popular consciousness aren't really laws at all, but folksy rules of thumb.
Murphy's Law was first uttered by the military/aerospace engineer Edward A. Murphy Jr., who is said to have declared after an improbably botched test in 1949 that "if there are two or more ways to do something, and one of those ways can result in a catastrophe, then someone will do it." And then there's Moore's Law. People who don't know the difference between CMOS and Spanish moss generally have heard of it. A little more than half a century into the solid-state age, a half dozen or so rule-of-thumb "laws" have stood out. How have they fared? Let's take a look.
1. MOORE'S LAW: The number of transistors on a chip doubles annually
The mother of all engineering laws, Moore's Law was put forward in a paper by Intel Corp. legend Gordon E. Moore 38 years ago, though he never used the word "law"; he simply predicted an annual doubling of the number of transistors that could be fabricated on a semiconductor chip. The paper, in the April 1965, 35th-anniversary issue of Electronics, was titled "Cramming More Components onto Integrated Circuits." At the time, Moore was director of the research and development laboratories at the Fairchild Semiconductor Division of Fairchild Camera and Instrument Corp. The entire article ran just three and a half pages, including two charts and a corny cartoonlike drawing of a shopper eyeing a sales booth for "handy home computers" (yes, Moore predicted those as well; it was arguably the more prescient insight). He noted the then-brief historical trend in transistor fabrication, observed that no technical barriers stood in the way of further improvements to the enabling technology, photolithography, and reasoned that the trend would continue for at least another decade, raising the chip transistor count to 65 000.
In fact, by 1975 the leading chips had perhaps one-tenth as many components as Moore had predicted. The doubling period stretched to an average of 17 months over the decade ending in 1975, then slowed to 22 months through 1985 and 32 months through 1995. In recent years it has quickened again, to a relatively peppy 22 to 24 months.
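The gap between prediction and reality comes down to compound growth, which a quick back-of-envelope calculation makes concrete (a sketch in Python; the function and starting figures are mine, for illustration, not from the article):

```python
# Back-of-envelope: transistor growth at a given doubling period.
# The figures below are illustrative, not industry data.

def transistors(start_count, months_elapsed, doubling_period_months):
    """Transistor count after a span of months at a fixed doubling period."""
    return start_count * 2 ** (months_elapsed / doubling_period_months)

# A decade (120 months) of growth at the doubling periods quoted above:
for period_months in (17, 22, 32):
    factor = transistors(1, 120, period_months)
    print(f"doubling every {period_months} months: {factor:,.0f}x per decade")
```

At the annual pace Moore originally forecast, a decade means ten doublings, or a 1024-fold increase; stretch the period to 17 months and the same decade delivers far less, which is roughly why the 1975 counts fell an order of magnitude short.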
These statistics come from G. Dan Hutcheson, CEO of VLSI Research Inc., in Santa Clara, Calif., which compiles confidential industry data and releases it in aggregate form. Hutcheson, an economist, has, together with his engineer father, studied Moore's Law perhaps more intently than anyone else [see illustration]. "It's averaged every two years since the late 1970s, although Intel's PR department likes to average the earlier number with the later and call it 18 months," Hutcheson notes.
From the beginning, Moore concentrated on the economic underpinnings of the trend, a focus he has maintained ever since, in contrast to the view that technological possibility alone determines how quickly transistor density doubles. The paper noted that the cost per electronic component fell in inverse proportion to the number of components on simple circuits, but that diminishing returns set in as circuits grew more complex. In other words, a point would eventually come when it simply wouldn't pay to put more transistors on a chip.
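Moore's economic argument can be sketched with a toy cost model: spreading a chip's fixed cost over more components drives the per-component cost down, while yield losses from added complexity push it back up. The chip cost and defect figures below are invented for illustration; only the shape of the curve matters.

```python
import math

def cost_per_component(n, chip_cost=100.0, defect_rate=0.001):
    """Fixed chip cost spread over n components, divided by a yield
    that decays exponentially with complexity (a common simple
    yield model; parameters here are made up for illustration)."""
    yield_fraction = math.exp(-defect_rate * n)
    return (chip_cost / n) / yield_fraction

# Sweeping complexity shows the cost falling, bottoming out (near
# 1 000 components for these parameters), then climbing as yield
# losses overwhelm the savings from integration:
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} components -> ${cost_per_component(n):,.4f} each")
```

The minimum of that curve is exactly the "economic end" of integration Moore described: manufacturers add components until it no longer pays to add more.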
"If you do see an end [to the law], it will be an economic end, not a technical end," Hutcheson says. "One of the most famous 'won't work' predictions was made in 1988 by Erich Bloch, then head of IBM Corp.'s research division, when chip features were around one micrometer. He said Moore's Law wouldn't work [at feature sizes] under a quarter micron." And, of course, it is holding up nicely, thank you, at today's 0.1 µm.
Bloch left IBM not long after making that unfortunate prediction. Hutcheson credits Moore with having the economic basis of the law down cold but notes that he grossly underestimated the technical staying power of photolithography, thinking the industry would soon have to turn to electron-beam techniques to keep doubling transistor density.
Since then, Moore's stock answer has been that "no exponential trend lasts forever, but forever can be postponed," Hutcheson says. "We may one day have to go to some sort of nanotechnology, with self-assembly of molecules, and so on, and that might not show the same economics. But we have a long time; semiconductors will be around for another 15 years at least."
One particularly intriguing question raised by the law concerns its "legal" status: does it go beyond description to prescription? In other words, does the law merely describe reality, or does it create it? Hutcheson says it was Carver Mead, then at the California Institute of Technology, in Pasadena, and not Moore, who dubbed the observation a "law," and he did so many years after Moore's paper was published. (Mead was instrumental in developing the GaAs MESFET and a host of other inventions.)
The rest is history: the industry accepted the law as enunciated by Mead and incorporated it into a road map that set the bar for achievements in many areas--minimum line width, maximum wafer size, tool tolerances, the cleanliness of clean rooms. The technology seemed to take on a life of its own. Suppose the industry decreed that the doubling cycle must speed up by, say, 7 percent, and all the relevant disciplines set their sights that much higher and made the decree a reality. If that could happen, it would suggest the pace is governed by technological capability: that the law prescribes rather than describes.
No, say Moore, Hutcheson, and all the economists who study these things. The "law" reflects the economic constraints on the industry, above all, the yield rate (how many good chips are produced on a wafer) that obtains when producing the most complex chips of a given generation. Semiconductor manufacturers keep on adding elements to their circuitry until it no longer pays to add more; then they stop.
In other words, it all comes down to fabrication costs, which are spelled out in Rock's Law.