
In the 50 years since Gordon Moore published his prediction about the future of the integrated circuit, the term “Moore’s Law” has become a household name. It’s constantly paraphrased, not always correctly. Sometimes it’s used to describe modern technological progress as a whole.

As IEEE Spectrum put together its special report celebrating the semicentennial, I started a list of key facts that are often overlooked when Moore’s Law is discussed. Here they are (sans animated gifs):

1. Moore’s forecast changed over time. Gordon Moore originally predicted the complexity of integrated circuits—and so the number of components on them—would double every year. In 1975, he revised his prediction to a doubling every two years. 

2. It’s not just about smaller, faster transistors. At its core, Moore’s prediction was about the economics of chipmaking: building ever-more-sophisticated chips while driving down the manufacturing cost per transistor. Miniaturization has played a big role in this, but smaller doesn’t necessarily mean less expensive—an issue we’re beginning to run into now. 

3. At first, it wasn’t just about transistors. Moore’s 1965 paper discussed components, a category that includes not just transistors, but other electronic components, such as resistors, capacitors, and diodes. As lithographer Chris Mack notes, some early circuits had more resistors than transistors.

4. The origin of the term “Moore’s Law” is a bit murky. Carver Mead is widely credited with coining the term “Moore’s Law,” but exactly when and where it was first used remains unclear. 

5. Moore’s Law made Moore’s Law. Silicon is a remarkable material, but maintaining Moore’s Law for decades was hard work, and it’s getting harder. As historian Cyrus Mody argues, the idea of Moore’s Law kept Moore’s Law going: it has long been a coordinating concept and common goal for the widely distributed efforts of the semiconductor industry.
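The two cadences in Moore’s forecast (item 1 above) are easy to put into numbers. Here is a minimal sketch of the 1975 version, a doubling every two years; the starting count and years are illustrative assumptions chosen for the example, not figures from Moore’s papers:

```python
def projected_transistors(base_count, base_year, year, doubling_period=2):
    """Project a transistor count forward from a base year,
    doubling every `doubling_period` years (Moore's 1975 cadence;
    pass doubling_period=1 for his original 1965 forecast)."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# Illustrative example: starting from a hypothetical 275,000-transistor
# chip in 1985, a doubling every two years crosses 1 million transistors
# in roughly four years.
print(round(projected_transistors(275_000, 1985, 1989)))  # → 1100000
```

Note how much the 1975 revision matters: over a decade, a yearly doubling predicts a 1,024× increase, while a two-year doubling predicts only 32×.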


The First Million-Transistor Chip: the Engineers’ Story

Intel’s i860 RISC chip was a graphics powerhouse

[Photo: Intel’s million-transistor chip development team—twenty people crowded into a cubicle, the man in the center seated holding a silicon wafer full of chips]

In San Francisco on Feb. 27, 1989, Intel Corp., Santa Clara, Calif., startled the world of high technology by presenting the first ever 1-million-transistor microprocessor, which was also the company’s first such chip to use a reduced instruction set.

The number of transistors alone marks a huge leap upward: Intel’s previous microprocessor, the 80386, has only 275,000 of them. But this long-deferred move into the booming market in reduced-instruction-set computing (RISC) was more of a shock, in part because it broke with Intel’s tradition of compatibility with earlier processors—and not least because after three well-guarded years in development the chip came as a complete surprise. Now designated the i860, it entered development in 1986 about the same time as the 80486, the yet-to-be-introduced successor to Intel’s highly regarded 80286 and 80386. The two chips have about the same area and use the same 1-micrometer CMOS technology then under development at the company’s systems production and manufacturing plant in Hillsboro, Ore. But with the i860, then code-named the N10, the company planned a revolution.
