Gordon Moore: The Man Whose Name Means Progress

The visionary engineer reflects on 50 years of Moore’s Law


It’s Good To Be Gordon: Since retiring from Intel, Gordon Moore has focused on philanthropy through the Gordon and Betty Moore Foundation. He stands here in the backyard of his home in Hawaii. Photo: Olivier Koning

Gordon Moore pioneered the integrated circuit and cofounded the chip giant Intel; in retirement he has focused on science- and technology-oriented philanthropy. But thanks to an article he published in April 1965 in Electronics magazine, he’s known most widely for Moore’s Law, the prediction that has reflected—and helped drive—steady and staggeringly fast progress in computing technology. In preparation for the 50th anniversary of Moore’s prediction, IEEE Spectrum Associate Editor Rachel Courtland visited the man himself at his home on Hawaii’s Big Island.

Rachel Courtland: It’s been 50 years since the article came out.

Gordon Moore: It’s hard to believe. I never would have anticipated anyone remembering it this far down the road.

R.C.: Why is that?

G.M.: At the time I wrote the article, I thought I was just showing a local trend. The integrated circuit was changing the economy of the whole [electronics] industry, and this was not yet generally recognized. So I wrote the article to try to get the point across—this is the way the industry is going to get things really cheap.

R.C.: At that point, the integrated circuit was still fairly new.

G.M.: The integrated circuit had been around a few years. The first few had hit the market with as many as about 30 components on the chip—the transistors, resistors, and so forth. I looked back to the beginning of the technology I considered fundamental—the planar transistor—and noticed that the [number of components] had about doubled every year. And I just did a wild extrapolation saying it’s going to continue to double every year for the next 10 years.

And it proved to be amazingly correct. I had a colleague who saw that and dubbed this Moore’s Law. It’s been applied to far more than just semiconductors. Sort of anything that changes exponentially these days is called Moore’s Law. I’m happy to take credit for all of it.
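To make the scale of that extrapolation concrete, here is a short Python sketch of annual doubling over ten years. The starting count of roughly 30 components and the 1965–1975 window are taken from this interview; the figures in the original Electronics article may differ.

```python
# Back-of-the-envelope illustration of the extrapolation Moore describes:
# start from ~30 components per chip and double every year for ten years.
components = 30
for year in range(1965, 1976):
    print(f"{year}: ~{components:,} components per chip")
    components *= 2

# Ten doublings multiply the count by 2**10 = 1024,
# so ~30 components become roughly 30,000 by 1975.
```

The point of the exercise is how quickly compounding runs away: a modest-sounding "doubles every year" turns a 30-component chip into a 30,000-component chip in a single decade.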

R.C.: You spoke to a colleague of mine after winning the 2008 IEEE Medal of Honor, and I believe you told her that you didn’t want Moore’s Law to be your legacy. You’d moved on to other things.

G.M.: Well, I couldn’t even utter the term “Moore’s Law” for a long time. It just didn’t seem appropriate. But as it became something that almost drove the semiconductor industry rather than just recording its progress, I became more relaxed about the term.

R.C.: How long did it take to come to terms with having a law named after you?

G.M.: Oh, 20 years or so. It really took a long time. But [now] it is well established. A while back I googled “Moore’s Law” and “Murphy’s Law” and discovered that Moore’s Law had more references than Murphy.

R.C.: Did that feel like a coup of some sort?

G.M.: I think so. It’s about as profound a law [as Murphy’s Law] too.

R.C.: Coming from a science background, when I think of laws, I think of ironclad, mathematically grounded laws of nature. And Moore’s Law is…

G.M.: It’s not a law in any real respect. It was an observation and a projection.

R.C.: Technological improvements are nothing new, but the rapid progress that’s been made under Moore’s Law has been pretty special. Is there something fundamentally different about the nature of silicon?

Moore’s Big Move: Moore wrote his seminal 1965 paper while working at Fairchild Semiconductor. Just three years later, he and colleague Robert Noyce left to cofound Intel. Photo: Intel

G.M.: The semiconductor technology has some unique characteristics that I don’t see duplicated many other places. By making things smaller, everything gets better. The performance of devices improves; the amount of power dissipated decreases; the reliability increases as we put more stuff on a single chip. It’s a marvelous deal.

I used to give talks about how other industries might have progressed. You know, had the auto industry made progress at the same rate [as silicon microelectronics], you would have gotten a million miles per gallon of fuel, had cars that could go several hundred thousand miles an hour. It’d be more expensive to park [one] downtown for the night than to buy a new Rolls-Royce. And one of the members of the audience pointed out, yeah, but it’d only be 2 inches long and a half-inch high; it wouldn’t be much good for your commute.

R.C.: You’ve predicted the end of Moore’s Law several times in the past. How long do you think it will continue?

G.M.: Well, I have never quite predicted the end of it. I’ve said I could never see more than the next couple of [chip] generations, and after that it looked like [we’d] hit some kind of wall. But those walls keep receding. I’m amazed at how creative the engineers have been in finding ways around what we thought were going to be pretty hard stops. Now we’re getting to the point where it’s more and more difficult, and some of the laws are quite fundamental. I remember we had Stephen Hawking, the famous cosmologist, in Silicon Valley one time. He gave a talk, and afterward he was asked what he saw as the limits to the integrated circuit technology.

Now this is not his field, but he came up with two things: the finite velocity of light and the atomic nature of materials. And I think he’s right. We’re very close to the atomic limitation now. We take advantage of all the speed we can get, but the velocity of light limits performance. These are fundamentals I don’t see how we [will] ever get around. And in the next couple of generations, we’re right up against them.

R.C.: What happens then, once you’ve reached those limits?

G.M.: Well, things change when we get to that point. No longer can we depend on making things smaller and higher density. But we’ll be able to make several billion transistors on an integrated circuit at that time. And the room this allows for creativity is phenomenal. Now there are other technologies that are proposed to extend beyond what we can do with silicon. Some of the things coming out of nanotechnology may have a role to play, and materials like graphene, a single layer of carbon hexagons, are very interesting. I’m not close enough to predict that any of them is going to be successful, but they have a tough competitor. [Multiple] billion transistors on a silicon chip is hard to beat.

R.C.: So do you think the kind of progress we expect from chips will change?

G.M.: Some things will change. We won’t have the rate of progress that we’ve had over the last few decades. I think that’s inevitable with any technology; it eventually saturates out. I guess I see Moore’s Law dying here in the next decade or so, but that’s not surprising.

R.C.: Do you think the way we consume electronics will change as Moore’s Law comes to a close?

G.M.: I don’t think it’s likely to change much. As long as the new products offer incremental capability, I think they will replace the older ones pretty rapidly. When we run out of ideas of what to add, then people may decide they don’t need a new one every year and hang on to the same piece of equipment for three, four, five years. That’ll slow down the industry quite a bit. But I think it’s inevitable that something like that occurs.

R.C.: There are the fundamental physical limits—the atomic scale, the speed of light—and then there’s also the cost associated with fabricating smaller and smaller transistors. Which do you think we’ll hit first? Is it going to be the cost or the fundamental physical limits? I guess they’re tied together.

G.M.: They really are, yeah. Making things smaller is increasingly expensive. The cost of fabs for the newest technology nodes is absurd. It’s hard to think Intel started with [US] $3 million total capital. Now you can’t buy one tool, you can’t even install one tool for that much, I don’t think. The machines have gotten a lot more expensive and complex. On the other hand, their productivity in terms of transistors out per unit time has increased dramatically. So we can still afford to build a few fabs to utilize the modern technology.

We’ve had a lot of companies decide it was too expensive to move to the next generation already. There are only a few of us in the world that are investing in state-of-the-art fab facilities today. And I don’t see that number changing much over the next generation or two.

R.C.: Your initial prediction was largely based on the idea that the cost of each component on a chip was decreasing. So is that going to be the thing that decides it in the end? It’s an economic law, so it’ll have an economic demise?

G.M.: I think it’s going to be a technological demise rather than an economic one. People will continue to squeeze cost out of the products for quite a while after they can’t make them any smaller. I’m sure that’s happening already.

R.C.: I told a few people that I was going to meet you today, and I asked them what questions I should ask. Some just sort of laughed and said, “Can you ask him how we get out of this mess?” Because they’re all struggling with these technological issues.

G.M.: Whoo. Well, you could always retire and move to Hawaii.

R.C.: I think they’re trying to get to that point.

G.M.: Yeah, well, it’s the nature of the business. There aren’t many easy businesses, and this certainly isn’t one of them.

This interview has been edited for length and clarity. It originally appeared in print as “The Law That’s Not a Law.”
