The Recession’s Silver Lining

How the semiconductor industry can use the recession to create the next technology renaissance

Illustration: Keith Negley

Countless research institutions contributed to the digital, wireless, and mobile technologies that underpin our modern world. But none contributed more than Bell Telephone Laboratories, which logged an astonishing share of the key advances of the 20th century, including the transistor, the cellphone, the digital signal processor, the laser, the Unix operating system, and motion-picture sound. We no longer have Bell Labs to fund research with long-term payback. That has prompted many to wonder: Who will pay for such research now, and where will it be done?

We say: Governments and corporations must share the burden, and they must do it in structured collaborations among universities, companies, and government agencies in which intellectual property is freely available to all participants.

We also say: the sooner we can get started, the better. The recession has left R&D spending in free fall. This year, the global semiconductor industry is expected to take in just US $200 billion—$50 billion less than in 2008. And times are really tough in the semiconductor-equipment industry, whose revenues will shrivel like a salted leech from $34 billion in 2007 down to a pitiful $10 billion in 2009.

In the United States, a few basic sciences are getting a reprieve, thanks to the federal stimulus package. Of the $787 billion designated, $10 billion went to the National Institutes of Health for life-sciences research; federal funding for the physical sciences, by contrast, has been in steady decline.

In the long run, however, even the life sciences are unlikely to benefit in any meaningful way from that load of cash. A one-time infusion helps, but it also creates a classic feast-or-famine problem: The money needs to be spent by September of next year. And because there’s no follow-up money to keep these programs going beyond that time, officials can’t start major long-term initiatives.

But the recession isn’t what’s causing this problem; it’s only revealing an intensifying trend in the semiconductor industry. Revolutionary innovation has been missing in action for about 40 years as the industry instead focused on incremental advances. The industry could get away with short-term research because those incremental advances got the companies where they needed to be, financially speaking.

Limiting that funding to incremental research is why there hasn’t been a “transistor moment” in 50 years. So, painful as it is, this economic gloom might actually turn out to be a good thing. It offers the industry, for the first time in decades, an opportunity to rethink its most basic strategies, down to the engine that keeps it all going—innovation.

Innovation has often been a catalyst for economic recovery. It happened in the 1930s, when DuPont invented one of the major materials of the 20th century: neoprene. Within two years of its introduction, the synthetic rubber was in every car and plane built in the United States, and 50 years after that it was in knee braces and wet suits. It happened again in the 1980s, when small steel mills like Geneva and Nucor rose from the ashes of Big Steel. Today many developed countries have stupendous R&D resources and infrastructures and are eager to use them to pursue very high-potential payoffs, especially in semiconductors. So the basic factors are in place to use this recession to establish a new model of semiconductor R&D, one that could usher in the next generation of innovation.

But there’s a problem. The innovation strategies that semiconductor companies large and small have developed over the past half century are grounded in the business practices and research conventions of a bygone era. Unless their strategies evolve to meet these changes, many of those companies will die a slow and avoidable death.

There is a way out. It just doesn’t look anything like the old way out, and it will make some of those companies uncomfortable, at least initially. At the Semiconductor Industry Association (SIA), in San Jose, Calif., we have developed a new model for innovation. Our model is counterintuitive—it asks companies to share intellectual property and invest in research that also benefits competitors, something that’s anathema to standard industry practice today. But our approach has been successfully tested by corporations like IBM, Intel, Micron, and Xilinx, among many others. When companies have embraced it, they’ve seen encouraging results. For example, one significant breakthrough can already be largely attributed to this model: the graphene-based BiSFET logic device, which promises to operate at a small fraction of the power of today’s typical devices. The concept is being developed by researchers at the University of Texas at Austin, and if it works as well as the simulations imply, it could change the world.

The BiSFET, we hope, is only the beginning. But it could be the end, if we can’t convince semiconductor executives of the value of our model. (And not everyone is on board with the idea of a “shared innovation environment.”) To better understand the barriers, we interviewed top management at some leading semiconductor companies and universities. Our subjects represented a broad cross-section of the industry, some of which use our model, and some of which do not. These institutions represent a perfect microcosm of the stumbling blocks—and the rewards for letting go of the stifling old ways and making the leap.

Big changes are afoot. The semiconductor industry has been the greatest single source of industrial innovation in recent history, but many of its advances have been incremental: the shift to high-k dielectric materials, for example, or the move from aluminum to copper for on-chip interconnects. As the old saw has it, after you’ve gone from the buggy to the car, building a better buggy whip won’t do you any good. In electronics, building a better triode won’t help. What the industry needs now is something more like the shift from vacuum tubes to semiconductors. That’s because two trends are driving the semiconductor industry toward a momentous inflection point.

First, the customer is changing. Several hundred million individual consumers, many of them in the developing world, have joined the global economy in just the past few years. Individuals have replaced companies and governments as the dominant buyers of cellphones, laptops, digital cameras, and other high-tech goodies. In fact, large corporate IT departments are no longer the world’s primary technology consumers [see time line, "Vectors of Change"]. These hundreds of millions of customers are atomized into many fragments, they don’t have monolithic tastes, and most important, they’re much more cost-conscious than big companies are.

Second, the metal-oxide-semiconductor transistor—the basic building block of the entire edifice of modern semiconductors—is approaching fundamental physical limits. The “next big thing” won’t be a linear progression of faster and faster computing and communication, laid out in road maps of the sort we’ve issued over the past 20 years. In fact, we don’t know what this next big thing is going to look like, because it could come from anywhere.

Today the field has become fantastically inter- and multidisciplinary; engineering will never return to its isolation in a bubble of mechanics and computers. And therein lies the rub: engineers and their companies need to be fluent in a growing panoply of languages, among them neuroscience, biology, and geophysics. The next great innovation might come from a neuroscientist whose circuits can mimic the functions of a synapse or from a geologist whose algorithms model the flow of magma inside volcanoes.

However, no one company can be expected to keep tabs on every significant development in academic science and technology. Indeed, conventional semiconductor R&D and strategic marketing departments are often mired in short-term firefights, product deadlines, and meeting the next quarter’s financial goals.

We think the SIA model can do for research in the 21st century what the Bell Labs model did in the 20th. With our model, the industry can draw on a kind of nationwide “neural network” of academic research. To understand what we’re proposing, you need a quick lesson in university-industry interaction as it has existed for the past five decades.

In the existing system, a company consults individual professors with specific research questions, or it invests in local colleges mainly to burnish its image. In the first model, a company hires a star professor or researcher as a consultant and might also fund one or two graduate students for a small, proprietary project. The typical scale of this engagement is $50 000 to $100 000. In the second model, a company invests similar or potentially larger amounts to build goodwill in the community and to supplement its local talent base for recruitment.

These partnerships have yielded incremental advances. But to get to the next big paradigm, we need to innovate the way we innovate. To that end, we have developed our research model, exemplified here by the Nanoelectronics Research Initiative (NRI), one of 11 national centers we have set up to tackle the technology showstoppers that lie ahead.

Here’s how they work. The research takes place not inside one particular company but across multiple universities and various disciplines, all tied together with a common goal. Each center is funded at several million dollars a year, with about 50 universities, 250 faculty, and 450 graduate and postdoctoral students. Companies “buy in” to the research conducted there and then share early results. All the interdisciplinary research centers operate with a nonexclusive intellectual property (IP) model. What that means is that all sponsoring companies have the right to use the IP without paying any royalties, but the university owns it. More on that later.

For NRI specifically, the funding comes from five companies, two U.S. federal agencies, and four state governments. Together, these organizations have invested a total of $20 million per year for the past four years. The NRI focuses on radically new semiconductor logic devices, ones not based on metal-oxide-semiconductor field-effect transistors, or MOSFETs, as virtually all modern chips are. In particular, NRI-hosted research has already produced the new device mentioned earlier: the BiSFET, or bilayer pseudospin field-effect transistor (not to be confused with the bistable field-effect transistor, or BISFET).

Some background: One of the most urgent needs in technology today is for ultralow-power devices. Vacuum tubes could never have been used to build a personal computer. A cellphone or MP3 player built with the bipolar-junction or n-type MOS technology that was common 30 years ago would draw so much power that it couldn’t run on batteries. All digital information processing is based on variations in electronic charge (for instance, in the capacitor of a dynamic RAM cell), which correspond to a 1 or a 0. Manipulating charge requires power, and that power generates heat. Just as power consumption drove the previous transitions, from vacuum tubes to solid-state devices to integrated circuit chips, it will drive the next one.
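To make the connection between charge, voltage, and power concrete, here is the textbook first-order expression for dynamic power in charge-based logic; it is a standard approximation, not a figure drawn from any particular NRI study:

```latex
P_{\text{dynamic}} \approx \alpha \, C \, V_{\text{DD}}^{2} \, f
```

Here α is the fraction of nodes that switch each clock cycle, C is the switched capacitance, V_DD is the supply voltage, and f is the clock frequency. Because the supply voltage enters as a square, lowering it is by far the most effective lever on power consumption, and, as described below, it is exactly the lever the BiSFET pulls.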

The BiSFET, described by Sanjay Banerjee and Leonard Franklin Register and their colleagues at UT Austin, is in the earliest research phase but offers tremendous potential. The BiSFET could substitute for the MOSFET in logic and memory applications. Like a MOSFET, it can switch and it can amplify. Where the BiSFET stands alone, however, is in its phenomenal power parsimony: It needs only one-hundredth to one-thousandth the power of a standard MOSFET, mainly because it would switch at much lower voltages than a MOSFET does.

BiSFETs will not be drop-in replacements for MOSFETs, but in principle, BiSFET-based circuits could replace CMOS circuits in any application. Behind the BiSFET is a theoretical concept that’s not new in physics but had been completely beyond the ken of the semiconductor industry. Unlike the silicon channel in a MOSFET, the BiSFET channel is based on graphene, an exotic material consisting of single atomic sheets of carbon; think of these layers as unrolled carbon nanotubes. And unlike a CMOS field-effect transistor, which has three terminals (source, drain, and gate), the BiSFET has four: source, drain, and a top and bottom gate, which sandwich two electrically coupled layers of graphene between them. Though the two gates function as one, they must be biased differently to create electrons in one graphene layer and positively charged holes in the other. Interactions between these electrons and holes lead to what’s known as an electron-hole condensate, an esoteric quantum state of matter in which the particles tend to lose their individuality and display collective behavior. The basic idea has been around for decades, but such condensates had been realized only in exotic materials and at cryogenic temperatures.

The proposed graphene devices would require just 25 millivolts, a scant one-fortieth of the operating voltage of today’s “low-power” chips. They could operate at room temperature and consume roughly a thousandth of the power of current devices. The BiSFET is as yet only a concept based on novel predicted physics in a novel material system; we still need experimental verification of the underlying phenomena on which the device is based.
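As a rough sanity check on those figures, here is a minimal back-of-envelope sketch. It assumes the textbook E = C·V² switching-energy scaling and an arbitrary, purely illustrative node capacitance (the ratio does not depend on it); on that assumption, dropping the supply from about 1 volt to 25 millivolts cuts the energy per logic transition by roughly a factor of 1600, consistent with the “thousandth of the power” estimate.

```python
# Back-of-envelope comparison of switching energy at two supply voltages.
# Assumes the textbook approximation E = C * V**2 for the energy dissipated
# per logic transition; the capacitance value is an illustrative placeholder.

def switching_energy(capacitance_farads: float, supply_volts: float) -> float:
    """Approximate energy (in joules) dissipated by one logic transition."""
    return capacitance_farads * supply_volts ** 2

C = 1e-15  # 1 femtofarad, chosen only for illustration

e_cmos = switching_energy(C, 1.0)      # today's "low-power" CMOS, roughly 1 V
e_bisfet = switching_energy(C, 0.025)  # proposed BiSFET operating point, 25 mV

print(f"CMOS at 1 V      : {e_cmos:.2e} J per switch")
print(f"BiSFET at 25 mV  : {e_bisfet:.2e} J per switch")
print(f"Energy reduction : about {e_cmos / e_bisfet:.0f}x")  # ~1600x, on the order of 1000x
```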

The bottom line is that behind this breakthrough were NRI-assembled teams that included physicists, materials scientists, and electrical engineers specializing in device design. The successful application of graphene’s alchemical properties to semiconductor physics could have happened only within the interdisciplinary research architecture we have created.

In research, as in life, there’s no such thing as one size fits all. When we queried tech companies about our model, we found that few of them would be willing to adopt it. The impediments are often relics of a mind-set formed in the last century.

Two main arguments came up again and again: Technology managers said they did not want to share intellectual property or research with competitors, and they did not want to spend money on what they could learn by attending conferences. A more fundamental issue was that many companies, particularly ones forced into short-term strategies, do not consider university research an important part of their business strategy.

By definition, the research performed in a collaborative university environment is shared by many players, including competitors—and potential future competitors. One perceived nightmare scenario, for a corporation, is that of a university professor or student forming her own company to exploit the tech breakthrough. Why should a company invest in research that also benefits its rivals? In terms of time and money, IP is the proverbial “giant sucking sound.” Of course IP is critical, but what’s often misunderstood is that its value depends entirely on the maturity of the technology. Guarding product IP like Cerberus at the gates of hell is not necessarily a wise strategy, especially for early-stage research, which occurs years before an innovation can be brought to market.

The problem here is that semiconductor companies are behaving as if they were pharmaceutical companies. With pharmaceutical discoveries, the early-stage IP is the most important; it would be unthinkable to share the development costs of a Prozac or a Celexa with a competitor. But in the semiconductor industry, no early-stage IP is ever “ready to wear.” There’s lots of cutting, fitting, altering, refitting, and realtering before it’s ready for the runway. Xilinx chief technology officer Ivo Bolsens put it very well when he told us, “There are a hundred decisions and innovations that I will need to add before I can take an excellent academic idea and make it into a product.”

Consider carbon nanotubes. These basic building blocks can be used in many different ways to develop countless different technologies and products. Patenting something so basic would be akin to patenting a brick. Builders can use the same brick to make castles and cottages. The outcomes are vastly different and do not depend in any way on whether that builder has the patent on the brick. And in that sense, the BiSFET device is a stellar example of the kind of early IP that companies are so unwilling to share. No one has even created the device yet—it’s certainly not ready for commercialization. Like the brick, it could lead to a hundred different architectures. And we hope it will.

The other belief, that companies can gain access to early-stage R&D results at conferences, is even easier to dispel. What companies don’t understand is that by the time their researchers hear a result presented at a conference, it’s already too little, too late, and too limited. Too little, because the final results show only the tip of the iceberg; too late, because by the time it’s in a paper the research has already been picked over for two years; and too limited, and this is the most important point, because you see only the path that led to the positive outcome. You want to be engaged with the full research, not just a condensed summary boiled down to 20 PowerPoint slides and 20 minutes. Otherwise you miss all the paths that were tried and abandoned, and that knowledge alone is worth the price of admission: knowing which dead ends to avoid can save a company millions. Those kinds of negative results never get published at conferences.

Any company would be thrilled to achieve a 10 percent reduction in power between product generations. That number is typical of what evolutionary advances can accomplish at their best. Our national centers, by contrast, have enabled revolutionary, discontinuous advances in the past four to five years of a kind not seen in four or five decades.

With devices that perform far better than today’s and yet consume a thousandth of the power, we could drastically cut the consumption of the power-hungry server farms that run today’s critical Internet applications yet draw enough electricity for a small city. We could realize “green” residential and transportation systems, a huge opportunity—or perhaps even a necessity, given that the world in 2050 may need 28 terawatts of power, compared with the 15 TW we use today. We might enable a new generation of personal electronics that turn our beloved iPhones into dinosaurs. We might build implantable medical devices that never need external charging, which means they wouldn’t require invasive surgery just to change the battery. The breakthrough research in the centers may even enable radical concepts like “energy scavenging,” in which a chip survives entirely on power it draws from its surroundings, such as the movements of the person wearing the device.

But none of this will be possible until companies let go of their outdated notions and downright misconceptions.

The challenge today is finding sources of disruptive scientific innovation. At Bell Labs and the Xerox Palo Alto Research Center, the seeds were planted for today’s technology revolution. No one has the resources to replicate those institutions today, but we believe we can build an alternative model of innovation, one updated for the 21st century. It may very well be the key to an epochal change.

About the Author

Pushkar Apte is vice president of technology programs and George Scalise is president of the Semiconductor Industry Association. They describe the effort to push semiconductor R&D past the end of Moore’s Law in “The Recession’s Silver Lining.”
