Future Transistors, Plastic Processors, and 3D Chips

IEEE Spectrum’s biggest semiconductor headlines of 2022


Samuel K. Moore is IEEE Spectrum’s semiconductor editor.

Illustration: a cake with a candle in the shape of a transistor symbol; a slice has been taken out of the cake, and a fork lies next to it. Lisa Sheehan

“Those who do not know their history are doomed to repeat it.” With the phenomenal success of the transistor’s first 75 years, it feels like maybe we should both know and repeat. That seems to have been the reaction you (dear devoted IEEE Spectrum reader) had, judging by your semiconductor article consumption. You dove deep with us into the inner workings of the very first transistor, but you also wanted to know what comes next. In between, you took some time to appreciate how fundamental forces are smashing up big systems-on-chip, what an injection of government money could do for chip manufacturing in the United States, and some stranger stuff.

So to recap the semiconductor stories with the highest traffic numbers from this year, we’ve put together this set of highlights:

The Transistor at 75

It’s hard to overstate—considering the utter ubiquity and profound significance of electronic technology in people’s lives today—the importance of the transistor, which was invented in December 1947. After all, we likely produced 2 billion trillion of them this year alone. Yet there was only one in 1947, and it worked pretty weirdly. How we got from 1 to 2 billion trillion per year is a separate story. But what Spectrum readers were most interested in was how we’re going to make future billion trillions. The next step (well, the next-next step for some companies) seems pretty clear: the complementary FET, or CFET. That’s a PMOS and an NMOS transistor built one atop the other, likely all at once, potentially cutting the size of logic cells in half. What comes after that? By the transistor’s 100th anniversary, our experts say, there will likely be variations on that theme.

The First High-Yield, Sub-Penny Plastic Processors

I’ve been listening to promises of cheap, flexible electronics my whole career. These could add intelligence to every currently dumb object on store shelves. So where’s my smart banana? This year, we finally got an answer: Until now, nobody had built a working plastic processor that could be made in the billions for less than a penny each. Even the simplest industry-standard microcontrollers are too complex to make on plastic in bulk, because the yield of good “chips” (for want of a better word) is super low. Engineers in Illinois and England came to the rescue with an architecture simple enough to breach the one-penny barrier.

Single-Chip Processors Have Reached Their Limits

Coffins require a number of nails, but this year two more were driven into the one holding the concept of a PC or server CPU made from a single piece of silicon. Apple revealed that the M1 Ultra, now found inside the Mac Studio, is a variant of the M1 that effectively fuses two chips into one using advanced packaging. Nvidia delivered similar news about its Grace CPU. Processor designers are giving up on the idea of one big chip mostly because they’ve run out of silicon. Interconnecting two “chiplets” can double the transistor count without a move to a more advanced chipmaking process.

These 5 Charts Help Demystify the Global Chip Shortage

The chip shortage that squeezed automakers and many others so hard in 2020 and 2021 was very much still with us in one form or another in 2022. Supply chains of all kinds were still showing their fragility, and semiconductors were no exception. The shortage was even showing up in the maker space. But a lot of the shortage talk was amplified by U.S. government officials, who were trying to get some major manufacturing legislation passed. (More on that below.)

Intel’s Take on the Next Wave of Moore’s Law

In an interview in early December, Ann B. Kelleher, general manager of technology development at Intel, really tried to make me understand how a concept called system technology co-optimization (STCO) was going to save Moore’s Law. It’s not that I wasn’t buying it; I was. (And Spectrum has been writing about STCO in one way or another for nearly two decades.) It’s just that it’s an idea that really needs an example to go with it. In STCO, you start with the software workload, figure out the functions needed for it, and then determine which functions should be implemented using which semiconductor manufacturing technology. All of those things feed back on one another, and so must be “co-optimized.” It’s possible to do that because you can now break a system-on-chip into functional chiplets and stitch them together using 3D stacking and other advanced packaging, so they act like a single large chip. As for the example, Kelleher recommends looking at the Ponte Vecchio accelerator behind the Aurora supercomputer. It’s made up of 47 pieces of silicon made using four different processes, three of which weren’t even Intel’s.

3 Ways 3D Chip Tech Is Upending Computing

This was the year 3D chip packaging showed up in force, at least in the high-end logic arena. AI supercomputer startup Graphcore used some TSMC 3D chip-stacking tech to improve the flow of power to its chip in a way that could lead to a 40 percent speedup in training neural networks. AMD used a different TSMC tech to give its compute chiplets a major memory boost. And Intel threw every chip packaging tech it had at Ponte Vecchio to build a 47-chiplet monster. As you’ve noticed if you’ve read any of the other items in this post, this is the way of the future.

U.S. Passes Landmark Act to Fund Semiconductor Manufacturing

Legislation aimed at increasing semiconductor manufacturing in the United States finally became law after a multiyear journey that saw many mutations and delays. The CHIPS and Science Act, part of a larger $280 billion package, provides about US $52 billion for new or expanded facilities that make semiconductors or chipmaking equipment. And it arrived amidst efforts by other nations and regions to boost chip manufacturing, an industry increasingly seen as a key to economic and military security. Now, the hard part—figuring out who gets how much money.

A Transistor for Sound Points Toward Whole New Electronics

Scientists have been chasing electronic devices based on topological materials, which basically protect the flow of current from disturbance thanks to the same math that tells you a donut and a coffee mug are fundamentally the same shape but a cereal bowl and a coffee mug are not. Harvard scientists took a detour they hope will lead to some breakthroughs, building topological transistors that modulate sound instead of electrons. The result is a weird-looking honeycomb system that seems like it belongs in the inner workings of Iron Man’s suit.

RISC-V AI Chips Will Be Everywhere

In 2022, the open-source instruction-set architecture RISC-V began what analysts see as a rapid penetration of the machine-learning market. Early in the year, there was already silicon from startups aimed at servers, like Esperanto Technologies, and at “edge” computing, like Kneron. Growth should continue to accelerate now that the organization governing RISC-V has adopted a standard set of vector instructions.

Micron is First to Deliver 3D Flash Chips with More Than 200 Layers

While logic technology is still figuring out its path to 3D construction, flash memory has been there for a while. Nevertheless, 2022 saw an important barrier passed: chips with more than 200 layers of memory cells. Boise, Idaho-based Micron Technology got there first with a 232-layer NAND flash-memory chip, and it’s been a tight race: SK Hynix says it is shipping samples of a 238-layer TLC product that will be in full production in 2023.
