What AMD Learned From Its Big Chiplet Push

Sam Naffziger was an early evangelist for breaking up silicon chips


Samuel K. Moore is IEEE Spectrum’s semiconductor editor.

Illustration: Stuart Bradford

Over the last five years, processors have gone from being single pieces of silicon to collections of smaller chiplets that collectively act as if they're one big chip. This approach means that each of a CPU's functional pieces can be built using the technology that suits it best. Sam Naffziger, a product-technology architect at AMD, was an early proponent of this approach. Naffziger recently answered five chiplet-size questions from IEEE Spectrum on the topic.

Sam Naffziger

Sam Naffziger is a senior vice president, corporate fellow, and product-technology architect at AMD and an IEEE Fellow. He is the recipient of the IEEE Solid-State Circuits Society’s 2023 Industry Impact Award.

What are the main challenges you’ve seen for chiplet-based processors?

Sam Naffziger: We started out five or six years ago with the EPYC and Ryzen CPU lines. And at the time, we cast a pretty broad net to find what package technologies would be best for connecting the die [small block of silicon]. It’s a complex equation of cost, capability, bandwidth densities, power consumption, and also manufacturing capacity. It’s relatively easy to come up with great package technologies, but it’s a completely different thing to actually manufacture them in high volume, cost effectively. So we’ve invested heavily in that.

How might chiplets change the semiconductor-manufacturing process?

Naffziger: That’s definitely something that the industry is working through. There’s where we’re at today, and then there’s where we might go in 5 to 10 years. I think today, pretty much, the technologies are general purpose. They can be aligned to monolithic die just fine, or they can function for chiplets. With chiplets, we have much more specialized intellectual property. So, in the future one could envision specializing the process technology and getting performance benefits, cost reductions, and other things. But that’s not where the industry is at today.

How will chiplets affect software?

Naffziger: One of the goals of our architecture is to have it be completely transparent to software, because software is hard to change. For example, our second-generation EPYC CPU is made up of a centralized I/O [input/output] chiplet surrounded by compute dies. When we went to a centralized I/O die, it reduced memory latency, eliminating a software challenge from the first generation.

“One of the goals of our architecture is to have it be completely transparent to software, because software is hard to change.”

Now, with the [AMD Instinct] MI300—AMD’s upcoming high-performance computing accelerator—we’re integrating both CPU and GPU compute dies. The software implication of that sort of integration is that they can share one memory address space. Because the software doesn’t have to worry about managing memory, it’s easier to program.

How much of the architecture can be separated out onto chiplets?

Naffziger: We’re finding ways to scale logic, but SRAM is more of a challenge, and analog stuff is definitely not scaling. We’ve already taken the step of splitting off the analog with the central I/O chiplet. With 3D V-Cache—a high-density cache chiplet 3D-integrated with the compute die—we have split off the SRAM. And I would expect in the future there will be lots more of that kind of specialization. The physics will dictate how fine grained we can go, but I’m bullish about it.

What has to happen for mixing and matching different companies’ chiplets into the same package to become a reality?

Naffziger: First of all, we need an industry standard on the interface. UCIe, a chiplet interconnect standard introduced in 2022, is an important first step. I think we’ll see a gradual move towards this model because it really is going to be essential to deliver the next level of performance per watt and performance per dollar. Then, you will be able to put together a system-on-chip that is market or customer specific.

This article appears in the July 2023 print issue as “5 Questions for Sam Naffziger.”
