Intel Shows Off Chip Packaging Powers

Three research directions should bind chiplets more tightly together

Illustration: Intel

Packaging has arguably never been a hotter subject. With Moore’s Law no longer providing the oomph it once did, one path to better computing is to connect chips more tightly together within the same package.

At Semicon West earlier this month, Intel showed off three new research efforts in packaging. One combines two of its existing technologies to more tightly integrate chiplets—smaller chips linked together in a package to form the kind of system that would, until recently, be made as a single large chip. Another adds better power delivery to dies at the top of a 3D stack of chips. And the final one is an improvement on Intel’s chiplet-to-chiplet interface called Advanced Interface Bus (AIB).

The first effort, dubbed Co-EMIB, is essentially a way of combining two existing Intel packaging technologies: EMIB (for embedded multidie interconnect bridge) and Foveros. The former bridges two chiplets over a short distance horizontally using a small piece of silicon embedded in a package’s organic substrate. The interconnect lines on silicon can be made narrower than on the organic substrate and can be packed together more tightly to form a high-bandwidth chip-to-chip connection. It’s used in systems like Intel’s Stratix 10 FPGA, which is actually an FPGA chiplet linked to two high-bandwidth DRAM and four high-speed transceiver chiplets in the same package.

Foveros is Intel’s 3D chip-stacking technology. It allows die-to-die connections spaced just 50 micrometers apart, making for dense, high-bandwidth vertical links. Through-silicon vias (or TSVs), conductors that pass vertically through the silicon of the bottom die, connect the stack to the package substrate.
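As a back-of-the-envelope illustration (the regular-grid layout is an assumption for the sake of the math, not an Intel specification; only the 50-micrometer pitch comes from the article), that spacing works out to roughly 400 vertical connections per square millimeter:

```python
# Rough density estimate for a regular grid of die-to-die connections.
# The 50-micrometer pitch is the figure cited above; the square-grid
# layout is an illustrative assumption.
pitch_um = 50
bumps_per_mm = 1000 / pitch_um          # connections along 1 mm of die
bumps_per_mm2 = bumps_per_mm ** 2       # connections in 1 square mm
print(f"~{bumps_per_mm2:.0f} vertical connections per mm^2")  # ~400
```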

Combining the two into Co-EMIB lets two or more Foveros stacks communicate through high-density EMIB bridges to build more complex systems. That might seem an obvious thing to do. But with connections only micrometers apart, an organic substrate that is hard to make perfectly planar, and a fairly large area to pattern, it was actually quite difficult.

“The scale of it becomes more and more critically [dependent] on how you can hold all your dimensional tolerancing through the assembly process,” says Johanna Swan, a fellow at Intel’s components research and technology development group. “The process tricks become more important in order to manage the size of structures. We’re able to show there’s a path for maintaining that dimensional stability over a larger area.”

The second research effort, Intel’s Omnidirectional Interconnect (ODI), essentially allows for EMIB-like vertical connections. These are larger than ordinary through-silicon vias—about 70 micrometers across versus a typical TSV’s 10 micrometers. The large diameter makes them especially well suited to deliver power to the top die in a 3D stack, according to Swan. “As you scale that area, you get cleaner, more efficient power delivery,” she says.
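A quick sketch shows why the wider vias help with power: cross-sectional area, and with it current-carrying capacity, grows with the square of the diameter. The diameters below are the approximate figures cited above; modeling the vias as simple cylinders is an assumption made for illustration.

```python
import math

# Compare the copper cross-section of an ODI-style power via with a
# typical TSV, using the approximate diameters from the article.
odi_diameter_um = 70   # ~70 micrometers across
tsv_diameter_um = 10   # ~10 micrometers across

def cross_section_area(diameter_um):
    """Cross-sectional area of a cylindrical via, in square micrometers."""
    return math.pi * (diameter_um / 2) ** 2

ratio = cross_section_area(odi_diameter_um) / cross_section_area(tsv_diameter_um)
print(f"ODI via area is ~{ratio:.0f}x that of a typical TSV")  # ~49x
```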

MDIO, the product of the third effort, should be available in 2020, according to Intel’s Semicon West presentation. It offers 200 gigabytes per second per millimeter of chip edge versus AIB’s 63 GB/s-mm, and it consumes 0.50 picojoules per bit versus AIB’s 0.85. Intel compared MDIO to TSMC’s LIPINCON technology, which is also expected in 2020 and delivers 67 GB/s-mm at about the same energy per bit.
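To put those density and efficiency figures side by side, here is a minimal sketch. The 5-millimeter interface width is a hypothetical value chosen for illustration, not a figure from Intel or TSMC, and LIPINCON’s energy per bit is assumed to roughly equal MDIO’s, as the comparison above suggests.

```python
# Aggregate throughput and link power for the die-to-die interfaces
# cited in the article. Edge width and LIPINCON's energy per bit are
# illustrative assumptions, not vendor-published totals.
interfaces = {
    "MDIO (Intel)":    {"gbyte_s_per_mm": 200, "pj_per_bit": 0.50},
    "AIB (Intel)":     {"gbyte_s_per_mm": 63,  "pj_per_bit": 0.85},
    "LIPINCON (TSMC)": {"gbyte_s_per_mm": 67,  "pj_per_bit": 0.50},
}

edge_mm = 5  # hypothetical length of chip edge devoted to the interface

for name, spec in interfaces.items():
    bandwidth_gbyte_s = spec["gbyte_s_per_mm"] * edge_mm
    bits_per_s = bandwidth_gbyte_s * 8e9              # GB/s -> bits/s
    power_w = spec["pj_per_bit"] * 1e-12 * bits_per_s  # pJ/bit * bits/s -> W
    print(f"{name:16} {bandwidth_gbyte_s:5.0f} GB/s, ~{power_w:.1f} W")
```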

Intel R&D will continue to try to increase the number of bumps—the solder-ball on/off ramps from a chip—available in a given area, says Swan. But ultimately, the goal is to get rid of solder altogether. The intermetallic interface between the solder and the copper interconnects limits current, so Intel and others are exploring a technology called hybrid bonding, which uses a dielectric material and heat to connect one chip’s copper pads directly to another’s without solder.


3 Ways 3D Chip Tech Is Upending Computing

AMD, Graphcore, and Intel show why the industry’s leading edge is going vertical

Images: Intel; Graphcore; AMD

A crop of high-performance processors is showing that the new direction for continuing Moore’s Law is all about up. Each generation of processor needs to perform better than the last, and, at its most basic, that means integrating more logic onto the silicon. But there are two problems: One is that our ability to shrink transistors and the logic and memory blocks they make up is slowing down. The other is that chips have reached their size limits. Photolithography tools can pattern only an area of about 850 square millimeters, roughly the size of a top-of-the-line Nvidia GPU.

For a few years now, developers of systems-on-chips have been breaking up their ever-larger designs into smaller chiplets and linking them together inside the same package to effectively increase the silicon area, among other advantages. In CPUs, these links have mostly been so-called 2.5D, where the chiplets are set beside each other and connected using short, dense interconnects. Momentum for this type of integration will likely only grow now that most of the major manufacturers have agreed on a 2.5D chiplet-to-chiplet communications standard.
