When chipmakers announce that they’ve managed to pack yet more circuits onto a chip, it’s usually the smaller transistor that gets all the attention. But the interconnects that link transistors to form circuits also have to shrink. Now, some of them simply can’t get any smaller without creating some serious consequences for circuit speed and energy consumption.
The problem is perhaps most obvious in SRAM, the most ubiquitous memory on processors today. But researchers at the Belgian nanotech research center imec have come up with a scheme that could keep SRAM performing well and could eventually lead to a way to pack even more transistors onto integrated circuits.
ICs are made by constructing transistors on the silicon and then adding layers of interconnects above them to link the transistors into circuits. In IEEE Electron Device Letters, the imec team described a way to take the interconnects that power SRAM cells out of those layers and instead bury them in the silicon. They then used the freed-up space to make other key interconnects wider and thus less resistive. In simulations, reading from the resulting memory cells was about 31 percent faster than reading from conventional SRAM, and writing to them required 340 millivolts less than it takes for memories whose interconnects aren't buried.
SRAM—made up of six transistors—is particularly sensitive to interconnect resistance because the two interconnects that control reading and writing (called the bit line and the word line) are relatively long, explains Shairfe M. Salahuddin, a senior researcher at imec. Long, narrow lines have more resistance, limiting current and slowing signals.
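The scaling Salahuddin describes follows from the textbook wire-resistance formula, R = ρL/(W·T). The sketch below uses illustrative numbers, not values from the imec paper, and a copper-like bulk resistivity (real nanoscale wires are worse because of surface and grain-boundary scattering):

```python
# Hedged sketch with illustrative numbers: the DC resistance of a
# rectangular wire is R = rho * L / (W * T), so a long, narrow, thin
# line is far more resistive than a short, wide one.

def wire_resistance_ohms(resistivity_ohm_m, length_m, width_m, thickness_m):
    """DC resistance of a rectangular interconnect."""
    return resistivity_ohm_m * length_m / (width_m * thickness_m)

rho = 1.7e-8  # ohm-meters, roughly bulk copper (assumed, for illustration)

# A hypothetical 10-micrometer line, 20 nm wide and 40 nm thick,
# versus the same line at double the width.
long_narrow = wire_resistance_ohms(rho, 10e-6, 20e-9, 40e-9)
doubled_width = wire_resistance_ohms(rho, 10e-6, 40e-9, 40e-9)
# Doubling the width halves the resistance of the same line.
```

Halving a line's cross-section doubles its resistance, which is why every shrink makes the bit line and word line problem worse.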
This is a problem when writing to an SRAM cell, because the resistance causes a voltage difference between the beginning and end of the bit line. To write, the bit line must be set to zero volts. Because of the resistance, getting the last transistor on the bit line to zero means starting with a negative voltage, one that engineers have had to push several hundred millivolts lower over the past three generations of chips.
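Why the driver has to go negative can be seen with a back-of-envelope IR-drop calculation; the resistance and current values below are hypothetical, not from the study:

```python
# Hypothetical numbers for illustration only. With write current I
# flowing through a bit line of total resistance R, the far end of the
# line sits I*R above the voltage driven at the near end. To pull the
# farthest cell's end of the bit line to 0 V, the driver must start
# roughly that IR drop below zero.
line_resistance_ohm = 1000   # assumed total bit-line resistance
write_current_a = 200e-6     # assumed current drawn during the write

ir_drop_v = write_current_a * line_resistance_ohm
required_drive_v = 0.0 - ir_drop_v
print(f"driver must sit near {required_drive_v * 1000:.0f} mV")
# prints: driver must sit near -200 mV
```

As lines narrow and resistance climbs, that IR drop grows, which is why the required negative boost has crept up generation over generation.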
Interconnect resistance can also be a hindrance to reading a memory cell. More resistive lines lead to a longer delay in switching from high to low voltage, and delays in both the bit line and the word line combine during the read operation.
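A simple lumped-RC model captures how those delays add; the resistance and capacitance values here are assumed for illustration, not taken from the article:

```python
# Lumped-RC delay sketch with hypothetical values. The time for a
# line's voltage to cross the halfway point is roughly 0.69 * R * C,
# and the word-line and bit-line delays add up during a read.
def rc_delay_s(resistance_ohm, capacitance_f):
    """Approximate 50%-swing delay of a simple RC line."""
    return 0.69 * resistance_ohm * capacitance_f

word_line_delay = rc_delay_s(400, 2e-15)   # assumed word-line R and C
bit_line_delay = rc_delay_s(300, 3e-15)    # assumed bit-line R and C
read_delay = word_line_delay + bit_line_delay

# Cutting either line's resistance cuts its share of the read delay in
# proportion, which is how widening the lines speeds up reads.
```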
Future generations of chips, such as those made using the coming 3-nanometer node process, will need wider, less-resistive bit lines and word lines. Yet, overall, the processes need to produce more circuits for a given area. Salahuddin and the rest of the imec team hit upon a way to do both. “We found that if we can remove the power lines from the SRAM bit cell, then we have some additional space” in the interconnect layer, he says. “We can use that space to widen the metal tracks for the bit line and the word line.”
The wider bit lines were nearly 75 percent less resistant and the new word lines cut resistance by more than 50 percent, leading to the improved read speed and lower write voltage.
The first step in making buried power lines is to etch through the dielectric (blue) and silicon (red) to form two trenches. The trenches are then lined with an encapsulant (green) and then filled with metal (gold). Part of the metal is removed and capped with dielectric before the FinFET gates (grey) are built. Illustration: imec
Burying the power lines, however, was no easy task. Each SRAM cell contacts both a high-voltage rail and a ground rail, and these had to be buried in between the transistor fins. What’s more, they couldn’t be very resistive themselves, so they had to be fairly large. The solution was, basically, to etch a deep, narrow trench between the transistor fins and then fill it with ruthenium. (Because of certain problems with copper’s stability, the chip industry is moving to cobalt or ruthenium for its narrowest interconnects.) Deep, narrow trenches are difficult to construct, says Salahuddin. Adding to the difficulty was encapsulating the ruthenium to prevent any interaction it might have with the silicon.
Next up for the technology is to see what gains it produces in the logic portions of microprocessors, whose geometries are far less regular than those of SRAM. If it works there, the researchers plan to extend the technology in a way that could lead to even smaller circuits. This technique, called backside power delivery, involves contacting the buried power lines using vertical connections that extend up through the silicon from the back of the chip. This would save even more space in the interconnect layers, perhaps shrinking the area needed for circuits by 15 percent. It would also save power, because the buried rails would have a shorter, lower-resistance path to the chip’s power supply.
Samuel K. Moore is the senior editor at IEEE Spectrum in charge of semiconductors coverage. An IEEE member, he has a bachelor's degree in biomedical engineering from Brown University and a master's degree in journalism from New York University.