New Nonvolatile Memories Shrink Circuits That Search Fast

New kinds of content addressable memory could speed search, enable in-memory computing

Samuel K. Moore is IEEE Spectrum’s semiconductor editor.

Illustration of binary numbers and a magnifying glass. Illustration: iStockphoto

The kind of memory most people are familiar with returns data when given an address for that data. Content addressable memory (CAM) does the reverse: When given a set of data, it returns the address—typically in a single clock cycle—of where to find it. That ability, so useful in network routers and other systems that require a lot of lookups, is now getting a chance in new kinds of data-intensive tasks such as pattern matching and accelerating neural networks, as well as for doing logic operations in the memory itself.
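The contrast between the two lookup directions can be sketched in a few lines of Python. This is purely an illustrative emulation (the names `RAM`, `ram_read`, and `cam_search` are my own); real CAM hardware compares every stored word in parallel and answers in one clock cycle, which software can only imitate serially.

```python
# Hypothetical sketch of the CAM idea. Ordinary RAM maps address -> data;
# a CAM searches all entries at once and returns the address(es) that
# hold a given data word.

RAM = {0: 0b1010, 1: 0b0111, 2: 0b1100, 3: 0b0111}

def ram_read(address):
    """Ordinary memory: address in, data out."""
    return RAM[address]

def cam_search(data):
    """CAM: data in, matching address(es) out.

    Hardware does this comparison across all rows in parallel;
    this loop is only a serial stand-in for that behavior.
    """
    return [addr for addr, stored in RAM.items() if stored == data]

print(ram_read(2))         # -> 12 (0b1100)
print(cam_search(0b0111))  # -> [1, 3]
```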

Each cell of today’s content addressable memory is made from a combination of two static RAM (SRAM) cells, so it requires at least 12 transistors and is volatile, meaning it cannot retain data when the power is off. (Commercial chips are called ternary content addressable memories, or TCAMs. Ternary means each bit of a search can take three values: 0, 1, or “don’t care,” which matches either.)
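The “don’t care” state is what makes TCAMs so useful for things like routing-table prefix lookups. A common way to model it in software, and the convention assumed in this sketch, is to store each entry as a value plus a mask, where a mask bit of 1 means “this bit must match” and 0 means “either value is fine”:

```python
# Sketch of ternary matching with (value, mask) entries -- my own
# illustration of the concept, not any chip's actual interface.

def tcam_match(key, value, mask):
    """True if key agrees with value on every bit the mask cares about."""
    return (key & mask) == (value & mask)

# An entry that cares only about the top two bits of a 4-bit word: 10**
entry_value, entry_mask = 0b1000, 0b1100

print(tcam_match(0b1011, entry_value, entry_mask))  # True: top bits are 10
print(tcam_match(0b0011, entry_value, entry_mask))  # False: top bits are 00
```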

Seeking something smaller and nonvolatile, engineers at the University of California, Berkeley, have come up with a version of content-addressable memory that is potentially much denser than even other experimental designs. It relies on ferroelectric field-effect transistors (FeFETs), which store data as an electric polarization within the transistor. The engineers, led by ferroelectric device pioneer Sayeef Salahuddin, described the work in IEEE Electron Device Letters as well as at the IEEE International Electron Devices Meeting (IEDM). Also at IEDM, a separate group of researchers from CEA Leti, in Grenoble, France, presented a resistive RAM-based TCAM that allows simultaneous searching of more data than previous RRAM circuits did.

Systems with content addressable memory are not common, because of the expense and area of SRAM memory. “So only very high-end enterprise servers [use them] to make search functions faster,” says Salahuddin, an IEEE Fellow. Emerging non-volatile memories such as FeFETs and RRAM can do the job in less area, because they require fewer transistors to form a memory cell.

CEA Leti’s TCAM cell sought to overcome a key limitation of previous attempts to build such circuits using RRAM. RRAM is a two-terminal nonvolatile memory that stores data as resistance. Previous RRAM-based TCAM schemes, made using two transistors and two RRAMs per cell, could not search for more than about 100 bits at once, because there is not a big enough difference between the RRAM’s two resistance states. That severely limits its usefulness.

Using the same number of transistors and RRAM cells, the Leti team, led by senior researcher Elisa Vianello, designed a TCAM cell that is largely insensitive to that resistance ratio. Though the team built 128-bit TCAMs, the technology can be scaled up to search for more than 2,000 bits at once, making it suitable for the long “data words” in applications like IPv6 packet routing and DNA sequence matching.  

Circuit representation of a single CAM cell, which consists of two FeFETs with their sources shorted together. Image: University of California, Berkeley/IEEE

The FeFET CAM cell that Salahuddin’s group developed is even more compact than CEA Leti’s. It’s just two transistors with their drains connected. It’s that connection that indicates whether you’ve got a match for the bit you’re looking for. In operation, you’d write a bit (1, say) to one FeFET and its complement (0) to the other. You do this by disconnecting the sources and drains and applying voltage to the gates to flip the polarization of a ferroelectric material between the gate and the body of the transistor. That flip changes the amount of current that flows through the FeFET for a given gate voltage, and it’s this change in current that marks the presence of the bit in question in that CAM cell.
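The complementary write and the current-based match test can be modeled behaviorally. The sketch below is my own simplification, not the Berkeley group’s device model: it assumes the stored bit leaves one FeFET in a low-threshold (easily conducting) state and its partner in a high-threshold state, and that a search drives the two gates with the query bit and its complement, so only a mismatch draws current.

```python
# Behavioral sketch (an assumption-laden simplification) of the
# two-FeFET CAM cell: match means no mismatch current flows.

def fefet_current(low_threshold, gate_driven_high):
    """One unit of current only if a conducting FeFET sees a high gate."""
    return 1 if (low_threshold and gate_driven_high) else 0

def cell_matches(stored_bit, query_bit):
    # Write phase: FeFET A holds the bit, FeFET B its complement.
    a_low_vt = (stored_bit == 1)
    b_low_vt = (stored_bit == 0)
    # Search phase: complementary search lines drive the two gates.
    current = (fefet_current(a_low_vt, query_bit == 0) +
               fefet_current(b_low_vt, query_bit == 1))
    return current == 0  # zero mismatch current -> the bit matches

print(cell_matches(1, 1))  # True: stored 1, searched for 1
print(cell_matches(1, 0))  # False: mismatch draws current
```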

There’s plenty to do to prove how well the FeFET circuit will work. “For this particular project we’ve shown just a single cell,” says Ava Tan, a graduate student in Salahuddin’s lab. “In the future we’ll demonstrate one [data] word and move on to building an array.”

The FeFET CAM cell could have other uses, Tan says. Perhaps the biggest fundamental roadblock to furthering traditional computers is the expense—in terms of time and energy—of moving data between processing and memory. One solution researchers are exploring is to find ways to do some processing in the memory itself. The FeFET CAM tech is one possibility, she says. Individually, each cell produces a truth table that’s the same as an exclusive not-OR (XNOR) logic gate. The cells “can both store a bit and perform the XNOR all in one,” she says.
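That XNOR equivalence is easy to verify: the cell’s match output is 1 exactly when the stored bit equals the search bit, which is the XNOR truth table. A minimal check:

```python
# The match output of one CAM cell equals XNOR of the stored bit and
# the search bit -- the property that lets the same cell double as an
# in-memory logic gate.

def xnor(a, b):
    return 1 if a == b else 0

for stored in (0, 1):
    for query in (0, 1):
        print(stored, query, "->", xnor(stored, query))
# Output is 1 (match) exactly when stored == query.
```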
