Promise of Analog AI Feeds Neural Net Hardware Pipeline

Exotic technologies could lead to ultra-low-power AI applications


Charles Q. Choi is a science reporter who contributes regularly to IEEE Spectrum.


Some of the best circuits to drive AI in the future may be analog, not digital, and research teams around the world are increasingly developing new devices to support such analog AI.

The most basic computation in the deep neural networks driving the current explosion in AI is the multiply-accumulate (MAC) operation. Deep neural networks are composed of layers of artificial neurons; in a MAC operation, each neuron multiplies the outputs of the neurons in the previous layer by the "weights," or strengths, of its connections to them, and then sums up these contributions.
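The MAC operation can be sketched in a few lines of code. This is a minimal illustration of one dense layer, with the multiply and accumulate steps written out explicitly (the function name and values are illustrative, not from any particular framework):

```python
import numpy as np

def dense_layer(inputs, weights):
    """One neural-network layer: each output neuron performs a
    multiply-accumulate (MAC) over all of its inputs.
    inputs:  (n_in,) activations from the previous layer
    weights: (n_out, n_in) connection strengths
    """
    outputs = np.zeros(weights.shape[0])
    for j in range(weights.shape[0]):           # each output neuron
        acc = 0.0
        for i in range(inputs.shape[0]):        # one MAC per connection
            acc += weights[j, i] * inputs[i]    # multiply, then accumulate
        outputs[j] = acc
    return outputs

x = np.array([1.0, 2.0, 3.0])
W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])
print(dense_layer(x, W))   # same result as the matrix product W @ x
```

In practice these loops are fused into matrix multiplications, but every entry of the result is still a chain of MACs, which is why hardware that accelerates MACs accelerates deep learning.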

Modern computers have digital components devoted to MAC operations, but analog circuits theoretically can perform these computations for orders of magnitude less energy. This strategy—known as analog AI, compute-in-memory or processing-in-memory—often performs these multiply-accumulate operations using non-volatile memory devices such as flash, magnetoresistive RAM (MRAM), resistive RAM (RRAM), phase-change memory (PCM) and even more esoteric technologies.
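The reason memory devices can do this is basic circuit physics: if each weight is stored as a cell's conductance G and each input is applied as a voltage V, Ohm's law (I = G·V) performs the multiply at every cell, and Kirchhoff's current law sums the currents on each shared column wire, performing the accumulate. A highly idealized sketch of such a crossbar, ignoring device noise, nonlinearity, and wire resistance (the function name and values are illustrative):

```python
import numpy as np

def crossbar_mac(voltages, conductances):
    """Idealized analog MAC in a memory crossbar.
    voltages:     (n_rows,) inputs applied as row voltages
    conductances: (n_rows, n_cols) weights stored as cell conductances
    Each cell passes current I = G * V (Ohm's law, the multiply);
    currents on each column wire add up (Kirchhoff's current law,
    the accumulate).
    """
    cell_currents = conductances * voltages[:, None]  # per-cell multiply
    column_currents = cell_currents.sum(axis=0)       # per-column accumulate
    return column_currents

V = np.array([0.1, 0.2, 0.3])             # volts
G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6],
              [5e-6, 6e-6]])              # siemens
print(crossbar_mac(V, G))                 # equals the matrix product V @ G
```

Because the multiply and accumulate happen in the physics of the array itself, no data shuttles between separate memory and processor, which is where the projected energy savings come from.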

One team in Korea, however, is exploring neural networks based on praseodymium calcium manganese oxide electrochemical RAM (ECRAM) devices, which act like miniature batteries, storing data in the form of changes in their conductance. Study lead author Chuljun Lee at the Pohang University of Science and Technology in Korea notes that neural network hardware often has different demands during training versus during applications. For instance, low energy barriers help neural networks learn quickly, but high energy barriers help them retain what they learned for use during applications.
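The trade-off Lee describes can be illustrated with a generic Arrhenius model of thermally activated switching: the time a stored state survives grows exponentially with the ratio of the energy barrier to the thermal energy kT, so heating a device makes its states easier to change (good for training) while cooling it makes them stick (good for retention). This is a textbook-physics sketch, not the paper's model; the barrier height and attempt time below are illustrative values:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def retention_time(barrier_ev, temp_k, attempt_s=1e-9):
    """Arrhenius estimate of state lifetime: tau = tau0 * exp(E_b / kT).
    Higher barriers or lower temperatures mean longer retention;
    lower effective barriers (e.g., a hotter device) make states
    easier to update during training."""
    return attempt_s * np.exp(barrier_ev / (K_B * temp_k))

barrier = 1.0  # eV, illustrative
hot = retention_time(barrier, 400.0)   # ~100 degrees C hotter: short-lived,
                                       # easily updated states
cold = retention_time(barrier, 300.0)  # room temperature: far longer retention
print(hot, cold)
```

The exponential dependence is the point: a modest temperature swing changes the retention time by many orders of magnitude, which is why one "knob" can favor training at one temperature and inference at another.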

"Heating up their devices almost 100 degrees C warmer during training brought out the characteristics that are good for training," says electrical engineer John Paul Strachan, head of the Peter Grünberg Institute for Neuromorphic Compute Nodes at the Jülich Research Center in Germany, who did not participate in this study. "When it cooled down, they got the advantages of longer retention and lower current operation. By just adjusting one knob, heat, they could see improvements on multiple dimensions of computing." The researchers detailed their findings at the annual IEEE International Electron Devices Meeting (IEDM) in San Francisco on Dec. 14.

One key question this work faces is what kind of deterioration this ECRAM may face after multiple cycles of heating and cooling, Strachan notes. Still, "it was a very creative idea, and their work is a proof of concept that there could be some potential with this approach."

Another group investigated ferroelectric field-effect transistors (FEFETs). Study lead author Khandker Akif Aabrar at the University of Notre Dame explains that FEFETs store data in the form of electric polarization within each transistor.

A challenge FEFETs face is whether they can still display the analog behavior valuable to AI applications as they scale down, or whether they will abruptly switch to a binary mode in which they store only one bit of information, with the polarization in either one state or the other.

"The strength of this team's work is in their insight into the materials involved," says Strachan, who did not take part in this research. "A ferroelectric material can be thought of as a block made of many little domains, just as a ferromagnet can be thought of as made up of up and down domains. For the analog behavior they desire, they want all these domains to slowly align either up or down in response to an applied electric field, and not get a runaway process where they all go up or down at once. So they physically broke up their ferroelectric superlattice structure with multiple dielectric layers to reduce this runaway process."
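The analog-versus-binary distinction Strachan describes can be seen in a toy model: treat the ferroelectric as a collection of independent domains, each flipping once the applied field exceeds its own random coercive threshold. Many small domains give a gradual, analog-looking polarization curve; a single domain switches all at once. This is a simplified illustration, not the team's actual device physics:

```python
import numpy as np

rng = np.random.default_rng(0)

def polarization_curve(n_domains, fields):
    """Toy ferroelectric: n_domains independent domains, each aligning
    'up' once the applied field exceeds its own random coercive
    threshold. Returns the fraction of aligned domains at each field."""
    thresholds = rng.uniform(0.0, 1.0, size=n_domains)
    return np.array([(thresholds <= f).mean() for f in fields])

fields = np.linspace(0.0, 1.0, 11)
analog = polarization_curve(1000, fields)  # many domains: smooth ramp
binary = polarization_curve(1, fields)     # one domain: abrupt 0-to-1 switch
```

With 1,000 domains the curve climbs smoothly with the field, usable as an analog weight; with one domain the device is a one-bit switch, which is the failure mode the dielectric interlayers are meant to prevent.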

The system achieved a 94.1 percent online-learning accuracy, comparing well against other FEFET and RRAM technologies, findings the scientists also detailed on Dec. 14 at the IEDM conference. Strachan notes that future research can seek to optimize properties such as current levels.

A third team, with scientists in Japan and Taiwan, built a novel microchip using c-axis-aligned crystalline indium gallium zinc oxide. Study co-author Satoru Ohshita at Semiconductor Energy Laboratory Co. in Japan notes that their oxide-semiconductor field-effect transistors (OSFETs) displayed ultralow-current operation below 1 nanoampere per cell and an operation efficiency of 143.9 trillion operations per second per watt, the best reported to date for analog AI chips, findings also detailed on Dec. 14 at the IEDM conference.

"These are extremely low-current devices," Strachan says. "Since the currents needed are so low, you can make circuit blocks larger—they get arrays of 512 by 512 memory cells, whereas the typical numbers for RRAM are more like 100 by 100. That's a big win, since larger blocks get a quadratic advantage in the weights they store."

When the OSFETs are combined with capacitors, they can retain information with more than 90 percent accuracy for 30 hours. "That could be a long enough time to move that information to some less volatile technology—tens of hours of retention is not a dealbreaker," Strachan says.

All in all, "these new technologies that researchers are exploring are all proof-of-concept cases that raise new questions about challenges they may face in their future," Strachan says. "They also show a path to the foundry, which they need for high-volume, low-cost commercial products."
