Searching for the Perfect Artificial Synapse for AI

Researchers tried out several new devices to get closer to the ideal needed for deep learning and neuromorphic computing

Samuel K. Moore is IEEE Spectrum’s semiconductor editor.

What’s the best type of device from which to build a neural network? Of course, it should be fast and small, consume little power, and reliably store many bits’ worth of information. And if it’s going to be involved in learning new tricks as well as performing those tricks, it has to behave predictably during the learning process.

Neural networks can be thought of as a group of cells, each connected to other cells. These connections—synapses, in biological neurons—all have particular strengths, or weights, associated with them. Rather than use the logic and memory of ordinary CPUs to represent these weights, companies and academic researchers have been working on ways of representing them in arrays of different kinds of nonvolatile memory. That way, key computations can be made without having to move any data. AI systems based on resistive RAM, flash memory, MRAM, and phase change memory are all in the works, but they all have their limitations. Last week, at the IEEE International Electron Devices Meeting (IEDM) in San Francisco, researchers put forward some candidates that might do better.
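To make the in-memory idea concrete, here is a minimal sketch (in Python, with invented conductance and voltage values, not any vendor’s actual design) of how a crossbar of memory cells can perform a layer’s multiply-accumulate in place: the weights live in the array as conductances, input voltages are applied to the rows, and the column currents are the dot products.

```python
import numpy as np

# Hypothetical crossbar: the weight matrix is stored as conductances G.
# Applying input voltages V to the rows and summing column currents gives
# I = G^T · V, one analog dot product per output neuron, without moving
# the weights to and from a separate memory.

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(784, 100))  # conductances in siemens (assumed range)
V = rng.uniform(0.0, 0.2, size=784)           # read voltages encoding one input vector

I = G.T @ V        # column currents = the layer's multiply-accumulate
print(I.shape)     # (100,)
```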

IBM’s latest entrant as the basis of the perfect synapse is called electrochemical RAM. Like phase change memory or RRAM, it stores information as a change in its conductance. But unlike those two, which are usually built to achieve two or a few states, ECRAM is built to achieve dozens or even hundreds.

Writing to an ECRAM cell drives lithium ions into or out of a tungsten trioxide channel. Reading involves measuring the conductance of the channel. Illustration: IBM

The ECRAM cell looks a bit like a CMOS transistor. A gate sits atop a dielectric layer, which covers a semiconducting channel and two electrodes, the source and drain. However, in the ECRAM, the dielectric is lithium phosphorus oxynitride, a solid-state electrolyte used in experimental thin-film lithium-ion batteries. And the part that would be the silicon channel in a CMOS transistor is made from tungsten trioxide, which is used in smart windows, among other things.

To set the level of resistance—the synapse’s “weight,” in neural-network terms—you pulse a current across the gate and source electrodes. A pulse of one polarity drives lithium ions into the tungsten trioxide channel, making it more conductive. Reverse the polarity, and the ions flee back into the electrolyte, reducing the conductance.

Reading the synapse’s weight just requires setting a voltage across the source and drain electrodes and sensing the resulting current. The separation of the read current path from the write current path is one of the advantages of ECRAM, says Jianshi Tang at IBM T.J. Watson Research Center. Phase change and resistive memories have to both set and sense conductance by running current through the same path. So reading the cell can potentially cause its conductance to drift.
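As a rough illustration of the separate write and read paths described above, here is a toy model of an ECRAM-like three-terminal synapse. All parameters (conductance range, step size, read voltage) are invented for the sketch and are not IBM’s measured values.

```python
# Toy model of an ECRAM-like synapse, with made-up numbers.
# A gate-current pulse of one polarity nudges the channel conductance up;
# the opposite polarity nudges it down. Reading applies a small drain
# voltage and senses the channel current, so it leaves the stored state alone.

class ToySynapse:
    def __init__(self, g_min=1e-6, g_max=1e-4, states=100):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / states   # conductance change per write pulse
        self.g = g_min

    def write_pulse(self, polarity):
        """polarity = +1 drives ions into the channel, -1 pulls them out."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))

    def read(self, v_drain=0.1):
        """Ohm's law on the source-drain path; the write path is untouched."""
        return self.g * v_drain

syn = ToySynapse()
for _ in range(10):
    syn.write_pulse(+1)
print(syn.read())
```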

(A separate group at IBM presented its solution to this drift problem at IEDM as well. That team’s “projection” phase change memory cell contains a structure that shunts the read current through without letting it rewrite the cell.)

IBM used measurements from its test cell to simulate how accurate a neural network built from many such cells would be. Using the MNIST database of handwritten digits as a test, the simulated network reached 96 percent accuracy, just a bit shy of the ideal. The team first tried to improve the accuracy by doubling the number of conductance states the cell can achieve to 110, but it didn’t work. “We were surprised that it didn’t further improve the accuracy,” Tang says.

Neural networks learn through a feedback process that tunes the network weights. It works best when the devices have symmetrical electrical characteristics. Image: Purdue University

The IBM team discovered that a slight asymmetry between how the conductance ramps up and how it ramps back down was holding things back. With perfect symmetry, a current pulse would change the conductance by a certain amount, and the same pulse of opposite polarity would return the conductance precisely to its starting point. The ECRAM has good symmetry compared with other nonvolatile memories, but it’s not perfect.

Cutting that asymmetry in half would be enough to give the neural network its best possible accuracy, and the team’s research suggests that is doable by adjusting the device’s dynamic range.
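A small numerical sketch of why that asymmetry matters: if the conductance step for an “up” pulse differs from the step for a “down” pulse, a pulse pair that should cancel leaves a residual error, and those errors accumulate over many weight updates during training. The step sizes below are invented for illustration.

```python
# Illustration of the symmetry problem, with arbitrary numbers.
g = 0.5
up_step, down_step = 0.010, 0.008   # asymmetric potentiation/depression steps

start = g
g += up_step     # one potentiation pulse
g -= down_step   # one depression pulse
print(f"residual error after one up/down pair: {g - start:+.4f}")

# With perfectly symmetric steps the residual would be zero:
up_step = down_step = 0.010
g = start + up_step - down_step
print(f"residual with symmetric steps: {g - start:+.4f}")
```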

The IBM team also showed that an ECRAM can be shrunk to the point where the conductive channel is just 100 nanometers across, compared with the 60-micrometer version they first constructed. An ECRAM that size would need just 1 femtojoule of energy to change states, which is close to the energy expended at a human neuron’s synapse. “Of course, nothing’s perfect,” says Tang. “There are still several challenges to implement neuromorphic arrays with our ECRAM.”

Germanium ferroelectric nanowire transistors might have the right characteristics to speed AI. Image: Peide Ye/Purdue University

The ECRAM wasn’t the only entrant at IEDM this year. A Purdue University group led by IEEE Fellow Peide Ye put forward a device made from germanium nanowires and ferroelectric materials. Ferroelectrics show a strong electrical polarization in response to a small voltage. By placing a ferroelectric in a transistor’s gate, researchers hope to lower the voltage at which the transistor switches on and so drive down power consumption. But you can also store information in the ferroelectric: flipping the polarity of portions of it alters the current that passes through the transistor at a given voltage. That’s what Ye’s group did, producing a device capable of more than 256 conductance states. What’s more, it could move up and down those conductance states with reasonable symmetry. A simulated network tackling the MNIST handwriting challenge reached 88 percent accuracy.
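To see what a fixed number of conductance states implies for storing weights, here is a hedged sketch of snapping a trained, continuous weight matrix onto 256 discrete levels, as a device like this would require. The weight values and array shapes are made up and are not Purdue’s data.

```python
import numpy as np

# Map continuous trained weights onto a device with a fixed number of
# conductance levels. Coarser quantization means the stored weights deviate
# more from the values training asked for.

def quantize(w, levels=256):
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (levels - 1)
    return np.round((w - w_min) / step) * step + w_min

weights = np.random.default_rng(1).normal(0.0, 0.3, size=(784, 100))
stored = quantize(weights, levels=256)
print("max quantization error:", np.abs(stored - weights).max())
```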

Symmetry and hundreds of conductance states aren’t needed if your neural network doesn’t have to do any learning, however. Many things you might want AI for in your daily life, such as having your coffee maker start up after hearing a “wake word,” would learn their jobs offline, in the cloud. The set of weights and neural connections needed to do the job would then be loaded onto a special-purpose, low-power chip embedded in the coffee maker. Many startups are looking to carve out a spot for themselves providing these “inferencing” chips or the technology behind them, and some rely on memory cells to both store weight values and perform key deep-learning calculations. Syntiant, Mythic, and Anaflash, for example, all use embedded flash memory for their chips’ processing.

The FeMFET’s ferroelectric layer is built above the transistor, where a chip’s interconnects are usually built. Illustration: Notre Dame

A group of researchers from the University of Notre Dame, in Indiana, and Samsung Advanced Logic Labs, in Austin, Texas, invented a new kind of memory cell for embedded AI chips: the ferroelectric metal FET (FeMFET). Notre Dame’s Kai Ni wanted to improve the track record of the ferroelectric FET (FeFET) in such AI applications; it had been plagued by the high voltages needed to write in the weights, which led to reliability problems. The answer was to move the ferroelectric layer out of the transistor, essentially making it a separate capacitor situated above the transistor.

Writing weights into the FeMFET, which can hold two bits, takes less than half the voltage of previous ferroelectric AI schemes. But, for the moment, it takes too long. “The only penalty we have now is write speed, which we do not think is intrinsic to the cell” and is something that can be improved, says Ni.

There may be no perfect synapse for neuromorphic chips and deep learning devices. But it seems clear from the variety of new, experimental ones revealed at IEDM last week that there will be better ones than we have today.

This story was corrected on 11 December to clarify terms and correct the channel length of an IBM ECRAM.
