Memristor-Driven Analog Compute Engine Would Use Chaos to Compute Efficiently

With Mott memristors, a system could solve intractable problems using little power


Samuel K. Moore is IEEE Spectrum’s semiconductor editor.

A micrograph shows the construction of a Mott memristor composed of an 8-nanometer-thick layer of niobium dioxide between two layers of titanium nitride.
Photo: Suhas Kumar/Hewlett Packard Labs

When you’re really harried, you probably feel like your head is brimful of chaos. You’re pretty close. Neuroscientists say your brain operates in a regime termed the “edge of chaos,” and it’s actually a good thing. It’s a state that allows for fast, efficient analog computation of the kind that can solve problems that grow vastly more difficult as they scale up.

The trouble is, if you’re trying to replicate that kind of chaotic computation with electronics, you need an element that both acts chaotically—how and when you want it to—and can scale up to form a big system.

“No one had been able to show chaotic dynamics in a single scalable electronic device,” says Suhas Kumar, a researcher at Hewlett Packard Labs, in Palo Alto, Calif. Until now, that is.

He, John Paul Strachan, and R. Stanley Williams recently reported in the journal Nature that a particular configuration of a certain type of memristor contains that seed of controlled chaos. What’s more, when they simulated wiring these up into a type of circuit called a Hopfield neural network, the circuit was capable of solving a ridiculously difficult problem—1,000 instances of the traveling salesman problem—at an energy efficiency of 10 trillion operations per second per watt.

(It’s not an apples-to-apples comparison, but the world’s most powerful supercomputer as of June 2017 managed 93,015 trillion floating point operations per second but consumed 15 megawatts doing it. So about 6 billion operations per second per watt.)
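For readers who want the arithmetic spelled out, here is the back-of-envelope calculation behind those figures (a rough sketch only, since a supercomputer’s floating-point operations and the memristor network’s analog operations aren’t directly comparable):

```python
# Back-of-envelope energy-efficiency comparison, using the figures above.
supercomputer_flops = 93_015e12  # 93,015 trillion floating-point ops/s
supercomputer_watts = 15e6       # 15 megawatts

sc_ops_per_watt = supercomputer_flops / supercomputer_watts
print(f"supercomputer: {sc_ops_per_watt:.1e} ops/s per watt")       # ~6.2e9

memristor_ops_per_watt = 10e12   # 10 trillion ops/s per watt, as reported
print(f"ratio: {memristor_ops_per_watt / sc_ops_per_watt:,.0f}x")   # ~1,600x
```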

The device in question is called a Mott memristor. Memristors generally are devices that hold a memory, in the form of resistance, of the current that has flowed through them. The most familiar type is called resistive RAM (or ReRAM or RRAM, depending on whom you ask). Mott memristors have an added ability: their resistance also changes with temperature.
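To make the “memory in the form of resistance” idea concrete, here is a minimal sketch using the classic linear-drift memristor model of Strukov et al.; note that this is an idealized textbook model with assumed parameter values, not the team’s NbO2 device:

```python
# Minimal linear-drift memristor sketch (after Strukov et al., 2008),
# purely illustrative: resistance depends on the charge that has flowed
# through the device, so it "remembers" its drive history.
R_ON, R_OFF = 100.0, 16_000.0  # resistances of the two extreme states (ohms)
D = 10e-9                      # active-layer thickness (m), assumed
MU = 1e-14                     # ion mobility (m^2/(V*s)), assumed

def resistance(w):
    """Resistance is a mix of the two extremes, weighted by the state w."""
    return R_ON * (w / D) + R_OFF * (1.0 - w / D)

def drive(w, volts, seconds, dt=1e-5):
    """Apply a constant voltage and return the updated state variable w."""
    for _ in range(int(seconds / dt)):
        i = volts / resistance(w)
        w += MU * (R_ON / D) * i * dt  # the state drifts with the current
        w = min(max(w, 0.0), D)        # clamp to the physical device limits
    return w

w = 0.1 * D
print(f"before pulse: {resistance(w):.0f} ohms")  # ~14,410 ohms
w = drive(w, volts=1.0, seconds=0.5)
print(f"after pulse:  {resistance(w):.0f} ohms")  # ~7,000 ohms: it remembers
```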

The HP Labs team made their memristor from an 8-nanometer-thick layer of niobium dioxide (NbO2) sandwiched between two layers of titanium nitride. The bottom titanium nitride layer was in the form of a 70-nanometer-wide pillar. “We showed that this type of memristor can generate chaotic and nonchaotic signals,” says Williams, who invented the memristor based on theory by Leon Chua.

What’s basically happening is that by controlling voltage and current, the device can be put into a state where tiny, random thermal fluctuations in the few nanometers of NbO2 are amplified enough to alter the way the memristor reacts. Williams and his colleagues note that these fluctuations are only big enough to matter in memristors of this scale; they never saw the effect in larger devices.

Once they’d characterized what the memristor was doing and how it was doing it, they simulated it in a circuit to see what it could do. In the simulation, they integrated an array of Mott memristors with another, more common type made of titanium oxide to form a Hopfield network. These networks are particularly good at solving optimization problems: problems in which you’re trying to find the best solution from a large number of possibilities.

(The traveling salesman problem is one of these. In it, the salesman must find the shortest route that lets him visit all of his customers’ cities without passing through any of them twice. It’s difficult because the number of possible routes explodes with each city you add, as the quick count below shows.)
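To make that growth concrete, you can count the distinct round-trip routes by brute force; fixing the starting city and treating a route and its reverse as the same leaves (n − 1)!/2 possibilities:

```python
from math import factorial

# Distinct round-trip routes through n cities: (n - 1)! / 2
for n in (5, 10, 15, 20):
    print(f"{n} cities: {factorial(n - 1) // 2:,} routes")
# 5 cities: 12 routes ... 20 cities: 60,822,550,204,416,000 routes
```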

You can imagine the solutions to these problems as valleys in a landscape. The best solution is the lowest point in the landscape, and a computer’s efforts to find it are like a ball rolling down the hills. The problem is that the ball can get stuck in a valley that is low (a solution) but not the lowest one (the optimal solution). The advantage of the Mott memristor network is that its chaotic behavior is enough to bump the ball out of those less-than-optimal valleys so it can find the best solution.
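Here is a minimal software sketch of that idea, assuming nothing about the team’s actual circuit: a toy Hopfield-style network rolls downhill on an energy landscape by flipping one unit at a time, while occasional random flips, standing in for the memristors’ chaotic signal, can kick it out of shallow local minima:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hopfield-style optimizer: minimize E(s) = -0.5 * s @ W @ s over
# states s in {-1, +1}^n. Greedy updates roll "downhill"; random kicks
# (a stand-in for the chaotic signal) help escape local minima.
n = 12
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0          # Hopfield weights must be symmetric...
np.fill_diagonal(W, 0.0)     # ...with no self-connections

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)
best = energy(s)
for _ in range(5_000):
    i = rng.integers(n)
    s[i] = 1.0 if W[i] @ s > 0 else -1.0  # greedy descent on one unit
    if rng.random() < 0.05:               # occasional "chaotic" kick:
        s[rng.integers(n)] *= -1.0        # flip a random unit, even uphill
    best = min(best, energy(s))
print(f"best energy found: {best:.3f}")
```

In the team’s simulation, that kick comes from the Mott memristors’ tunable chaotic transients rather than from a software random-number generator.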

“In our case, we’re using chaotic noise to hop out of these barriers,” says HP Labs research scientist Strachan.

Williams envisions these “analog compute engines” one day embedded in systems-on-a-chip to accelerate optimization problems. But there are plenty of steps before that. Among the first is to build the system and investigate how well it scales. They’ll also need to properly benchmark its performance against the best algorithms and hardware.

For Williams, there’s a bigger lesson in the development of these memristors. “Everyone’s trying to reinvent the transistor using a new material,” he notes. “Even if you made a perfect transistor—whatever that is—you’d still not beat scaled CMOS.” Instead, scientists and engineers should be looking for new types of computing from these new materials. “It’s important to ask what the material system is doing that’s different than what a transistor does… Rather than make a bad transistor, see if it makes something that would take 100 or 1,000 transistors to replicate.” Williams and his team are hoping their memristor system does just that.
