This AI Can Beat You At Rock-Paper-Scissors

A reservoir computing chip offers fast and low-power predictions

Dina Genkina is the computing and hardware editor at IEEE Spectrum

TDK Corporation's analog reservoir computing chip can predict the next step in a time series quickly and efficiently. [Image: TDK]

Rock-paper-scissors is usually a game of psychology, reverse psychology, reverse-reverse psychology, and chance. But what if a computer could understand you well enough to win every time? A team at Hokkaido University and the TDK Corporation (of cassette-tape fame), both based in Japan, has designed a chip that can do just that.

Okay, the chip does not read your mind. It uses an acceleration sensor placed on your thumb to measure your motion, and learns which motions represent paper, scissors, or rock. The amazing thing is, once it’s trained on your particular gestures, the chip can run the calculation predicting what you’ll do in the time it takes you to say “shoot,” allowing it to defeat you in real time.

The technique behind this feat is called reservoir computing, which is a machine-learning method that uses a complex dynamical system to extract meaningful features from time-series data. The idea of reservoir computing goes as far back as the 1990s. With the growth of artificial intelligence, there has been renewed interest in reservoir computing due to its comparatively low power requirements and its potential for fast training and inference.

Power consumption was the research team’s first target, says Tomoyuki Sasaki, section head and senior manager at TDK, who worked on the device. “The second target is the latency issue. In the case of the edge AI, latency is a huge problem.”

To minimize the energy and latency of their setup, the team developed a CMOS hardware implementation of an analog reservoir computing circuit. The team presented its demo at the Combined Exhibition of Advanced Technologies conference in Chiba, Japan, in October, and is presenting its paper at the International Conference on Rebooting Computing in San Diego, California, this week.

What is reservoir computing?

A reservoir computer is best understood in contrast to traditional neural networks, the basic architecture underlying much of AI today.

A neural network consists of artificial neurons, arranged in layers. Each layer can be thought of as a column of neurons, with each neuron in a column connecting to all the neurons in the next column via weighted artificial synapses. Data enters the first column and propagates from left to right, layer by layer, until it reaches the final column.

During training, the output of the final layer is compared to the correct answer, and this information is used to adjust the weights in all the synapses, this time working backwards layer by layer in a process called backpropagation.
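To make that concrete, here is a minimal sketch in Python of the forward-then-backward flow; the network size, the tanh activation, and the learning rate are illustrative choices, not anything taken from the paper.

```python
import numpy as np

# Minimal two-layer network trained with backpropagation.
# Sizes and the tanh/identity activations are illustrative assumptions.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 3))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(3, 1))   # hidden -> output weights
lr = 0.1                                  # learning rate

def forward(x):
    h = np.tanh(x @ W1)   # data flows one way: input to hidden...
    y = h @ W2            # ...then hidden to output, with no loops
    return h, y

# One training step on a single (input, target) pair.
x = np.array([[0.5, -1.0, 0.25, 0.0]])
target = np.array([[1.0]])

h, y = forward(x)
err = y - target                             # compare output to the correct answer
grad_W2 = h.T @ err                          # gradients flow backward, layer by layer
grad_W1 = x.T @ ((err @ W2.T) * (1 - h**2))
W2 -= lr * grad_W2                           # every weight in the network gets updated
W1 -= lr * grad_W1
```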

This setup has two important features. First, the data only travels one way—forward. There are no loops. Second, all of the weights connecting any pair of neurons are adjusted during the training process. This architecture has proven extremely effective and flexible, but it is also costly; adjusting what sometimes ends up being billions of weights takes both time and power.

Reservoir computing is also built with artificial neurons and synapses, but they are arranged in a fundamentally different way. First, there are no layers—the neurons are connected to other neurons in a complicated, web-like way with plenty of loops. This imbues the network with a type of memory, where a particular input can keep coming back around.

Second, the connections within the reservoir are fixed. The data enters the reservoir, propagates through its complex structure, and then is connected by a set of final synapses to the output. It’s only this last set of synapses, with their weights, that actually gets adjusted during training. This approach greatly simplifies the training process, and eliminates the need for backpropagation altogether.
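A minimal echo-state-network-style sketch in Python shows this split: the reservoir weights are generated once and never touched again, and only the readout weights are fitted, here with ridge regression. The sizes, scaling constants, and solver below are illustrative assumptions, not the chip's method.

```python
import numpy as np

# Echo-state-network sketch: a fixed random recurrent reservoir,
# with only the final readout weights trained (no backpropagation).
rng = np.random.default_rng(1)
N = 100                                     # reservoir neurons (assumed size)
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))  # fixed input weights
W = rng.normal(size=(N, N))                 # fixed recurrent weights, full of loops
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale dynamics toward the edge of chaos

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)  # the loops give the network memory
        states.append(x.copy())
    return np.array(states)

# Train only the readout, by ridge regression, to map states to targets.
u = rng.uniform(size=300)          # placeholder input series
X = run_reservoir(u)
X_train, y_train = X[:-1], u[1:]   # target: the next input value
ridge = 1e-6
W_out = np.linalg.solve(X_train.T @ X_train + ridge * np.eye(N), X_train.T @ y_train)
prediction = X_train @ W_out       # one-step-ahead predictions
```

Because fitting the readout reduces to a single linear solve, training is far cheaper than backpropagating through the whole network.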

Given that the reservoir is fixed, and the only part that’s trained is a final “translation” layer from the reservoir to the desired output, it may seem like a miracle that these networks can be useful at all. And yet, for certain tasks, they have proved to be extremely effective.

“They’re by no means a blanket best model to use in the machine learning toolbox,” says Sanjukta Krishnagopal, assistant professor of computer science at the University of California, Santa Barbara, who was not involved in the work. But for predicting the time evolution of things that behave chaotically, such as the weather, they are the right tool for the job. “This is where reservoir computing shines.”

The reason is that the reservoir itself is a bit chaotic. “Your reservoir is usually operating at what’s called the edge of chaos, which means it can represent a large number of possible states, very simply, with a very small neural network,” Krishnagopal says.

A physical reservoir computer

The artificial synapses inside the reservoir are fixed, and backpropagation does not need to happen. This leaves a lot of freedom in how the reservoir is implemented. To build physical reservoirs, people have used a wide variety of mediums, including light, MEMS devices, and my personal favorite, literal buckets of water.

However, the team at Hokkaido and TDK wanted to create a CMOS-compatible chip that could be used in edge devices. To implement an artificial neuron, the team designed an analog circuit node. Each node is made up of three components: a non-linear resistor, a memory element based on MOS capacitors, and a buffer amplifier. Their chip consisted of four cores, each core made up of 121 such nodes.

Wiring up the nodes to connect with each other in the complex, recurrent patterns required for a reservoir is difficult. To cut down on the complexity, the team decided on a so-called simple cycle reservoir, with all the nodes connected in one big loop. Prior work has suggested that even this relatively simple configuration is capable of modeling a wide range of complicated dynamics.
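A behavioral sketch of that ring layout might look like the following; the shared ring weight, the tanh nonlinearity, and the leaky-integrator stand-in for the MOS-capacitor memory are all assumptions for illustration, not the paper's circuit model.

```python
import numpy as np

# Behavioral sketch of a simple cycle reservoir: every node connects only
# to the next node, closing one big ring. The weight value and the
# tanh/leak stand-ins for the non-linear resistor and MOS-capacitor
# memory are assumptions, not the chip's actual circuit behavior.
N = 121                  # nodes per core, matching the chip's core size
w = 0.8                  # single shared ring weight (assumed)
W_ring = np.zeros((N, N))
for i in range(N):
    W_ring[(i + 1) % N, i] = w   # node i drives node i+1, closing the loop

rng = np.random.default_rng(2)
W_in = rng.choice([-0.1, 0.1], size=N)   # fixed input weights (assumed)
leak = 0.5                               # capacitor-like memory: state decays slowly

def step(x, u_t):
    # Each node blends its held state with the nonlinearly transformed drive.
    return (1 - leak) * x + leak * np.tanh(W_ring @ x + W_in * u_t)
```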

Using this design, the team was able to build a chip that consumed only 20 microwatts of power per core, or 80 microwatts of power total—significantly less than other CMOS-compatible physical reservoir computing designs, the authors say.

Predicting the future

Aside from defeating humans at rock-paper-scissors, the reservoir computing chip can predict the next step in a time series in many different domains. “If what occurs today is affected by yesterday’s data, or other past data, it can predict the result,” Sasaki says.

The team demonstrated the chip’s abilities on several tasks, including predicting the behavior of a well-known chaotic system known as a logistic map. The team also used the device on the archetypal real-world example of chaos: the weather. For both test cases, the chip was able to predict the next step with remarkable accuracy.
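For reference, the logistic map is the recurrence x[n+1] = r·x[n](1 − x[n]), which is chaotic at r = 4. Below is a software sketch of the one-step prediction task on that map, using an echo-state-style reservoir with illustrative settings rather than the paper's benchmark configuration.

```python
import numpy as np

# Generate the logistic map: x[n+1] = r * x[n] * (1 - x[n]), chaotic at r = 4.
r = 4.0
x = np.empty(1000)
x[0] = 0.2
for n in range(999):
    x[n + 1] = r * x[n] * (1 - x[n])

# Drive an illustrative random reservoir with the series.
rng = np.random.default_rng(3)
N = 50
W_in = rng.uniform(-1, 1, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep dynamics near the edge of chaos

states = np.zeros((1000, N))
s = np.zeros(N)
for n in range(1000):
    s = np.tanh(W_in * x[n] + W @ s)
    states[n] = s

# Fit the readout to predict x[n+1] from the reservoir state at step n.
train = slice(100, 800)                    # discard an initial transient, then train
W_out = np.linalg.lstsq(states[train], x[101:801], rcond=None)[0]
pred = states[800:-1] @ W_out              # one-step predictions on held-out data
rmse = np.sqrt(np.mean((pred - x[801:]) ** 2))
```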

The precision of the prediction is not the main selling point, however. The extremely low power use and low latency offered by the chip could enable a new set of applications, such as real-time learning on wearables and other edge devices.

“I think the prediction is actually the same as the present technology,” Sasaki says. “However, the power consumption, the operation speed, is maybe 10 times better than the present AI technology. That is a big difference.”
