Electronics that mimic the treelike branches neurons use to communicate with one another could lead to artificial intelligence that no longer requires the megawatts of power available only in the cloud. Instead, AI could run on the few watts drawn from a smartphone battery, a new study suggests.
As the brain-imitating AI systems known as neural networks grow in size and power, they are becoming more expensive and energy-hungry. For instance, to train its state-of-the-art neural network GPT-3, OpenAI spent US $4.6 million to run 9,200 GPUs for two weeks. Generating the energy that GPT-3 consumed during training released as much carbon as 1,300 cars would have spewed from their tailpipes over the same time, says study author Kwabena Boahen, a neuromorphic engineer at Stanford University, in California.
Now Boahen proposes a way for AI systems to boost the amount of information conveyed in each signal they transmit. This could reduce both the energy and space they currently demand, he says.
In a neural network, components called neurons are fed data and cooperate to solve a problem, such as recognizing faces. The neural net repeatedly adjusts the synapses linking its neurons to modify each synapse’s “weight”—that is, the strength of one neuron’s influence over another. The network then determines whether the resulting patterns of behavior are better at finding a solution. Over time, the system discovers which patterns are best at computing results and adopts them as defaults, mimicking the process of learning in the human brain. A neural network is called “deep” if it possesses multiple layers of neurons. (For instance, GPT-3 possesses 175 billion weights connecting the equivalent of 8.3 million neurons arranged 384 layers deep.)
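The weight-adjustment loop described above can be sketched in a few lines. This is a minimal illustration of training a single synaptic weight by trial and error, not code from the study; the learning rate, data, and function names are all invented for the example.

```python
import random

# Minimal sketch of the training loop described above: a single "synapse"
# weight is nudged until the neuron's output matches a target.

def train_weight(inputs, targets, lr=0.1, epochs=100):
    w = random.uniform(-1, 1)  # initial synaptic weight
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = w * x              # one neuron's influence on another
            w += lr * (t - y) * x  # strengthen or weaken the synapse
    return w

# Learn the mapping y = 2x from three examples.
w = train_weight([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 2))  # converges near 2.0
```

A real deep network does this across billions of weights at once, which is where the energy cost comes from.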
The amount of computation used by state-of-the-art AI currently doubles every two months. The electronics industry, however, doubles the number of devices available to perform those operations only once every two years. This has meant that AI is typically limited to the cloud, which can provide the many thousands of processors it needs.
Previously, one way to reduce the energy costs of computation was to shrink transistors and pack them densely together. However, the return from that strategy is diminishing because the signals between transistors must now travel farther and farther across microchips, and the longer the wires are, the more energy the signals consume. One strategy to shorten these distances is to stack circuits on top of each other in three dimensions, but this approach then reduces the amount of surface area available for dissipating heat.
To solve this problem, Boahen outlines a way for AI systems to send fewer signals while conveying more information in each one. To accomplish this, he suggests the systems emulate a different part of the biological neuron than they currently do. Instead of imitating the synapse—the space between neurons—he argues that they should mimic structures known as dendrites.
A biological neuron has three main parts—dendrites, an axon, and a cell body, which resemble the branches, roots, and trunk of a tree, respectively. A dendrite is where a neuron receives signals from other cells—for instance, the axon of another neuron. The synapse is the space that separates a dendrite or axon from another cell.
Dendrites can branch profusely, allowing one neuron to connect with many others. Previous research found that the order in which a dendrite receives signals along its branches governs the strength of its response: when a dendrite receives signals consecutively from its tip to its stem, it responds more strongly than when it receives those same signals consecutively from its stem to its tip.
[Figure] In this concept drawing of a dendrite-like nanoscale device, voltage pulses applied consecutively to all five gates, from left to right, flip all the electric dipoles in the ferroelectric insulating layer from down to up. Stanford University/Nature
Based on these findings, Boahen developed a computational model of a dendrite that responds only if it receives signals from neurons in a precise sequence. This means that each dendrite could encode data in more than just base two—one or zero, on or off—as is the case with today’s electronic components. It could instead use much higher bases, depending on the number of connections it has and the length of the signal sequences it receives.
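A toy version of this idea makes the capacity gain concrete. The sketch below is my own illustration of the principle, not Boahen's model: a "dendrite" that responds only to one exact ordering of its inputs, plus a count of how many orderings five order-sensitive synapses can distinguish compared with a binary switch.

```python
from math import factorial

# Toy dendrite: responds only if its synapses fire in one exact order.
def dendrite_response(preferred, observed):
    """Return True only when signals arrive in the preferred sequence."""
    return list(observed) == list(preferred)

# Hypothetical tip-to-stem ordering of five synapses.
preferred = ("tip", "mid1", "mid2", "mid3", "stem")

print(dendrite_response(preferred, preferred))                   # True
print(dendrite_response(preferred, tuple(reversed(preferred))))  # False

# A binary component distinguishes 2 states; a dendrite sensitive to the
# order of 5 inputs can in principle distinguish up to 5! sequences.
print(factorial(5))  # 120
```

The point of the toy model is simply that ordering carries information: one order-sensitive element can stand in for many binary ones.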
Boahen suggests that a string of ferroelectric capacitors could emulate a stretch of dendrite and replace the gate stack of a field-effect transistor to form a ferroelectric FET (FeFET). A 1.5-micrometer-long FeFET with five gates could emulate a 15-µm-long stretch of dendrite with five synapses, he says.
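The figure's left-to-right pulse sequence can be caricatured in code. The rule below—a dipole flips up only if the dipole to its left is already up—is a deliberate simplification invented for this sketch, not the actual ferroelectric device physics; it just shows how a chain of gates can be made sensitive to pulse order.

```python
# Cartoon of a five-gate ferroelectric FET: the device "fires" only when
# its gates are pulsed in left-to-right order. The neighbor rule here is
# an illustrative assumption, not the real switching mechanism.

def apply_pulses(pulse_order, n_gates=5):
    dipoles = [False] * n_gates  # all dipoles start pointing down
    for gate in pulse_order:
        if gate == 0 or dipoles[gate - 1]:
            dipoles[gate] = True  # flips up only if its left neighbor is up
    return all(dipoles)  # conducts only when every dipole has flipped

print(apply_pulses([0, 1, 2, 3, 4]))  # left-to-right order: True
print(apply_pulses([4, 3, 2, 1, 0]))  # right-to-left order: False
```

Under this toy rule, only the one correct pulse sequence switches the device fully on, mirroring the dendrite's preference for tip-to-stem signaling.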
In the human brain, a single neuron can be connected with thousands of other neurons. An artificial version of this may prove “feasible in a 3D chip,” Boahen says.
Boahen and his colleagues now have a $2 million National Science Foundation grant to explore this “dendrocentric learning” approach. He detailed the concept 30 November in the journal Nature.
Charles Q. Choi is a science reporter who contributes regularly to IEEE Spectrum. He has written for Scientific American, The New York Times, Wired, and Science, among others.