Dendrocentric AI Could Run on Watts, Not Megawatts

Artificial intelligence that mimics dendrites could enable powerful AIs to run on smartphones instead of the cloud

Computer illustration of a nerve cell showing the axon, cell body, nucleus, and dendrites. Getty Images

Electronics that mimic the treelike branches that form the network neurons use to communicate with each other could lead to artificial intelligence that no longer requires the megawatts of power available in the cloud. AI will then be able to run on the watts that can be drawn from the battery in a smartphone, a new study suggests.

As the brain-imitating AI systems known as neural networks grow in size and power, they are becoming more expensive and energy-hungry. For instance, to train its state-of-the-art neural network GPT-3, OpenAI spent US $4.6 million to run 9,200 GPUs for two weeks. Generating the energy that GPT-3 consumed during training released as much carbon as 1,300 cars would have spewed from their tailpipes over the same time, says study author Kwabena Boahen, a neuromorphic engineer at Stanford University, in California.

Now Boahen proposes a way for AI systems to boost the amount of information conveyed in each signal they transmit. This could reduce both the energy and space they currently demand, he says.

In a neural network, components called neurons are fed data and cooperate to solve a problem, such as recognizing faces. The neural net repeatedly adjusts the synapses linking its neurons to modify each synapse's "weight," that is, the strength of one neuron's influence over another. The network then determines whether the resulting patterns of behavior are better at finding a solution. Over time, the system discovers which patterns are best at computing results. It then adopts these patterns as defaults, mimicking the process of learning in the human brain. A neural network is called "deep" if it possesses multiple layers of neurons. (For instance, GPT-3 possesses 175 billion weights connecting the equivalent of 8.3 million neurons arranged 384 layers deep.)
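To make the weight-adjustment step concrete, the sketch below (an illustrative example of ordinary gradient descent, not code from the study) nudges a single layer of weights until its outputs match a target; the data, learning rate, and step count are arbitrary choices.

```python
# Minimal sketch (illustrative): a tiny one-layer network adjusts its
# synaptic weights so its outputs better match the targets.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))          # 100 examples, 4 input "neurons"
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = x @ true_w                          # target outputs

w = np.zeros(4)                         # connection weights, initially zero
lr = 0.1                                # learning rate (assumed value)
for step in range(500):
    pred = x @ w                        # output neuron sums weighted inputs
    grad = x.T @ (pred - y) / len(y)    # how each weight should change
    w -= lr * grad                      # strengthen or weaken each connection

print(np.round(w, 2))                   # approximately [1.0, -2.0, 0.5, 3.0]
```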

AI currently advances by performing twice as many computations every two months. However, the electronics industry doubles the number of devices available to perform those operations only once every two years. This has meant that AI is typically limited to the cloud, which can supply the many thousands of processors it needs.
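A quick back-of-the-envelope comparison (my arithmetic, using the growth rates quoted above) shows how far the two curves drift apart over a single two-year chip generation:

```python
# Rough arithmetic (illustrative): compute demand doubling every 2 months
# versus device counts doubling every 2 years, over one 2-year span.
months = 24
compute_growth = 2 ** (months / 2)     # ~4,096x more operations demanded
device_growth = 2 ** (months / 24)     # 2x more devices per chip
print(compute_growth / device_growth)  # ~2,048x gap, filled by using more chips
```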

Previously, one way to reduce the energy costs of computation was to shrink transistors and pack them densely together. However, the return from that strategy is diminishing because the signals between transistors must now travel farther and farther across microchips, and the longer the wires are, the more energy the signals consume. One strategy to shorten these distances is to stack circuits on top of each other in three dimensions, but this approach then reduces the amount of surface area available for dissipating heat.

To solve this problem, Boahen outlines a way for AI systems to send fewer signals while conveying more information in each one. To accomplish this, he suggests that the systems emulate a different part of the biological neuron than the one they currently mimic. Instead of imitating the synapse, the space between neurons, he argues that they should mimic structures known as dendrites.

A biological neuron has three main parts—dendrites, an axon, and a cell body, which resemble the branches, roots, and trunk of a tree, respectively. A dendrite is where a neuron receives signals from other cells—for instance, the axon of another neuron. The synapse is the space that separates a dendrite or axon from another cell.

Dendrites can branch profusely, allowing one neuron to become connected with many others. Previous research found the order in which a dendrite receives signals from its branches governs the strength of its response. When a dendrite receives signals consecutively from its tip to its stem, it responds more strongly than when it receives those signals consecutively from its stem to its tip.
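A toy simulation can illustrate why the ordering matters. In the sketch below (a simplified model of my own, not the biophysics in the paper), each input travels toward the cell body at one compartment per time step, and the response grows supralinearly with how many inputs arrive at the same moment, so the tip-to-stem order produces the strongest reply.

```python
# Toy model (illustrative assumption, not the paper's model): inputs to a
# chain of dendritic compartments propagate toward the cell body at one
# compartment per time step; squaring the peak rewards coincident arrival.
def soma_response(arrival_times, n=5):
    # arrival_times[i]: time step at which compartment i receives its input
    # compartment 0 sits next to the cell body; compartment n-1 is the tip
    at_soma = [arrival_times[i] + i for i in range(n)]       # +i travel steps
    peak = max(sum(a == t for a in at_soma) for t in range(2 * n))
    return peak ** 2                                         # supralinear summation

n = 5
tip_to_stem = {i: n - 1 - i for i in range(n)}  # tip fires first, stem last
stem_to_tip = {i: i for i in range(n)}          # stem fires first, tip last
print(soma_response(tip_to_stem))  # 25: all five inputs coincide at the soma
print(soma_response(stem_to_tip))  # 1: inputs trickle in one at a time
```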

Illustration shows a multilayered device with 5 vertical gates. In this concept drawing of a dendrite-like nanoscale device, voltage pulses applied consecutively to all five gates from left to right flip all electric dipoles in the ferroelectric insulating layer from down to up. Stanford University/Nature

Based on these findings, Boahen developed a computational model of a dendrite that responded only if it received signals from neurons in a precise sequence. This means that each dendrite could encode data in more than just base two (one or zero, on or off), as is the case with today's electronic components. It could use much higher bases, depending on the number of connections it has and the length of the sequences of signals it receives.
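A rough count (my back-of-the-envelope estimate, not a figure from the paper) suggests how much more a sequence-sensitive element could encode: distinguishing the order of k inputs separates k! possibilities, or log2(k!) bits, versus a single bit for an on/off switch.

```python
# Back-of-the-envelope estimate (illustrative): an on/off device encodes
# 1 bit, while an element that distinguishes the order of k inputs can
# separate k! sequences, i.e. log2(k!) bits.
from math import factorial, log2

for k in (2, 5, 10):
    orderings = factorial(k)
    print(f"{k} inputs: {orderings} orderings, about {log2(orderings):.1f} bits")
# 2 inputs: 2 orderings, about 1.0 bits
# 5 inputs: 120 orderings, about 6.9 bits
# 10 inputs: 3628800 orderings, about 21.8 bits
```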

Boahen suggests that a string of ferroelectric capacitors could emulate a stretch of dendrite and replace the gate stack of a field-effect transistor to form a ferroelectric FET (FeFET). A 1.5-micrometer-long FeFET with five gates could emulate a 15-µm-long stretch of dendrite with five synapses, he says.
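One way to picture the sequence sensitivity of such a device, guessing at the behavior shown in the concept drawing above rather than the actual device physics, is as a chain of dipoles in which each one flips only after its left-hand neighbor has flipped, so only the left-to-right pulse order switches the whole channel.

```python
# Toy state machine (my illustrative assumption, not the device physics):
# the dipole under gate i flips up only if gate i is pulsed while the
# dipole under gate i-1 is already up, so only the left-to-right pulse
# sequence flips the entire ferroelectric layer.
def pulse_gates(order, n_gates=5):
    up = [False] * n_gates
    for g in order:
        if g == 0 or up[g - 1]:
            up[g] = True
    return up

print(pulse_gates([0, 1, 2, 3, 4]))  # all True: whole channel switched
print(pulse_gates([4, 3, 2, 1, 0]))  # only the first dipole flips
```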

In the human brain, a single neuron can be connected with thousands of other neurons. An artificial version of this may prove “feasible in a 3D chip,” Boahen says.

Boahen and his colleagues now have a $2 million National Science Foundation grant to explore this “dendrocentric learning” approach. He detailed the concept 30 November in the journal Nature.

