Neuromorphic computer chips meant to mimic the neural network architecture of biological brains have generally fallen short of their wetware counterparts in efficiency—a crucial factor that has limited practical applications for such chips. That could be changing. At a power density of just 20 milliwatts per square centimeter, IBM’s new brain-inspired chip comes tantalizingly close to such wetware efficiency. The hope is that it could bring brainlike intelligence to the sensors of smartphones, smart cars, and—if IBM has its way—everything else.
The latest IBM neurosynaptic computer chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package: memory, computation, and communication. Such architecture helps to bypass the bottleneck in traditional von Neumann computing, where program instructions and operation data cannot pass through the same route simultaneously.
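The headline numbers are self-consistent: each of the 4,096 cores pairs 256 digital neurons with a 256-by-256 crossbar of programmable synapses. A quick arithmetic check (an illustrative sketch, not IBM code):

```python
# Published TrueNorth figures: 4,096 neurosynaptic cores, each with
# 256 digital neurons and a 256 x 256 crossbar of programmable synapses.
cores = 4096
neurons_per_core = 256
synapses_per_core = neurons_per_core * neurons_per_core  # 65,536 per core

total_neurons = cores * neurons_per_core    # 1,048,576 -- the "1 million neurons"
total_synapses = cores * synapses_per_core  # 268,435,456 -- the "256 million synapses"

print(total_neurons, total_synapses)
```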
“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” says Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.
Such chips can emulate the human brain’s ability to recognize different objects in real time; TrueNorth showed it could distinguish among pedestrians, bicyclists, cars, and trucks. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing a dose of brainlike intelligence. The chip’s architecture, developed together by IBM and Cornell University, was first detailed in August in the journal Science.
“The impressive aspects of TrueNorth are the integration density—a million neurons on a single, admittedly very big, chip—and the very low power consumption for this many neurons,” says Steve Furber, a professor of computer engineering at the University of Manchester, in England, who is behind a competing effort [see “To Build a Brain,” IEEE Spectrum, August 2012].
With a total of 5.4 billion transistors, the computer chip is one of the largest CMOS chips ever built. Yet it uses just 70 mW in operation and has a power density about 1/10,000 that of most modern microprocessors. That brings neuromorphic engineering closer to the human brain’s marvelous efficiency as a grapefruit-size organ that consumes just 20 W.
IBM minimized power usage in several ways. For one, it traded the traditional processor’s clock—used to trigger and coordinate computational processes—for a more biological concept called event-driven computing. TrueNorth’s digital neurons can work together asynchronously without a clock by reacting to signal spikes, which are the output of both real neurons and silicon ones.
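The idea behind event-driven computing can be illustrated with a toy spiking network: neurons sit idle and consume no compute until a spike event arrives, rather than being polled on every tick of a global clock. This is a minimal sketch of the general technique, not IBM's actual neuron model (names and thresholds here are invented for illustration):

```python
from collections import deque

# Toy event-driven spiking network (illustrative only; not TrueNorth's design).
# Work happens only when a spike event exists -- there is no clocked update loop.
class Neuron:
    def __init__(self, threshold=1.0):
        self.potential = 0.0       # accumulated input
        self.threshold = threshold
        self.targets = []          # fan-out: (downstream neuron, synapse weight)
        self.fires = 0             # count of emitted spikes

    def receive(self, weight, events):
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            self.fires += 1
            for tgt, w in self.targets:
                events.append((tgt, w))  # emit a spike event downstream

# Two-neuron chain: one input spike into n1 propagates and fires n2.
n1, n2 = Neuron(), Neuron()
n1.targets.append((n2, 1.5))

events = deque([(n1, 1.2)])        # a single external input spike
while events:                      # the system is active only while spikes exist
    neuron, weight = events.popleft()
    neuron.receive(weight, events)

print(n1.fires, n2.fires)
```

When no spikes are in flight, the loop simply exits; an asynchronous chip behaves analogously, with idle circuits drawing little power.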
IBM also saved on power through the design of an on-chip network that interconnects all the chip’s neurosynaptic cores instead of using extra power to communicate with off-chip memory. And finally, it made the chip using a process technology meant for producing low-power mobile processors.
One brainlike feature that IBM chose not to mimic in its drive to reduce power consumption was analog neurons: TrueNorth's neurons are entirely digital. The all-digital choice led to a number of advantages. First, IBM dodged the problem of slight variations in the manufacturing process and temperature fluctuations, both of which have an outsize effect on analog circuits.
Second, the lack of analog circuitry allowed the IBM team to dramatically shrink its hardware. Many experimental neuromorphic chips still use analog circuits that must be built using a process that on the Moore’s Law curve is more than a decade behind the process used today, Furber explains. By comparison, IBM fabricated its chip using Samsung’s 28-nanometer process technology—typical for manufacturing chips for today’s mobile devices.
And finally, the digital design enabled TrueNorth’s hardware to become functionally equivalent to its software—a factor that allowed the IBM software team to build TrueNorth applications on a simulator before the chip itself had been built.
The chip represents the culmination of a decade of Modha’s personal research and almost six years of funding from the U.S. Defense Advanced Research Projects Agency (DARPA). Modha continues to lead DARPA’s SyNAPSE project, a global effort that has committed more than US $100 million since 2008 to making computers that can learn. “Our long-term end goal is to build a ‘brain in a box’ with 100 billion synapses consuming 1 kilowatt of power,” Modha says.
But that brain has a different purpose than other, similar projects. TrueNorth's digital circuits are designed with commercial applications in mind; other projects aim chiefly to better understand how the brain works.
SpiNNaker, a neural network based on digital circuits and led by Furber, uses a general-purpose parallel computing system tuned to run neurons modeled in software rather than implemented in hardware. The SpiNNaker team, which soon plans to simulate 100 million neurons using 100,000 chips, sacrificed the efficiency of dedicated hardware to gain the flexibility of software: “Our primary goal is to understand biology, so the flexibility is important in understanding the biological brain,” Furber says. For applications-focused IBM, “the efficiency and the density of their chip is perhaps more important than the flexibility they’ve retained.”
This article originally appeared in print as “IBM’s New Brain.”
A correction to this article was made on 02 October 2014.
Jeremy Hsu has been working as a science and technology journalist in New York City since 2008. He has written on subjects as diverse as supercomputing and wearable electronics for IEEE Spectrum. When he’s not trying to wrap his head around the latest quantum computing news for Spectrum, he also contributes to a variety of publications such as Scientific American, Discover, Popular Science, and others. He is a graduate of New York University’s Science, Health & Environmental Reporting Program.