18 November 2009—Scientists and engineers at IBM’s Almaden Research Center, in San Jose, Calif., announced today at the Supercomputing Conference (SC09) in Portland, Ore., that they have created the largest brain simulation to date on a supercomputer. The number of neurons and synapses in the simulation exceeds the number in a cat’s brain; previous simulations have reached only the scale of mouse and rat brains. Experts predict that the simulation will have profound effects in two arenas: It will lead to a better understanding of how the brain’s architecture gives rise to cognition, and it should inspire the design of electronics that mimic the brain’s as-yet-unmatched ability to do complex computation and to learn using a small volume of hardware that consumes little power.
The cortical simulator, called C2, integrates research from the fields of computation, computer memory, communication, and neuroscience to re-create 1 billion neurons connected by 10 trillion individual synapses. C2 runs on “Dawn,” a BlueGene/P supercomputer at Lawrence Livermore National Laboratory, in Livermore, Calif.
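The sheer scale of those numbers is easier to appreciate with a back-of-envelope memory estimate. The per-synapse record size below is an assumption chosen for illustration; the article does not describe C2’s actual data layout.

```python
# Rough memory estimate for storing 10 trillion synapses, assuming a
# compact ~16-byte record per synapse (weight, target index, delay,
# plasticity state). The 16-byte figure is an assumption, not IBM's
# published format.

synapses = 10e12          # synapses in the C2 simulation
bytes_per_synapse = 16    # assumed record size

total_tb = synapses * bytes_per_synapse / 1e12
print(f"~{total_tb:.0f} TB of memory just to hold the synapses")
```

Even under this conservative assumption, the synaptic state alone runs to roughly 160 terabytes, which is why a machine of Dawn’s size is needed at all.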
The research was funded by the U.S. Defense Advanced Research Projects Agency (DARPA), which is spending at least US $40 million to develop an electronic processor that mimics the mammalian brain’s function, size, and power consumption. The DARPA project, called Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, was launched late last year and will continue until 2015 with a goal of a prototype chip simulating 10 billion neurons connected via 1 trillion synapses. The device must use 1 kilowatt or less (about what a space heater uses) and take up less than 2 liters in volume. IBM is one of three groups involved in the project. In addition to $21 million in funds for IBM, DARPA also funded HRL Labs, in Malibu, Calif., and HP Labs, in Palo Alto, Calif.
“Real brains are so impressive to computer scientists,” says Jim Olds, a neuroscientist who directs George Mason University’s Krasnow Institute. “Instead of banging our heads against Moore’s Law, why not build computers more like the brain and get them to solve problems the way the brain does?” Right now, Roadrunner, the supercomputer that comes closest to replicating a human’s ability to drive in rush-hour traffic, weighs 227 metric tons and requires a diet of about 3 megawatts. By contrast, the brain regularly handles rush-hour driving on 20 watts (comparable to the power consumption of a Nintendo Wii), and its 1.5 kilograms fit neatly into a handbag.
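The gap Olds describes can be put in numbers using only the figures quoted above; this is a crude illustration of the efficiency ratios, not a rigorous benchmark comparison.

```python
# Crude power and mass comparison between Roadrunner and the human
# brain, using the figures from the article. "Rush-hour driving" is
# taken as the common workload, so only power and mass are compared.

roadrunner_power_w = 3e6      # about 3 megawatts
roadrunner_mass_kg = 227e3    # 227 metric tons
brain_power_w = 20.0
brain_mass_kg = 1.5

power_ratio = roadrunner_power_w / brain_power_w
mass_ratio = roadrunner_mass_kg / brain_mass_kg

print(f"power: {power_ratio:,.0f}x  mass: {mass_ratio:,.0f}x")
```

By these figures, the brain is roughly 150,000 times more power efficient, and about as many times lighter, at a task neither machine fully masters.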
IBM’s principal investigator for SyNAPSE, Dharmendra Modha, likens the C2 cortical simulator to an electron microscope or a linear accelerator. “It’s a tool that other scientists can now use to better understand how cognition works,” he says.
Olds says the IBM work is extremely promising. “Each neuron in the network is a faithful reproduction of what we now know about neurons,” he says. This in itself is an enormous step forward for neuroscience, but it also allows neuroscientists to do what they have not previously been able to do: rapidly test their own hypotheses on an accurate replica of the brain. “It’s like the Large Hadron Collider in that respect,” says Olds.
The IBM research shows that a model of the human cortex, which has roughly 20 billion neurons connected by about 200 trillion synapses, could be within reach by 2019, given enough processing power. But Johns Hopkins University electrical and computer engineering professor Andreas Andreou says the C2 simulator underscores an undeniable fact: to better understand the brain, we’re going to need a better computer.
A major problem is power consumption. Dawn is one of the most powerful and power-efficient supercomputers in the world, but it needs 500 seconds to simulate 5 seconds of brain activity, a hundredfold slowdown, and it consumes 1.4 megawatts while doing so. Extrapolating from today’s technology trends, IBM projects that the 2019 human-scale simulation, running in real time, would require a dedicated nuclear power plant.
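The nuclear-power-plant claim follows from simple scaling. The sketch below assumes power grows linearly with synapse count and with simulation speed; that is a naive model for illustration, not IBM’s actual projection method.

```python
# Naive extrapolation from the cat-scale C2 run on Dawn to a real-time,
# human-scale simulation. Linear scaling in synapse count and speed is
# assumed purely for illustration.

cat_synapses = 10e12        # synapses in the C2 simulation
human_synapses = 200e12     # synapses in the human brain, per the article
slowdown = 500 / 5          # Dawn: 500 s of wall time per 5 s simulated
dawn_power_mw = 1.4         # megawatts consumed by Dawn

scale_up = human_synapses / cat_synapses           # 20x more synapses
power_mw = dawn_power_mw * scale_up * slowdown     # also remove the slowdown

print(f"~{power_mw:.0f} MW")
```

The result, on the order of 2,800 megawatts, is indeed in the territory of a large nuclear plant’s output.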
The human brain offers tantalizing clues to a better computer architecture. For example, today’s microprocessors separate memory from computation. The brain imposes no such separation, and its circuits are reconfigurable, specialized, and fault tolerant besides. That makes the human brain far better at recognizing faces and driving a car, to take two examples, than the world’s most sophisticated supercomputers. DARPA’s Urban Challenge, which required contestants to develop an autonomous vehicle control system, proved the point: the competition went off the rails faster than anyone expected, with cars idling indefinitely at stop signs or becoming paralyzed by large rocks.
While Andreou, who studies chip architectures for specialized problems, is generally a skeptic about efforts to reverse engineer the brain, what Modha and his team have done is “quite good,” he says. The most interesting thing about the IBM research, Andreou says, is what it bodes for computer architecture—it points to a future of far more specialized electronics.
“I don’t believe in general-purpose parallel computing,” says Andreou. “The way to get to large-scale computing—like a brain simulation or genomics—is to use a specialized chip that will solve a specific class of problems very well. That’s why this [work] is exciting: It gives a lot of really scientific data to actually design such processors.”
Modha says the next step is “to demonstrate brainlike visual perception, decision [making], planning, and navigation in virtual environments” by mimicking how the brain interfaces with sensory organs and with muscles.
IBM is also working separately on nanomaterials that could enable the construction of brainlike chips. In the final phase, it plans to build a system of 100 such chips simulating 100 million neurons and 1 trillion synapses.
DARPA is not the only organization working toward brainlike cognitive systems. The best-known brain project under way in Europe is Blue Brain, at Ecole Polytechnique Fédérale de Lausanne. And the European Union has also lavished €6.7 million ($9.3 million) on a project that will build an artificial mouse brain. The Chinese government, according to a Wall Street Journal report earlier this year, is spending $1.5 million to develop robots whose artificial brains are driven by microcircuits that evolve, learn, and adapt to real-world situations. And in the United States, the National Science Foundation has funded a three-year study at the University of Southern California’s electrical engineering department to develop a synthetic cortex, which will contain carbon-based (as opposed to silicon) nanometer-scale artificial neurons.