The Cat Brain Cliff Notes

What you need to know before deciding whether IBM's simulation is a fraud


Two leading scientists are embroiled in a controversy about a cat brain simulation. At first blush, the topic might seem silly. But the stakes are higher than the tired cat fight jokes would lead you to think. This argument has larger implications for the future of AI research, and particularly for a field called computational neuroscience. The controversy has called into question not only the legitimacy of one researcher's work, but of all brain simulation work. I think it's important to untangle the assumptions and accusations in clear, non-specialized language.

Background

Last week at the 2009 Supercomputing conference, IBM researchers on DARPA's SyNAPSE project, led by Dharmendra Modha, presented a paper about a simulation that they said approached the scale of a cat's brain in terms of the number of neurons and synapses involved. They won the Gordon Bell prize for their efforts. A few days later, neuroscientist Henry Markram sent an open letter refuting Modha's claims and calling his paper a hoax.

First, let's understand the stakes and who's playing. Computational neuroscience—or cognitive computing, or biometaphorical computing—is a field so new that its luminaries have not yet agreed on a name for it. The goal (very simplified) is to build a brain.

But you can't create a brain out of meat any more than you can make it out of popsicle sticks, so you're stuck with computer simulations. That's not something either neuroscientists or computer scientists can do alone. It takes a village of electrical engineers, materials scientists, neuroscientists, biologists, computer scientists, nanotechnology researchers, and more. All these specialists bring different talents and assumptions. As such, the field is a fairly good embodiment of the Tower of Babel story, or the old story of the blind men feeling the elephant and arguing about whether an elephant is a foot or a trunk.

Markram v Modha

Consider Dharmendra Modha and Henry Markram. One is a computer scientist, the other a neuroscientist. "The conflict arises since both species are needed to simulate (computer scientist) a brain (neuroscientist)," says Massimiliano Versace, who works on the SyNAPSE project at the Department of Cognitive and Neural Systems at Boston University. "You need both."

Markram is a brilliant neuroscientist. He discovered spike timing dependent plasticity (STDP). For those of you whose heads just exploded, that means essentially he figured out how neurons learn. STDP is what makes you, a human, able to understand causality: you see a flash of lightning, hear a crack of thunder, watch a tree fall on your house, and you are able to say, "Aw, crap. Now I need to call my insurance company." Markram figured out how our neurons fire together and wire together. He's kind of a big deal.
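The pair-based form of STDP is often written as an exponential learning window: a synapse strengthens when the presynaptic spike arrives just before the postsynaptic one (causal order), and weakens when it arrives just after. Here is a minimal sketch of that rule; the constants (`a_plus`, `a_minus`, `tau_ms`) are illustrative defaults for the sketch, not Markram's measured values:

```python
import math

def stdp_delta_w(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one pre/post spike pair.

    delta_t_ms = t_post - t_pre: positive means the presynaptic spike
    came first (causal) -> potentiation; negative means it came after
    the postsynaptic spike -> depression. The effect decays
    exponentially as the spikes move apart in time.
    """
    if delta_t_ms > 0:
        return a_plus * math.exp(-delta_t_ms / tau_ms)
    elif delta_t_ms < 0:
        return -a_minus * math.exp(delta_t_ms / tau_ms)
    return 0.0
```

The key property is visible in the sign: spike pairs that arrive in causal order push the weight up, reversed pairs push it down, and pairs far apart in time barely matter. That is "fire together, wire together" made quantitative.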

Modha is also kind of a big deal. He developed the adaptive replacement cache (ARC) algorithm, a better way for computers to decide which data to keep in fast memory. While I have no cute example to explain this, I will say that IBM was so thrilled about Modha's work (and the patent it yielded) that they gave him an office at their Almaden research center and carte blanche. Modha decided that he wanted to build a brain. The next year, Almaden's annual conference was on cognitive computing. DARPA soon came knocking.

But the point is this: "It's safe to say that Modha is as brilliant a computer scientist as Markram is a neuroscientist," Versace says.

SyNAPSE v Blue Brain

Their respective projects are as different as the two researchers. Markram's Blue Brain project, at the Ecole Polytechnique Federale de Lausanne, is a simulation running on a (slightly older, smaller) IBM Blue Gene. With his model, Markram is investigating the roots of neurological disorders like Alzheimer's disease, autism and depression. Markram says he is building the entire brain from the ground up, to illuminate how the summed activity of 100 billion neurons can result in consciousness. "Our whole sense of consciousness and memory exists because of the way our brains are wired," he says. He wants to engineer it. He wants a machine that will talk back to him someday, a machine that will be conscious inside the confines of its supercomputer. "That's the secret about Blue Brain," he says. "It will search the Internet by itself; it will search to become more real. Like a Pinocchio."

Modha is working on SyNAPSE, a project that couldn't be more different. With SyNAPSE, DARPA wants to create electronics that take a page out of the brain's book. The stated purpose is to "investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels." Everything you need to know about SyNAPSE is on the Neurdon blog, which is written by Boston University postdoc Versace and Ben Chandler, a PhD student in the same department. BU is a subcontractor to HP and HRL, who compete with IBM on the DARPA project (this is the way most DARPA projects are run: three competitors make everyone work harder).

For an explanation of what makes brains so great, I went to Richard Granger (who is, again, kind of a big deal at the Dartmouth Brain Engineering Laboratory). "Brains somehow carry out hugely parallel algorithms, using very slow, sparsely connected, probabilistic elements, and they somehow scale well as they grow," he says. "Big-brained mammals exhibit capabilities far beyond small brains, and there evidently are not severely diminishing returns." Also, they use a very small amount of power.

If we could learn from the brain, what might we achieve? With circuits as reconfigurable, specialized, and fault tolerant as the brain, a new species of computer would be better at recognizing faces and driving a car than the world's most sophisticated supercomputers. Right now, Roadrunner, the supercomputer that comes closest to replicating a human's ability to drive in rush-hour traffic, weighs 227 metric tons and requires a diet of about 3 megawatts. By contrast, the brain regularly handles rush-hour driving on 20 watts (comparable to the power consumption of a Nintendo Wii), and its 1.5 kilograms fit neatly into your skull.
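The arithmetic behind that comparison is simple enough to check for yourself, using just the figures quoted above:

```python
# Back-of-envelope comparison using the article's figures.
ROADRUNNER_POWER_W = 3e6       # ~3 megawatts
BRAIN_POWER_W = 20             # ~20 watts
ROADRUNNER_MASS_KG = 227_000   # 227 metric tons
BRAIN_MASS_KG = 1.5

power_ratio = ROADRUNNER_POWER_W / BRAIN_POWER_W
mass_ratio = ROADRUNNER_MASS_KG / BRAIN_MASS_KG

print(f"Power: the brain is ~{power_ratio:,.0f}x more frugal")
print(f"Mass:  the brain is ~{mass_ratio:,.0f}x lighter")
```

By these rough numbers the brain does comparable work on about 150,000 times less power and 150,000 times less mass, which is the whole argument for neuromorphic hardware in two lines.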

The upshot is this: Both Modha and Markram's projects are modeling the brain, but for different purposes, with different end results in mind, and so they focus on different aspects of brain modeling.

The Trouble With Models

But what both projects have in common is their goal of figuring out what makes the mammalian brain work the way it does. We are a long way off. As of 2009, no one has yet been able to simulate even the simplest, dumbest animal—a fruit fly, say—in the process of reacting to a drink of sugar water. "Fruit flies are actually really complicated," Chandler explains (see BU's CELEST project for more). "As far as I know, we haven't even created a synthetic aplysia or nematode yet."

Why is that? Jim Olds (who directs George Mason University's Krasnow Institute for Advanced Study, and who is a neuroscientist) explains that what neuroscience is sorely lacking is a unifying principle. "We need an Einstein of neuroscience," he says, "to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity." Here's what he means by that. What aspect of the brain is the most basic element that, when modeled, will result in cognition? Is it a faithful reproduction of the wiring diagram of the brain? Is it the exact ion channels in the neurons?

But it's a chicken and egg problem, because without a brain-specific theory of relativity, there is no way to identify a basic unit of neuroscience. And without a basic unit of neuroscience, you can't build a model. So, to get anywhere, you must abstract. "[Markram] thinks it's the neuron," says Olds. "But what if that's not what makes a brain work as a brain? What if it's the synaptic spine, or the ion channels?" There are even neuroscientists who think you can never model the brain—because in order to do it you'd have to model down to quantum effects.

No one knows whether, to understand consciousness, neuroscience must account for every synaptic detail. "We do not have a definition of consciousness," says Granger. "Or, worse, we have fifteen mutually incompatible definitions."

No Six for You

It should be pretty clear at this point that no one's going to be building a Caprica Six any time soon. Or a cat. Or a fruit fly. But that doesn't mean all attempts at modeling any aspect of the brain are useless, or to use Markram's more loaded language, a hoax.

"To understand the brain—or any complex system—it is unavoidable that it be viewed computationally," Granger says. Granger says we should treat brain circuits as, well, circuits. "Circuits on one hand are physical systems; on the other, they are carrying out algorithms. To understand them does entail extensive simulation; but also computational, algorithm-level analysis, i.e., the elucidation of principles from simulation and experiment. Not just "that" they work, but how and why they work." So, can we arrive at deep insights without building a few starting simulations first?

"We definitely want to build simulations, including large, intermediate, and small-scale models, at low, middle, and high levels," says Granger. "The field has indeed been doing so for many years, and much of what we currently understand (such as the examples given) grew from many such models. The field is making great strides, and work such as Henry's, and many others', are prominent examples."

Hoax or No Hoax?

And ultimately, that is where Modha's work is important. It's not that he's created a cat brain. Every journalist who interviewed him has heard him correct them in the following way: "No, no, it's not a cat brain. A cat-SCALE simulation."

Some would ask whether there is a point to having a cat-scale simulation if it does not model with any great fidelity an actual cat. I think you should ask the Gordon Bell prize people*. Because for computer science, this is indeed a big deal. Neurdon's Ben Chandler says, "I don't think [other neural simulation environments] can handle 147,000 processors or 144 terabytes of memory. Actually mapping a system that large, and getting only a 0.3 percent deviation in workloads across cores is pretty impressive."

"We all want to understand how brain circuits do what they do," Granger says, "in particular, how their computational powers (perception, recognition, memory retrieval) manage to outperform our extant engineering systems. Let's avoid this (vast) discussion for now, and jump to this: To achieve large-scale simulations that exhibit critical characteristics of brain circuits, we will very likely need to address four current roadblock issues: speed, power, scaling, and parallelism."

A teaser for the next post (which will address the first point in Markram's letter): Modha's work addresses all of these.

*And for those calling for the Gordon Bell people to rescind their prize, please recall that the Bell prize is "awarded each year to recognize outstanding achievement in high-performance computing. The purpose of the award is to track the progress over time of parallel computing, with particular emphasis on rewarding innovation in applying high-performance computing to applications in science. Prizes are awarded for peak performance as well as special achievements in scalability."

 

 
