Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
Ten months ago, a group of researchers proposed a “large-scale, international public effort [that] aimed at reconstructing the full record of neural activity across complete neural circuits. This technological challenge,” they said, “could prove to be an invaluable step toward understanding fundamental and pathological brain processes.” The group called this proposed effort “the Brain Activity Map Project,” and in March, it spelled out its vision in an article in the journal ACS Nano.
Last week, President Obama put the weight of the U.S. federal government behind the idea, creating what he called the BRAIN Initiative, where the letters B-R-A-I-N stand for Brain Research through Advancing Innovative Neurotechnologies. The project may ultimately spend [US] $3 billion over the course of a decade, though only $100 million was announced for its first year.
The balance will partly come from private research organizations, including the Salk Institute, the Allen Institute for Brain Science, the Howard Hughes Medical Institute, and the Kavli Foundation.
The ACS Nano article noted that a “large-scale, international public effort” would have to pull together researchers from a wide range of disparate disciplines, including biochemists, computer scientists, chemists, nanoscientists, experimental neuroscientists, and computational neuroscientists.
My guest today is one of those disparate researchers, from one of those private research institutions.
Terry Sejnowski occupies the Francis Crick Chair and is head of the Computational Neurobiology Lab at the Salk Institute. Among his many achievements has been a new model of axonal transmission that explained the symptoms of multiple sclerosis and suggested a new strategy for drugs to treat them. He joins us by phone.
Terry, welcome to the podcast.
Terry Sejnowski: Well, thanks for giving me the opportunity to be here today.
Steven Cherry: I want to quote one other thing from the ACS Nano article. It said, “No general theory of brain function is universally accepted. A fundamental underlying limitation is our persistent ignorance of the topology of the brain’s microcircuitry. This heavily interconnected, intermixed, and dynamical network of different cell types results in a daunting complexity of impenetrable jungles, where many investigators have lost themselves.” Is that a fair statement of the problem: impenetrable jungles where investigators lose themselves?
Terry Sejnowski: Well, this is very embarrassing, but it’s, I think, very close to the truth with regard to the jungle analogy. If you’ve ever seen an electron micrograph of a brain section through the cortex, it looks like spaghetti. And in 3-D, we’re beginning to reconstruct the actual wiring diagram, but it’s a very, very complex, intricate structure. And interestingly, we know much more at the molecular level. If you ask me what the molecules at a synapse are, we can list them all.
We’re beginning to understand a lot about the chemistry and some of the steps that lead to long-term changes in synaptic strength. And we also know a lot at the higher end, which is overall global pictures of the brain, with brain-imaging techniques. But the middle levels, which deal with the millimeter scale at which we have hundreds of thousands of neurons interacting in dense networks, are ones where we really haven’t had techniques to probe. And so this is the area where we need the most progress.
Steven Cherry: President Obama compared the BRAIN Initiative to the Human Genome Project, and the name “Brain Activity Map” reinforces that impression. But it’s a little misleading, isn’t it? I mean, is the actual goal the map or the tools needed to build the map?
Terry Sejnowski: About 80 percent of the project is going to be tool building, and that’s where collaborations between engineers and biologists will be absolutely essential, because the biologists really desperately need to be able to probe and to record and to analyze all the data that will be coming from this very large-scale attempt to crack the circuit.
But this is something that is not going to be as easy in some respects as the Human Genome Project. It’s not discrete. It’s continuous. It’s three-dimensional. There are basic principles we don’t understand yet. But this is why I think the challenge is going to be so important: we simultaneously need to make progress in all those different directions.
Steven Cherry: What about the other 20 percent? There already is an effort to map the brain, right, like the mapping of the genome? And that’s the Human Connectome Project.
Terry Sejnowski: That’s right. And the Human Connectome Project, as I alluded to before, is an attempt to get the wiring diagram, but that’s a static map. In other words, it gives you a schematic, but what it doesn’t tell you is the activity that’s flowing through the map, which is the information that’s being processed. And that really was the goal of the group that put together that article in ACS Nano: we need sensors that can report out electrical activity and chemical activity in many neurons simultaneously, neurons that are interacting with each other in a circuit that is giving rise to some behavior. Not only do we want to passively report, but we also want two-way flow of information. We want to be able to influence the circuit as well.
Steven Cherry: Yeah, maybe you can just give us a better sense of what these future tools are going to be like.
Terry Sejnowski: Well, I can tell you what we’re doing right now and maybe extrapolate from that. One of the most powerful techniques available right now is called optogenetics, and that was discovered just within the last decade. Instead of using electrodes, metal probes, to record the electrical signals, the recording is done optically, with optical probes whose fluorescence changes when a signal, for example an electrical signal or a calcium signal, occurs in a particular neuron. We can pick that up with two-photon microscopy, and with that we can sample about 100 neurons. And that can be done in vivo, which means in an animal that’s actually behaving. So it’s really revolutionized our ability to sample a small number of neurons.
So what we need to do is to scale that up from 100 neurons. Well, there are 100 billion neurons in the human brain, so we’re far from even coming up with a decent sample. What we would like to be able to accomplish at the end of 10 years is to record simultaneously from a million neurons and to be able to influence them. So that’s really the target that we have for the immediate future: to increase by a factor of about 10 000 the scale at which we can probe the brain.
Steven Cherry: And that’ll still be just like a 1/100 000 sampling of the entire brain?
Terry Sejnowski: Well, a million puts you into the realm of a unit of processing in the cortex called a column. There are about 100 000 neurons very tightly interacting in a cubic millimeter of cortex. And if we can record from, say, a dozen of these columns that are, say, in the visual cortex, we’d be able to see the actual coding of how the visual information coming in from the retina activates populations of neurons, how that changes over time, and how that might be related to perception and visual awareness. So that would give us a real insight that we don’t have right now by randomly sampling from a few neurons.
There are also other species that have far fewer neurons. For example, the zebrafish has a larval stage in which the brain is transparent, so ultimately it allows you to record from all the neurons in the brain. And it only has about a million neurons. So that means we would be within range, striking distance, of seeing how a whole brain operates.
Steven Cherry: So you mentioned data before. And what exactly will the computational neuroscientists such as yourself be doing?
Terry Sejnowski: Well, there are very challenging computational problems that are going to have to be overcome, not least of which is simply handling the large amounts of data that will be generated. I organized the meeting at Caltech that you mentioned, and the numbers that we came up with were something on the order of petabytes per year, which is comparable to the data that is being generated at very large projects like the Large Hadron Collider and also the LSST telescope that is being planned.
The plan is to develop a suite of programs that actually run on the server with interfaces that allow researchers who are trying to analyze the data and model the data to have access to it. And in fact, one of the most important goals of the BRAIN Initiative is to not only collect the data but to make it publicly available so that anybody who has a good idea could go in and try to analyze it.
So part of what’s going on is looking for patterns, to understand what the spatiotemporal patterns of activity are, and this is a problem in very high-dimensional spaces. If you’re dealing with, say, a million neurons, you’re dealing with a space that can’t be visualized. It’s just too large. So you really have to use advanced machine learning techniques, ways of projecting the data, and all of those, of course, are now being applied to large data sets in other fields. But the brain is special in the sense that it evolved very complex algorithms that we’re just beginning to appreciate.
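The kind of projection Sejnowski describes can be made concrete with a small sketch. This is a minimal, illustrative example, not any specific BRAIN Initiative pipeline: it applies principal component analysis, via NumPy’s SVD, to a synthetic stand-in for a population recording (random data carrying one shared slow signal) and reduces each time bin to three summary numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a population recording: 1000 time bins x 50 neurons.
# Real data would be binned spike counts or fluorescence traces.
X = rng.normal(size=(1000, 50))
# Give the first five "neurons" a shared slow signal, so there is
# low-dimensional structure for the projection to find.
X[:, :5] += np.outer(np.sin(np.linspace(0, 20, 1000)), np.ones(5))

# Center the data, then project onto the top principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
projected = Xc @ Vt[:k].T                      # each time bin summarized by 3 numbers
explained = (s[:k] ** 2).sum() / (s ** 2).sum()  # fraction of variance those 3 capture

print(projected.shape)  # (1000, 3)
print(round(explained, 2))
```

A real analysis would start from recorded activity rather than random numbers, and would typically layer nonlinear or ICA-style methods on top of this kind of linear projection.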
Steven Cherry: The science writer at Salk who helped set up this interview, Karen Heyman, she told me that in your research you’ve applied information theory to the brain. And I guess that applies to this business of finding patterns. Is that right?
Terry Sejnowski: That’s exactly right. One of the most powerful tools we have right now to understand how information is coded and transferred, say, from the retina into the cortex, is to analyze the spatiotemporal patterns of the spike trains. Everything that you see has to be encoded and ultimately decoded. So this is a project that we worked on back in the 1990s. It was a time when we were developing a signal-processing technique called independent component analysis, or ICA. Independent component analysis is very important when you’re dealing with non-Gaussian signals, which occur in natural images, in speech, and in a lot of biomedical signals coming from neurons.
Steven Cherry: The BRAIN Initiative is also being compared to the race to the moon, but the Apollo program cost something like $200 billion in today’s dollars. This is three or four orders of magnitude less. Do you think the work is being adequately funded, given its ambitions?
Terry Sejnowski: No. But that having been said, that’s just the first year, and the goal is to ramp that up. And, of course, that’s going to depend on the economy and on the willingness of Congress to make it a high priority. And I have to applaud Obama for his vision and his leadership, because without making a special effort and making it a national project, I think it would really be much more difficult to get to the goals that we’re headed for.
I think really what we’re talking about is speeding up the research, creating tools which allow us to make progress much more quickly. And just to make an analogy with the Human Genome Project, the first genome cost the country $3 billion, a dollar per base pair, roughly. But now, 20 years later, the technology has improved, and it’s now possible to sequence a human genome for about $3000, and that’s about a million times improvement in the cost.
So we need to develop techniques that allow us to improve the rate at which we can collect and analyze data by the same high multiplier, hundreds of thousands of times. And that can only be done if we really apply advanced engineering techniques, nanotechnology, to the problem. And that’s why engineering is going to be a very important and essential part of this whole project.
Now, in terms of the actual money, I think that’s a misleading number, and for the following reason. The true measure is the number of people that are working on a project and the brain power that you have applied to the problem. And what the president can do on a national stage, as he just did, is to really focus the spotlight of attention on a problem. And I can tell you right now there are literally hundreds of scientists and engineers at many universities around the country taking stock of what their skills and talents are and how they can bring to bear their insights and ideas. We really need many ideas. We don’t know right now which path is going to lead us towards that goal, so we need to explore hundreds of paths. We need a lot of new ideas.
And so the government, through the National Institutes of Health, through DARPA, through the National Science Foundation, is going to be bringing resources to bear for teams of engineers, biologists, and neuroscientists to explore these new ideas.
And it’s very difficult to get money out of NIH for developing tools. And this has always been a problem. You can get money for studying a biological problem, for making a biological measurement, but creating new tools is not part of the culture of the NIH, and that’s really going to change. And that’s really going to have a huge impact, I believe, over the next 10 years. So it’s much more than just the amount of money; it’s really the brain power and the creativity and the innovation that’s going to come out of this.
Steven Cherry: And I guess the point about the genome is, A, we’re putting the money in the right place, the tools, and B, because of that, going from a million to a billion neurons may not be much more expensive than getting to that first million.
Terry Sejnowski: Yeah, that’s the idea: once you have the technology rolling, you just keep moving on Moore’s Law. And by the way, there is a Moore’s Law for recording. If you go over the data from the last 50 years, starting in 1960 roughly, the doubling time for the number of neurons you can record from simultaneously is about seven and a half years. And I just had a class this morning where I calculated how long it’s going to take us to get to a million if we continue along that path. It’s a very nice straight line on the log plot. It turns out that it’s going to take another 70 years. And I pointed out to the students in the class, who are about 20 years old, that by then they’d be 90, and it would be too late to help their brains. So, yeah, I think we need to get the slope of that Moore’s Law up to something more like doubling every year.
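That extrapolation is simple arithmetic on the doubling time. The sketch below assumes a present-day figure of roughly 1000 simultaneously recorded neurons, a number not stated in the interview, to show how an estimate on the order of 70 years falls out, and what a one-year doubling time would buy instead:

```python
import math

doubling_time_years = 7.5   # observed doubling time for simultaneously recorded neurons
current_neurons = 1000      # assumed present-day state of the art (not given in the interview)
target_neurons = 1_000_000

# Number of doublings needed, then years at the historical rate.
doublings = math.log2(target_neurons / current_neurons)
years = doublings * doubling_time_years
print(round(years))     # 75 -- roughly the "another 70 years" in the interview

# If tooling instead doubled recording capacity every year:
print(round(doublings))  # 10
```

The exact answer shifts with the assumed starting point, but because the dependence is logarithmic, even a tenfold error in `current_neurons` changes the estimate by only about 25 years at the historical rate.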
So if we just extrapolate now on the BRAIN Initiative, what are we going to buy? Well, we hope that we will have a technology 10 years from now that will, for the first time, allow us to go into diseased brains and understand really what’s going wrong. We just don’t have that ability right now. We hope that if we do figure out some basic principles and we do understand what’s wrong, say, in a brain of a person who is clinically depressed, well, if we have ways of influencing the neurons as well as recording, we might be able to go in and help reorganize the activity. In some diseases we know there’s a problem with the balance between excitation and inhibition. And if we can go in and selectively activate, say, one population of neurons that is going to rebalance the cortex, maybe that can help alleviate some of these really severe mental disorders.
So I think there’s going to be hope. I think we need to take our first step, which is developing the tools. The second step is making important discoveries and getting a basic understanding of these brain circuits and what goes wrong in diseases. And the third step, which is the goal, is to be able to use that knowledge and understanding to, if not cure, at least help people who have all these severe problems. And this is something that I think could be done in the next 10 or 20 years. I don’t think this is impossible.
And I think there are people who are skeptical, and I completely understand that when you’re starting out, you don’t know the answers, and Obama was very clear about this. We don’t know what the discoveries are going to be, but unless you look, you’re not going to find something.
Steven Cherry: Well, Terry, my wife’s best friend died of the complications of MS, and then there’s Alzheimer’s and epilepsy and depression, as you mentioned, and Parkinson’s and all the rest. And I suppose there isn’t a listener out there whose life isn’t touched by brain-related diseases.
Terry Sejnowski: It’s pervasive, and it’s really a huge burden on society. It’s estimated over $500 billion a year is spent in health care for mental disorders.
Steven Cherry: So on behalf of everyone, thanks for the work of scientists like you, and thanks for joining us today.
Terry Sejnowski: Well, it was a pleasure, and I look forward to the next 10 years.
Steven Cherry: Very good. We’ve been speaking with Terry Sejnowski of the Salk Institute about the BRAIN Initiative, a new effort to help researchers someday map and model the 3-pound computer we all walk around with in our skulls.
For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.
NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.