2 September 2009—Computer hardware that can simulate brain function could bring greater understanding of how the brain develops and works and may even lead to ways of repairing brain damage caused by injury or disease. But because the activity of each neuron is so complex, it’s been difficult to simulate with great detail or in real time.
Now researchers at the University of Bristol, in England, say they’ve come up with a method to model neural activity with enough detail and speed for living cells to talk to synthetic neurons. "We want to create an artificial brain that can communicate with a real brain," says José Nuñez-Yañez, a senior lecturer in electronic engineering in Bristol’s Centre for Communications Research.
Nuñez-Yañez says previous efforts to model neural activity have relied on supercomputers and on general-purpose processors that don’t necessarily work well in parallel. What these machines model is so complex that it can take 30 days to process one second of activity. Instead, Nuñez-Yañez uses field-programmable gate arrays (FPGAs) that rely on floating-point processors, with perhaps 1000 processors running in parallel. (Floating-point mathematics represents numbers in a computer in a way that allows the decimal point to be placed in different positions—or "float"—in relation to the significant digits in the number, such as in the numbers 1.23456 and 12345.6. Compared with fixed-point representation, where the decimal is set in one particular place in a string of digits, floating-point representation results in smaller rounding errors and therefore can be more precise.)
That precision is important for modeling neurons, Nuñez-Yañez says. Neurons communicate by exchanging spikes of voltage, which last a few milliseconds and may peak at about 70 millivolts. Scientists don’t really know how the spikes convey information, but they believe the timing of the spikes is important. In a fixed-point model, rounding errors accumulate, leading to a significant shift in the timing of the spikes. The floating-point model, Nuñez-Yañez says, is much more accurate. In addition, the precision gained using floating-point arithmetic allows for the creation of more complex, morphologically accurate three-dimensional neuronal models.
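The accumulation effect is easy to see in a toy calculation (this is an illustrative sketch, not the Bristol group's code): advance a simulation clock by a small time step thousands of times, once in double-precision floating point and once with each value rounded to a hypothetical 16-bit fixed-point format. The per-step rounding error in the fixed-point clock compounds, shifting every later event time.

```python
# Illustrative sketch: drift from fixed-point rounding vs. floating point.
# The 16-fractional-bit format and 0.1 ms step are assumptions for
# demonstration, not parameters from the Bristol system.

def to_fixed(x, frac_bits=16):
    """Round x to the nearest multiple of 2**-frac_bits (fixed-point quantization)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

dt = 0.0001        # 0.1 ms integration step, in seconds
steps = 10_000     # one second of simulated time

t_float = 0.0
t_fixed = 0.0
dt_fixed = to_fixed(dt)       # dt is already slightly wrong after quantization
for _ in range(steps):
    t_float += dt
    t_fixed = to_fixed(t_fixed + dt_fixed)

print(f"float clock: {t_float:.6f} s")   # essentially 1.0 s
print(f"fixed clock: {t_fixed:.6f} s")   # drifts visibly away from 1.0 s
print(f"drift:       {abs(t_fixed - t_float) * 1000:.1f} ms")
```

A few tens of milliseconds of drift per simulated second would be enormous on the timescale of spikes that last only a few milliseconds, which is why the spike-timing argument favors floating point.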
Nuñez-Yañez and his colleagues described the simulation of an artificial neuron in a chip this week at the 19th International Conference on Field Programmable Logic and Applications, in Prague. The next step is to have a biological neuron talking to an artificial one. The idea is to take slices of mouse brain and incubate them on top of a sensor. After about two weeks of growth, the neurons will begin communicating as they would in the brain. The sensor, an Aptina complementary-metal-oxide-semiconductor (CMOS) image detector altered to measure voltage instead of light, would send its measurements through an analog-to-digital converter, which in turn would feed the data to the FPGAs. In the computer, the electrical activity of different parts of the neural cell is represented mathematically, and a set of differential equations takes the input data and simulates the activity of each part.
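To give a flavor of what "a set of differential equations simulates the activity of each part" means, here is a minimal sketch of the simplest such model, a single leaky integrate-and-fire neuron stepped forward with Euler integration. It is not the Bristol group's model, which is multi-compartmental and far more detailed; all parameter values below are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire sketch (not the Bristol model).
# Membrane voltage follows  dV/dt = (-(V - V_rest) + R*I) / tau;
# crossing threshold emits a spike and resets the voltage.

V_REST = -70.0    # resting potential, mV
V_THRESH = -54.0  # spike threshold, mV
V_RESET = -80.0   # post-spike reset voltage, mV
TAU = 10.0        # membrane time constant, ms
R = 10.0          # membrane resistance, megaohms (so MΩ * nA = mV)
DT = 0.1          # integration time step, ms

def simulate(current_nA, duration_ms=100.0):
    """Return the times (in ms) at which the model neuron spikes."""
    v = V_REST
    spikes = []
    for i in range(int(duration_ms / DT)):
        dv = (-(v - V_REST) + R * current_nA) / TAU  # the differential equation
        v += DT * dv                                  # one Euler step
        if v >= V_THRESH:
            spikes.append(i * DT)  # record the spike time
            v = V_RESET            # reset the membrane
    return spikes

print(simulate(2.0))  # a steady 2 nA input drives a regular spike train
print(simulate(0.0))  # no input, no spikes
```

A compartmental model of the kind the article describes would couple many equations like this one, each representing a section of the cell's membrane, with the sensor's voltage measurements entering as the input current.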
The simulated neurons can even grow new dendrites or new synapses in response to the stimulus from the real brain cells. The activity of the simulated neurons can then be fed to the living ones in the form of spikes of voltage delivered through the CMOS sensor.