

New Math for Artificial Neurons

Floating-point processors in FPGAs make for artificial neurons quick enough to communicate with real ones


2 September 2009—Computer hardware that can simulate brain function could bring greater understanding of how the brain develops and works and may even lead to ways of repairing brain damage caused by injury or disease. But because the activity of each neuron is so complex, it’s been difficult to simulate with great detail or in real time.

Now researchers at the University of Bristol, in England, say they’ve come up with a method to model neural activity with enough detail and speed for living cells to talk to synthetic neurons. “We want to create an artificial brain that can communicate with a real brain,” says José Nuñez-Yañez, a senior lecturer in electronic engineering in Bristol’s Centre for Communications Research.

Nuñez-Yañez says previous efforts to model neural activity have relied on supercomputers and on generalized processors that don’t necessarily work well in parallel. What these machines model is so complex that it can take 30 days to process one second of activity. Instead, Nuñez-Yañez uses field-programmable gate arrays (FPGAs) that rely on floating-point processors, with perhaps 1000 processors running in parallel. (Floating-point mathematics represents numbers in a computer in a way that allows the decimal point to be placed in different positions—or “float”—in relation to the significant digits in the number, such as in the numbers 1.23456 and 12345.6. Compared with fixed-point representation, where the decimal is set in one particular place in a string of digits, floating-point representation results in smaller rounding errors and therefore can be more precise.)
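To make the fixed- versus floating-point distinction concrete, here is a minimal sketch (a hypothetical illustration, not the Bristol group’s code) that rounds a value onto a fixed-point grid with a set number of fractional bits and compares it with the same value held in ordinary floating point:

```python
# Minimal sketch of fixed- versus floating-point rounding (illustrative only).

def to_fixed(x, frac_bits=8):
    """Nearest value representable with frac_bits bits after the binary point."""
    scale = 1 << frac_bits             # 2**frac_bits
    return round(x * scale) / scale

value = 1.23456
print(to_fixed(value))        # 1.234375 -- absolute error of roughly 1.9e-4
print(to_fixed(value * 10))   # 12.34375 -- same grid spacing, so the relative
                              # precision drops as the number grows
print(value)                  # 1.23456  -- floating point rescales its exponent,
                              # keeping the relative error around 1e-16
```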

That precision is important for modeling neurons, Nuñez-Yañez says. Neurons communicate by exchanging spikes of voltage, which last a few milliseconds and may peak at about 70 millivolts. Scientists don’t really know how the spikes convey information, but they believe the timing of the spikes is important. In a fixed-point model, rounding errors accumulate, leading to a significant shift in the timing of the spikes. The floating-point model, Nuñez-Yañez says, is much more accurate. In addition, the precision gained using floating-point arithmetic allows for the creation of more complex, morphologically accurate three-dimensional neuronal models.
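As a toy illustration of that timing argument (a leaky integrate-and-fire neuron, not the published Bristol model), the sketch below steps the same membrane equation forward twice, once in double precision and once with the voltage truncated to a fixed-point grid on every update, as inexpensive fixed-point hardware often does, and reports when each version first crosses threshold:

```python
# Toy leaky integrate-and-fire neuron (hypothetical; not the Bristol model).
# Truncating the membrane voltage to a fixed-point grid on every update
# shifts the moment the cell reaches threshold -- that is, the spike time.
import math

def first_spike_time(frac_bits=None, dt=1e-4, tau=20e-3,
                     v_drive=0.03, v_thresh=0.02, max_steps=100_000):
    """Time in seconds of the first threshold crossing, or None if none occurs."""
    # Optional truncation of the state to a grid of 2**-frac_bits volts.
    q = (lambda v: math.floor(v * (1 << frac_bits)) / (1 << frac_bits)) \
        if frac_bits is not None else (lambda v: v)
    v = 0.0
    for step in range(1, max_steps + 1):
        v = q(v + dt / tau * (v_drive - v))   # forward-Euler update, then truncate
        if v >= v_thresh:
            return step * dt
    return None

print(first_spike_time())               # double-precision reference, about 22 ms
print(first_spike_time(frac_bits=15))   # coarse fixed point: the spike comes late
```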

Nuñez-Yañez and his colleagues described the simulation of an artificial neuron in a chip this week at the 19th International Conference on Field Programmable Logic and Applications, in Prague. The next step is to have a biological neuron talking to an artificial one. The idea is to take slices of mouse brain and incubate them on top of a sensor. After about two weeks of growth, the neurons will begin communicating as they would in the brain. The sensor, an Aptina complementary metal-oxide-semiconductor (CMOS) image detector altered to measure voltage instead of light, would send its measurements through an analog-to-digital converter, which in turn would feed the data to the FPGAs. In the computer, the electrical activity of different parts of the neural cell is represented mathematically, and a set of differential equations takes the input data and simulates the activity of each part.
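The general shape of such a model is sketched below. This is a generic passive compartmental model written purely for illustration, with made-up parameters and with random numbers standing in for the digitized sensor samples; it is not the group’s published set of equations:

```python
# Generic passive compartmental model (illustrative only; hypothetical parameters).
# Each part of the cell is a compartment with its own voltage, coupled to its
# neighbours; a forward-Euler step advances the differential equations using
# whatever input current the sensor chain delivers.
import numpy as np

def step_compartments(v, i_inject, dt=1e-5, g_leak=1e-8, g_axial=5e-8, c_m=1e-10):
    """Advance the compartment voltages by one time step dt."""
    i_axial = np.zeros_like(v)
    i_axial[:-1] += g_axial * (v[1:] - v[:-1])    # current from the next compartment
    i_axial[1:]  += g_axial * (v[:-1] - v[1:])    # current from the previous one
    dv = (i_inject + i_axial - g_leak * v) / c_m  # membrane equation per compartment
    return v + dt * dv

v = np.zeros(5)                                   # five compartments at rest
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1e-10, size=(1000, 5))  # stand-in for ADC output
for i_inject in samples:                          # one row per converter sample
    v = step_compartments(v, i_inject)
print(v)                                          # final voltage of each compartment
```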

The simulated neurons can even grow new dendrites or new synapses in response to the stimulus from the real brain cells. The activity of the simulated neurons can then be fed to the living ones in the form of spikes of voltage delivered through the CMOS sensor.

In addition to teaching scientists about brain development, the project might, for example, allow doctors to limit the damage of Parkinson’s disease, says Joe McGeehan, director of Bristol’s Centre for Communications Research. Brain damage can be worsened, he says, when healthy cells don’t get expected feedback because neighboring neurons have been damaged. Artificially providing that feedback should prevent the damage from spreading.

Ted Carnevale, a senior research scientist at Yale’s School of Medicine, says an FPGA model of neural activity has its limitations. “It isn’t particularly useful as a tool for studying how the detailed anatomical and biophysical properties of neurons shape the operation of cells and neural circuits,” he says. But what it can do is provide real-time results, allowing the connection of real and synthetic neurons. “FPGA models would certainly allow for larger and more complex circuits. Such work is still at an early stage, but this is probably where the payoff will lie,” Carnevale says.

About the Author

Neil Savage writes about technology from Lowell, Mass. In the September 2009 issue of IEEE Spectrum, he explained how transistors with a tunable band gap can be made from graphene.
