
AI-Powered Rat Could Be a Valuable New Tool for Neuroscience

Researchers from DeepMind and Harvard are using a virtual rat to see what neural networks can teach us about biology

3 min read
The virtual rodent was trained to solve tasks including jumping over gaps, foraging in a maze, and touching a ball twice with a forepaw with a precise timing interval between touches.
Images: DeepMind and Harvard University

Can we study AI the same way we study lab rats? Researchers at DeepMind and Harvard University seem to think so. They built an AI-powered virtual rat that can carry out multiple complex tasks. Then, they used neuroscience techniques to understand how its artificial “brain” controls its movements.

Today’s most advanced AI is powered by artificial neural networks—machine learning algorithms made up of layers of interconnected components called “neurons” that are loosely inspired by the structure of the brain. While they operate in very different ways, a growing number of researchers believe drawing parallels between the two could both improve our understanding of neuroscience and make smarter AI.
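The layered structure described above can be illustrated with a toy sketch. This is not the rodent's actual controller, just a minimal NumPy example of "neurons" arranged in layers, each summing weighted inputs and applying a nonlinearity; all sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each "neuron" sums its weighted inputs and applies a nonlinearity,
    # loosely analogous to a biological neuron's response.
    return np.tanh(inputs @ weights + biases)

# A tiny two-layer network: 4 inputs -> 8 hidden neurons -> 2 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))        # one observation
hidden = layer(x, w1, b1)          # hidden-layer activity, shape (1, 8)
output = layer(hidden, w2, b2)     # network output, shape (1, 2)
```

Stacking more such layers, and learning the weights from data, is what "deep learning" refers to.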

Now the authors of a new paper due to be presented this week at the International Conference on Learning Representations have created a biologically accurate 3D model of a rat that can be controlled by a neural network in a simulated environment. They also showed that they could apply neuroscience techniques developed for analyzing biological brain activity to understand how the neural net controlled the rat’s movements.

The platform could be the neuroscience equivalent of a wind tunnel, says Jesse Marshall, coauthor and postdoctoral researcher at Harvard, by letting researchers test different neural networks with varying degrees of biological realism to see how well they tackle complex challenges.

“Typical experiments in neuroscience probe the brains of animals performing single behaviors, like lever tapping, while most robots are tailor-made to solve specific tasks, like home vacuuming,” he says. “This paper is the start of our effort to understand how flexibility arises and is implemented in the brain, and use the insights we gain to design artificial agents with similar capabilities.”

The virtual rodent features muscles and joints based on measurements from real-life rats, as well as vision and a sense of proprioception, which refers to the feedback system that tells animals where their body parts are and how they’re moving. The researchers then trained a neural network to guide the rat through four tasks—jumping over a series of gaps, foraging in a maze, trying to escape a hilly environment, and performing precisely timed pairs of taps on a ball.
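The control loop described above can be sketched in miniature. Everything below is hypothetical, including the observation sizes and the number of actuators; the real system uses a trained deep network inside a physics simulator, not the single linear layer used here as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observation for a virtual rodent: proprioceptive state
# (joint angles and velocities) plus a small flattened egocentric image.
proprioception = rng.normal(size=38)        # illustrative size
vision = rng.normal(size=(16, 16)).ravel()  # illustrative egocentric view
observation = np.concatenate([proprioception, vision])

# Stand-in "policy": one linear map from observations to a torque for
# each actuated joint, squashed to a bounded range.
n_actuators = 20
policy_weights = rng.normal(size=(observation.size, n_actuators)) * 0.01
torques = np.tanh(observation @ policy_weights)  # bounded motor commands
```

In training, the physics simulator applies these torques, returns the next observation and a task reward, and the policy's weights are updated to increase that reward.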

Once the rat could successfully complete the tasks, the research team then analyzed recordings of its neural activity using techniques borrowed from neuroscience to understand how the neural network was achieving the motor control required to complete the tasks.

Because the researchers had built the AI that powered the rat, much of what they found was expected. But one interesting insight they gained was that the neural activity seemed to occur over longer time scales than would be expected if it were directly controlling muscle forces and limb movements, says Diego Aldarondo, a coauthor and graduate student at Harvard.

“This implies that the network represents behaviors at an abstract scale of running, jumping, spinning, and other intuitive behavioral categories,” he says, a cognitive model that has previously been proposed to exist in animals.
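One common way to quantify the time scale of a unit's activity, and the kind of analysis hinted at above, is its autocorrelation: slowly varying signals stay correlated with themselves over longer lags. This toy sketch uses simulated signals, not the paper's recordings.

```python
import numpy as np

rng = np.random.default_rng(2)

def autocorr_timescale(signal):
    """Lag (in steps) at which the autocorrelation first falls below 1/e."""
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode="full")[s.size - 1:]  # lags 0..N-1
    ac /= ac[0]
    below = np.nonzero(ac < 1.0 / np.e)[0]
    return int(below[0]) if below.size else s.size

# Fast signal: nearly white noise. Slow signal: heavily smoothed noise,
# standing in for activity that varies over behavioral time scales.
fast = rng.normal(size=2000)
slow = np.convolve(rng.normal(size=2000), np.ones(50) / 50, mode="same")

print(autocorr_timescale(fast), autocorr_timescale(slow))
```

Activity with a long autocorrelation time scale, relative to the muscle dynamics, is what would suggest the network is representing behaviors rather than moment-to-moment forces.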

The neural network appeared to reuse some such representations across tasks, and the neural activity encoding them often took the form of sequences, a phenomenon that has been observed in both rodents and songbirds.
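A standard way such sequences are revealed, sketched here with synthetic data rather than the paper's actual recordings, is to sort units by the time of their peak activity: a sequence shows up as a diagonal band of peaks.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "neural sequence": each of 10 units is most active at a different,
# ordered moment during a behavior, plus a little recording noise.
n_units, n_steps = 10, 100
peaks = np.linspace(10, 90, n_units)   # units constructed in peak order
t = np.arange(n_steps)
activity = np.exp(-0.5 * ((t[None, :] - peaks[:, None]) / 5.0) ** 2)
activity += 0.05 * rng.normal(size=activity.shape)

# Sorting units by their time of peak activity exposes the sequence;
# here the units were built in order, so the sort recovers 0 through 9.
order = np.argsort(activity.argmax(axis=1))
```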

The researchers have open sourced the virtual rat in the hopes that other researchers will build on their findings, says Josh Merel, a coauthor and a senior research scientist at DeepMind.


While neural networks don’t have the physiological realism of some models, Blake Richards, a neuroscientist from McGill University in Canada who was not involved in the work, says they capture enough important features of neural processing to generate useful predictions about how neural activity impacts behavior. The big contribution of this paper, he says, is to come up with a way to train these networks in a realistic manner that makes it much easier to compare against biological data.

“[The authors] are providing a platform for training [neural networks] in a realistic body and set of tasks, which will make comparisons to real brains in rodents far more valuable,” he adds.

While one must be cautious about making overly broad comparisons between artificial and biological neural networks, this approach could be a fruitful way to probe the neural underpinnings of behavior, says Stephen Scott, a neuroscientist at Queen’s University in Canada who was not involved in the work.

The complexity of recording neural activity in animals and linking it to specific behaviors means most experiments are done on relatively simple tasks in rigid experimental settings, Scott says. In contrast, the virtual rat can carry out complex, multipart behaviors like foraging that can be linked to its sensory input and neural activity with high precision.

The only problem is that collecting neural data from animals on tasks this complicated is very difficult, says Scott. He would like to see the authors test the virtual rat on some of the simpler tasks used in laboratory settings so that the neural activity patterns could be compared against those found in animals to see where they diverge.
