Standardizing the Brain-Machine Interface

Every neural-prosthetics lab has its own brain-decoding algorithm, but could one size fit all?

PHOTO: Matthew McKee/Cyberkinetics Neurotechnology Systems

Plugged in: Algorithms interpret brain signals to move a cursor on a screen.

Earlier this year in a lab at Duke University, in Durham, N.C., a clever, raisin-gobbling monkey named Idoya made a robot move in Japan—just by thinking. And she wasn’t alone. She joined ranks with, among others, a paraplegic man who recently used his brain to move a cursor around a computer screen.

Researchers have endowed subjects with seemingly telekinetic powers by extracting the patterns of brain activity that occur when we move parts of our bodies. However those patterns are tapped electronically, algorithms are needed to interpret them and discern their salient features so that the appropriate signals can be sent to external devices. Groups working on brain-machine interfaces have designed brain decoders differently, depending on the type of neural data they collect and the purposes of their research. As a result, most algorithms have to be written from the ground up. But some in the field say it’s time to develop a generic algorithm that will incorporate the best work of the last decade and serve as a foundation for all labs working on neural prosthetics.

That’s just what Lakshminarayan Srinivasan, a computer scientist at MIT, has in mind. Srinivasan—together with colleagues at MIT, Harvard, Boston University, and Massachusetts General Hospital—has pulled together elements of algorithms from all the major labs that design brain-machine interfaces and proposed a new approach that theoretically would support and enhance each design.

From the outset, researchers attacking the mind-over-matter problem of developing brain-activated prosthetics adopted widely varying approaches. Some pasted electrodes onto the scalp; others placed them just inside the skull or directly into the brain. They eavesdropped on different parts of the brain and, having obtained signal patterns, processed them differently, says Srinivasan.

There are many ways to filter neural data. When users imagine moving a cursor on a screen, for example, they produce data about the speed they want it to go, where and when they want it to stop, the route they want it to take, and when it should click. At any point, their intentions might change. Also, over longer periods of time, neurons may die, or the electrodes may begin picking up different neurons, in ways that can alter the signal. Every algorithm takes into account some of those dynamics, but none yet incorporates all of them, as Srinivasan is doing.
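The filtering the article describes is, in many labs, a form of recursive Bayesian estimation. As a rough illustration of the idea—not any particular lab’s decoder—here is a minimal Kalman filter that estimates intended 2-D cursor velocity from simulated firing rates. All the matrices (dynamics `A`, noise `W` and `Q`, tuning `C`) are made up for the demonstration.

```python
import numpy as np

def kalman_decode(Y, A, W, C, Q):
    """Decode a velocity trajectory from neural observations Y (T x n_neurons).

    State x = intended cursor velocity [vx, vy]; each row of Y is one
    time step of firing rates, modeled as y = C @ x + noise.
    """
    n_state = A.shape[0]
    x = np.zeros(n_state)   # current velocity estimate
    P = np.eye(n_state)     # uncertainty in that estimate
    traj = []
    for y in Y:
        # Predict: intended velocity evolves smoothly between time steps
        x = A @ x
        P = A @ P @ A.T + W
        # Update: correct the prediction with the observed firing rates
        S = C @ P @ C.T + Q
        K = P @ C.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ (y - C @ x)
        P = (np.eye(n_state) - K @ C) @ P
        traj.append(x.copy())
    return np.array(traj)

# Toy example: 4 neurons linearly tuned to 2-D velocity (all values illustrative)
rng = np.random.default_rng(0)
A = 0.95 * np.eye(2)            # velocity persists, slowly decaying
W = 0.01 * np.eye(2)            # process noise
C = rng.normal(size=(4, 2))     # hypothetical neural tuning
Q = 0.1 * np.eye(4)             # observation noise

true_v = np.tile([1.0, -0.5], (50, 1))                      # constant intent
Y = true_v @ C.T + rng.normal(scale=0.3, size=(50, 4))      # noisy firing rates
est = kalman_decode(Y, A, W, C, Q)                          # decoded trajectory
```

The predict/update loop is the core of many decoders in this literature; what differs between labs is largely which dynamics go into `A` and how the tuning model `C` is fit and refit as the signal drifts.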

Srinivasan has developed his algorithm even as brain-machine interfaces are moving from the lab to the clinic. Already, Cyberkinetics Neurotechnology Systems, in Foxborough, Mass., is conducting clinical trials for a device called the BrainGate Neural Interface System, which would give severely paralyzed patients the ability to communicate through a computer. The first subject, a fully paralyzed man with amyotrophic lateral sclerosis, had a 100-electrode array implanted into his motor cortex. “The very first day he tried to use the device, he had some control over the computer cursor,” says Leigh Hochberg, the principal investigator on the trials.

Despite early successes, researchers at Cyberkinetics consider the algorithms a work in progress. “We adjust it all the time,” says John Simeral, an electrical engineer at Brown who works on the BrainGate algorithms.

Simeral says that elements of the algorithm Srinivasan suggests could further improve the BrainGate cursor task. For example, the system could give a clearer estimate of the exact moment a person radically changes intention.
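Detecting the moment an intention changes is a change-point problem. The article doesn’t say how Srinivasan’s algorithm would do it; one standard technique, shown here purely as an illustration, is a CUSUM detector run over a decoded signal such as cursor velocity.

```python
import numpy as np

def cusum(signal, target, drift=0.05, threshold=2.0):
    """Return the first index where the cumulative deviation of `signal`
    from `target` exceeds `threshold` (in either direction), else None.

    `drift` is a small allowance that keeps ordinary noise from
    accumulating into a false alarm. All parameter values are illustrative.
    """
    s_pos = s_neg = 0.0
    for i, x in enumerate(signal):
        s_pos = max(0.0, s_pos + (x - target) - drift)
        s_neg = max(0.0, s_neg - (x - target) - drift)
        if s_pos > threshold or s_neg > threshold:
            return i
    return None

# Simulated decoded velocity that abruptly reverses at t = 30,
# as if the user suddenly changed where they wanted the cursor to go.
rng = np.random.default_rng(1)
v = np.concatenate([np.full(30, 1.0), np.full(30, -1.0)])
v += rng.normal(scale=0.2, size=60)
change_at = cusum(v, target=1.0)   # flags the reversal shortly after t = 30
```

A detector like this would let the system reset its estimate of the user’s goal instead of smoothing through the reversal as if it were noise.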

In simulations, Srinivasan’s algorithms performed as well as or better than those he sought to unify. But Mikhail Lebedev, an engineer in Miguel Nicolelis’s lab at the Duke University Medical Center, says you can’t ultimately use simulations to judge an algorithm. When people plug into brain-computer interfaces, it’s not only the algorithms that adjust to the way the brain works. The brain, to some extent, also learns how to manipulate the rules of the algorithms to get its desired outcome, and so you can never fully predict how the algorithms will perform.

Srinivasan says he’s now learning electrophysiology techniques and will soon try out his algorithms on human subjects.

