A Wearable Chip to Predict Seizures

IBM converts a mountain of brain wave data into a prediction chip for epileptic seizures


Electroencephalogram (EEG) of a 27-year-old woman, with an epilepsy tracing on the screen. Photo: Getty Images

One of the toughest aspects of having epilepsy is not knowing when the next seizure will strike. A wearable warning system that detects pre-seizure brain activity and alerts people to its onset could alleviate some of that stress and make the disorder more manageable. To that end, IBM researchers say they have developed a portable chip that can do the job; they described their invention today in the Lancet’s open-access journal eBioMedicine.

The scientists built the system on a mountain of brain-wave data collected from epilepsy patients. The dataset, reported by a separate group in 2013, included more than 16 years of continuous electroencephalography (EEG) recordings of brain activity, and thousands of seizures, from patients who had had electrodes surgically implanted in their brains.
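Before a model can learn from recordings like these, the continuous EEG is typically segmented into short windows labeled as preictal (preceding a seizure) or interictal (between seizures). Here is a minimal sketch of that preprocessing step in Python; the channel count, sampling rate, window length, and pre-seizure horizon are placeholders, not the study’s actual parameters:

```python
import numpy as np

# Hypothetical parameters -- the study's actual values differ.
SAMPLE_RATE_HZ = 400        # samples per second per electrode
WINDOW_SEC = 30             # length of each training window
PREICTAL_HORIZON_SEC = 600  # windows within 10 min of a seizure are "preictal"

def make_windows(eeg, seizure_onsets_sec):
    """Slice a (channels, samples) EEG array into labeled windows.

    Returns (windows, labels): label 1 = preictal, 0 = interictal.
    """
    win = WINDOW_SEC * SAMPLE_RATE_HZ
    n_windows = eeg.shape[1] // win
    windows, labels = [], []
    for i in range(n_windows):
        start_sec = i * WINDOW_SEC
        windows.append(eeg[:, i * win:(i + 1) * win])
        # Preictal if a seizure starts within the horizon after this window.
        preictal = any(0 < onset - start_sec <= PREICTAL_HORIZON_SEC
                       for onset in seizure_onsets_sec)
        labels.append(int(preictal))
    return np.stack(windows), np.array(labels)

# Synthetic stand-in: 16 channels, 30 minutes of noise, one "seizure" at 25 min.
eeg = np.random.randn(16, SAMPLE_RATE_HZ * 1800).astype(np.float32)
X, y = make_windows(eeg, seizure_onsets_sec=[1500])
print(X.shape, int(y.sum()), "preictal windows")
```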

Scientists at IBM Research Australia then used that dataset to train deep-learning algorithms called neural networks. The algorithms learned to identify patterns of brain activity associated with the onset of a seizure. IBM runs the neural networks on TrueNorth, its ultra-low-power neuromorphic computing chip. The chip, about the size of a postage stamp, could be built into a wearable device for people with epilepsy, or connected to a mobile device.
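The article doesn’t detail the network itself, and TrueNorth runs spiking neuromorphic networks rather than conventional ones, so the following is only a schematic PyTorch sketch of training a small 1-D convolutional classifier on windows like those above; the architecture and sizes are assumptions:

```python
import torch
import torch.nn as nn

# Placeholder 1-D CNN over multichannel EEG windows -- not the network
# IBM deployed on TrueNorth, just an illustration of the training step.
class SeizureNet(nn.Module):
    def __init__(self, n_channels=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # average over time
        )
        self.head = nn.Linear(64, 1)  # logit: preictal vs. interictal

    def forward(self, x):             # x: (batch, channels, samples)
        return self.head(self.features(x).squeeze(-1))

model = SeizureNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One training step on a synthetic batch of eight 30-second windows.
x = torch.randn(8, 16, 12000)
y = torch.randint(0, 2, (8,)).float()
loss = loss_fn(model(x).squeeze(-1), y)
opt.zero_grad(); loss.backward(); opt.step()
print(f"loss: {loss.item():.3f}")
```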

One-third of epilepsy patients don’t improve with drugs or other treatment, so for those people a prediction system could be the only tool that gives them some control over their disease, says Stefan Harrer, the researcher at IBM Research Australia in Melbourne who led the project. The burden of not knowing when a seizure will occur leads many people to avoid socializing, playing sports, traveling, or any situation in which a seizure could catch them off guard. That kind of confinement, in turn, often leads to depression. “Our motivation,” says Harrer, “is to put control and knowledge back into their lives.”

A schematic map of the brain network. Image: IBM

IBM’s system is still in the proof-of-concept stage and has not been tested on humans. “This is a demonstration of the feasibility of building a verifiable seizure prediction system,” says Harrer. His group tested the chip in a simulated study, using previously collected brain activity data.

The device’s sensitivity can be dialed up or down, depending on the needs of the patient. In high-sensitivity mode, the system accurately predicted seizures more than 90 percent of the time, but it also spent a lot of time in warning mode. “We need to reduce the false positive rate before this system can be used as a real device on patients,” says Harrer.
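That sensitivity dial is, in effect, a decision threshold on the model’s output score: lowering it catches more seizures but keeps the warning on more often. A minimal sketch with synthetic scores (not the study’s data) shows the trade-off:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic classifier scores: preictal windows score higher on average.
preictal_scores = rng.normal(1.5, 1.0, 200)
interictal_scores = rng.normal(0.0, 1.0, 2000)

for threshold in (1.5, 1.0, 0.5, 0.0):
    sensitivity = (preictal_scores > threshold).mean()
    false_alarm_rate = (interictal_scores > threshold).mean()
    print(f"threshold={threshold:.1f}  sensitivity={sensitivity:.0%}  "
          f"time in (false) warning={false_alarm_rate:.0%}")
```

Sweeping the threshold downward pushes sensitivity toward the 90-percent-plus figure Harrer cites, while driving up the time spent in false warning.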

Brain activity patterns that indicate an upcoming seizure are notoriously hard to identify. The patterns differ not only from person to person, but can also change over a person’s lifetime. That means deep-learning algorithms must be trained on enough different patterns of key brain activity to recognize them, and must be nimble enough to adapt as each person’s patterns shift over time.
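The article doesn’t say how IBM’s system copes with that variability, but one common approach, sketched below using the hypothetical SeizureNet above, is to pretrain across many patients and then fine-tune only the final layer on each individual’s recordings:

```python
import torch
import torch.nn as nn

# Hypothetical per-patient adaptation: freeze the shared feature
# extractor and fine-tune only the classification head.
patient_model = SeizureNet()          # from the sketch above
for p in patient_model.features.parameters():
    p.requires_grad = False           # keep cross-patient features fixed

opt = torch.optim.Adam(patient_model.head.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

x_patient = torch.randn(8, 16, 12000)          # one patient's windows
y_patient = torch.randint(0, 2, (8,)).float()  # preictal labels
loss = loss_fn(patient_model(x_patient).squeeze(-1), y_patient)
opt.zero_grad(); loss.backward(); opt.step()
```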

IBM is one of several groups working on seizure prediction systems. The researchers at the University of Melbourne who collected the 16 years of EEG data also developed algorithms to identify and predict seizure onset. Universities and research hospitals have even hosted competitions to encourage innovation in this area.  

But past constraints on computing capabilities have limited both the robustness of these algorithms and their real-time capabilities, says Harrer. For example, the University of Melbourne study relied on a combination of machine-learning algorithms and human medical experts who hand-picked patterns that looked noteworthy.

Because the process was so labor-intensive, the group had to focus on selected periods of brain activity, rather than the entire dataset, to keep the analysis manageable. The result was less robust predictions in some patients. It also meant that the system couldn’t adapt in real time to a person’s ever-changing brain activity patterns.

IBM’s chip analyzed the University of Melbourne’s entire EEG recording dataset and didn’t rely on human-picked data features. “We let the algorithm plow through it and find patterns of interest by itself,” says Harrer. The computer “constantly trains and re-trains on those changing patterns over time,” making a real-time prediction system feasible, he says.
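In spirit, that is online learning: each incoming window gets a prediction, and once its true label is known (a seizure did or didn’t follow) the model takes a small gradient step. A schematic loop, again building on the hypothetical SeizureNet; IBM’s actual update scheme isn’t described:

```python
import torch

online_model = SeizureNet()           # from the sketch above
opt = torch.optim.SGD(online_model.parameters(), lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()

def eeg_stream(n=5):
    """Placeholder stream of (window, eventual label) pairs."""
    for _ in range(n):
        yield torch.randn(1, 16, 12000), torch.randint(0, 2, (1,)).float()

for window, label in eeg_stream():
    logit = online_model(window).squeeze(-1)
    warn = torch.sigmoid(logit).item() > 0.5  # raise a warning if preictal
    loss = loss_fn(logit, label)              # re-train on the new example
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"warning={warn}  loss={loss.item():.3f}")
```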

Harrer says IBM would like to further improve the performance of the algorithm by exploring other neural network architectures and by including other factors and biomarkers. He would also like to find a way to train the algorithms on data collected outside the skull, rather than from electrodes implanted in the brain.
