Even though brain–machine or brain–computer interfaces (BMIs/BCIs) have come a long way since Hans Berger discovered electrical activity in the human brain in 1924, seamless communication between our brains and machines remains a holy grail of computer science. The past few years have seen incredible advances in non-invasive wearable BMI research: helping disabled or paralyzed people move again, control robotic prosthetics, or command computers by the power of thought, and giving blind people bionic vision.
Now, an international team of scientists has designed a BMI that can control a robotic arm or wheelchair when the user simply imagines the action, using a detection system that is easier to wear than its predecessors. The system comprises a soft wireless scalp electronic interface that uses electroencephalography (EEG) to read neural signals from the brain and translate them into action.
The soft wearable scalp device at the heart of the BMI is something Woon-Hong Yeo of the Georgia Institute of Technology and his colleagues have been working on for two years. Unlike conventional EEG devices, Yeo says, this one doesn't have a bunch of wires, metal electrodes, and so on. "It has miniaturized, imperceptible micro-needle electrodes, and flexible circuits with stretchable interconnectors." This, he adds, gives the system a better form factor and better signal acquisition.
Being both flexible and soft, the EEG scalp device can be worn over hair and requires no gels or pastes to stay in place. The improved signal recording is largely down to the micro-needle electrodes, invisible to the naked eye, which penetrate the outermost layer of the skin. "You won't feel anything because [they are] too small to be detected by nerves," says Yeo. In conventional EEG set-ups, he adds, any motion by the wearer, such as blinking or teeth grinding, causes signal degradation. "But once you make it ultra-light, thin, like our device, then you can minimize all of those motion issues."
The team used machine learning to analyze and classify the neural signals picked up by the system and identify when the wearer was imagining motor activity. Distinguishing between different types of inputs, says Yeo, is the essential component of a BMI. "Typically, people use machine learning or deep learning… We used convolutional neural networks." This type of deep learning is typically applied to computer vision tasks such as pattern recognition or facial recognition, and "not exclusively for brain signals," Yeo adds. "We are just getting the benefits of the deep learning mechanism itself."
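The study doesn't publish its network architecture, but the general idea of classifying motor-imagery EEG with a convolutional neural network can be sketched quite compactly. The following is a minimal, untrained numpy illustration of the data flow only: the channel count, sampling window, filter sizes, and four-class output are assumptions chosen for readability, not the researchers' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative shapes: 4 EEG channels, 250 samples
# (one second at 250 Hz), and 4 motor-imagery classes.
N_CHANNELS, N_SAMPLES, N_CLASSES = 4, 250, 4

def conv1d(x, kernels):
    """Valid-mode temporal convolution.
    x: (channels, time); kernels: (n_filters, channels, k)."""
    n_filters, _, k = kernels.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((n_filters, t_out))
    for f in range(n_filters):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + k])
    return out

def forward(x, kernels, w, b):
    """Tiny CNN: temporal conv -> ReLU -> global average pool ->
    linear head -> softmax over the motor-imagery classes."""
    h = np.maximum(conv1d(x, kernels), 0.0)  # ReLU feature maps
    pooled = h.mean(axis=1)                  # global average pool per filter
    logits = pooled @ w + b                  # linear classifier head
    e = np.exp(logits - logits.max())
    return e / e.sum()                       # softmax probabilities

# Random, untrained parameters -- enough to show the pipeline shape.
kernels = rng.standard_normal((8, N_CHANNELS, 25)) * 0.1
w = rng.standard_normal((8, N_CLASSES)) * 0.1
b = np.zeros(N_CLASSES)

eeg = rng.standard_normal((N_CHANNELS, N_SAMPLES))  # one fake EEG epoch
probs = forward(eeg, kernels, w, b)
print(probs)  # one probability per imagined-movement class
```

In a real system the kernels and weights would be learned from labeled recordings, and the class with the highest probability would be mapped to a command for the robotic arm or wheelchair.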
[Image: Micro-needle electrodes. Credit: Georgia Institute of Technology]
The researchers also used virtual reality (VR) to simulate action. Since the system is based on motor imagery, the VR component acts as a visual cue and "is sort of helping a user to imagine better, by showing hands or feet," Yeo says. The data showed that this, in fact, enhanced signal quality as well.
The portable BMI system was able to record real-time, high-quality motor imagery activity, and the four human subjects—all able-bodied people—were able to complete their VR exercises by thinking about them. Despite an accuracy rate of 93.22 ± 1.33 percent, Yeo says there are still many challenges ahead.
"The major limitation [of non-invasive BMIs] is that we are measuring signals on the skin, through the skull, through the tissues," he says, "So I believe we have to continuously improve our device quality to get better signals. And at the same time, we have to also continuously improve our data analysis…to have a better accuracy rate." Also, in the current experiment, the researchers played with only four classes for inputs. "I'd love to expand it to a more than 10 inputs." The team is also awaiting authorization to test the system on disabled human subjects.