The July 2022 issue of IEEE Spectrum is here!

A Soft, Wearable Brain–Machine Interface

Imperceptible micro-needles and flexible circuits improve neural signal recording

Two cross straps meet at the crown of a head, where a circuit board is placed. Another strap circles the head.
Georgia Institute of Technology

Even though brain–machine or brain–computer interfaces (BMI/BCI) have come a long way since Hans Berger discovered electrical activity in the human brain in 1924, seamless communication between our brains and machines remains a holy grail of computer science. The past few years have seen incredible advances in noninvasive wearable BMI research: helping disabled or paralyzed people move again, controlling robotic prosthetics, commanding computers by the power of thought, and giving blind people bionic vision.

Now, in a new study, an international team of scientists has designed a BMI system that can control a robotic arm or wheelchair when the wearer simply imagines the action, using a detection system that is easier to wear than earlier designs. The system comprises a soft wireless scalp-electronics array, which uses electroencephalography (EEG) to read neural signals from the brain and translate them into action.

The soft wearable scalp device used in the BMI system is something the team, led by Woon-Hong Yeo of the Georgia Institute of Technology, has been working on for two years. Unlike conventional EEG devices, Yeo says, this one doesn't have a bunch of wires, metal electrodes, and so on. "It has miniaturized, imperceptible micro-needle electrodes, and flexible circuits with stretchable interconnectors." This, he adds, gives their system a better form factor and better signal acquisition.

Being both flexible and soft, the EEG scalp device can be worn over hair and requires no gels or pastes to stay in place. The improved signal recording is largely down to the micro-needle electrodes, invisible to the naked eye, which penetrate the outermost layer of the skin. "You won't feel anything because [they are] too small to be detected by nerves," Yeo says. In conventional EEG setups, he adds, any motion by the wearer, such as blinking or teeth grinding, causes signal degradation. "But once you make it ultra-light, thin, like our device, then you can minimize all of those motion issues."
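The article doesn't describe the team's signal-processing pipeline, but a common first step in motor-imagery EEG work is to isolate the mu/beta band (roughly 8–30 Hz), which also strips slow motion drift and mains interference. The following is a rough, dependency-free sketch of that idea using an FFT-based band-pass; the sampling rate and band edges are illustrative assumptions, not values from the study, and real pipelines would typically use proper FIR/IIR filters.

```python
import numpy as np

def bandpass_fft(signal, fs, low=8.0, high=30.0):
    """Crude band-pass: zero FFT bins outside [low, high] Hz, then invert.

    Illustrative only; practical EEG filtering usually uses designed
    FIR/IIR filters rather than hard spectral masking.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

# Synthetic 2-second "EEG" trace at an assumed 250 Hz: a 12 Hz mu-band
# rhythm plus 50 Hz mains interference and slow 1 Hz drift.
fs = 250
t = np.arange(2 * fs) / fs
raw = (np.sin(2 * np.pi * 12 * t)
       + 0.8 * np.sin(2 * np.pi * 50 * t)
       + 0.5 * np.sin(2 * np.pi * 1 * t))
clean = bandpass_fft(raw, fs)  # keeps the 12 Hz rhythm, drops the rest
```

The mu-band component survives the filter while the drift and mains components are removed, which is the sense in which band-limiting helps suppress motion-related contamination.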

The team used machine learning to analyze and classify the neural signals received by the system and identify when the wearer was imagining motor activity. That, says Yeo, is the essential component of a BMI, to distinguish between different types of inputs. "Typically, people use machine learning or deep learning… We used convolutional neural networks." This type of deep learning is typically used in computer vision tasks such as pattern recognition or facial recognition, and "not exclusively for brain signals," Yeo adds. "We are just getting the benefits of the deep learning mechanism itself."
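The study's exact network architecture isn't given in the article, but the forward pass of a convolutional classifier over a multichannel EEG window can be sketched minimally as below. Everything here is a hypothetical toy: the channel count, window length, filter sizes, and the four imagined-movement classes are assumptions for illustration, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1D convolution: (channels, time) with (filters, channels, width)."""
    n_filters, _, width = kernels.shape
    out_len = x.shape[1] - width + 1
    out = np.zeros((n_filters, out_len))
    for f in range(n_filters):
        for i in range(out_len):
            out[f, i] = np.sum(x[:, i:i + width] * kernels[f])
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(eeg_window, kernels, weights):
    """Conv -> ReLU -> global average pool -> linear -> softmax over classes."""
    feats = np.maximum(conv1d(eeg_window, kernels), 0.0)  # ReLU nonlinearity
    pooled = feats.mean(axis=1)                           # global average pool
    return softmax(weights @ pooled)                      # class probabilities

# Toy setup: 4 EEG channels, 250 samples per window, 8 conv filters,
# 4 motor-imagery classes (all sizes are illustrative assumptions).
kernels = rng.normal(size=(8, 4, 11)) * 0.1
weights = rng.normal(size=(4, 8)) * 0.1
probs = classify(rng.normal(size=(4, 250)), kernels, weights)
```

In practice such a model would be trained end to end on labeled motor-imagery trials (e.g. with a deep-learning framework); the point here is only the shape of the computation that maps a raw EEG window to a distribution over imagined actions.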

A ten-by-ten array of pyramid-shaped needles: the micro-needle electrodes.
Georgia Institute of Technology

The researchers also used virtual reality (VR) to simulate action. Since the system is based on motor imagery, the VR component acts as a visual cue and "is sort of helping a user to imagine better, by showing hands or feet," Yeo says. The data showed that this, in fact, enhanced signal quality as well.

The portable BMI system was able to record real-time, high-quality motor imagery activity, and the four human subjects—all able-bodied people—were able to complete their VR exercises by thinking about them. Despite an accuracy rate of 93.22 ± 1.33 percent, Yeo says there are still many challenges ahead.

"The major limitation [of non-invasive BMIs] is that we are measuring signals on the skin, through the skull, through the tissues," he says. "So I believe we have to continuously improve our device quality to get better signals. And at the same time, we have to also continuously improve our data analysis…to have a better accuracy rate." Also, in the current experiment, the researchers worked with only four classes of inputs. "I'd love to expand it to more than 10 inputs." The team is also awaiting authorization to test the system on disabled human subjects.


The First Million-Transistor Chip: the Engineers’ Story

Intel’s i860 RISC chip was a graphics powerhouse

Intel's million-transistor chip development team: twenty people crowd into a cubicle, the man in the center seated holding a silicon wafer full of chips.

In San Francisco on Feb. 27, 1989, Intel Corp., Santa Clara, Calif., startled the world of high technology by presenting the first ever 1-million-transistor microprocessor, which was also the company’s first such chip to use a reduced instruction set.

The number of transistors alone marks a huge leap upward: Intel's previous microprocessor, the 80386, has only 275,000 of them. But this long-deferred move into the booming market in reduced-instruction-set computing (RISC) was more of a shock, in part because it broke with Intel's tradition of compatibility with earlier processors—and not least because, after three well-guarded years in development, the chip came as a complete surprise. Now designated the i860, it entered development in 1986, at about the same time as the 80486, the yet-to-be-introduced successor to Intel's highly regarded 80286 and 80386. The two chips have about the same area and use the same 1-micrometer CMOS technology then under development at the company's systems production and manufacturing plant in Hillsboro, Ore. But with the i860, then code-named the N10, the company planned a revolution.
