Director of Typing-by-Brain Project Discusses How Facebook Will Get Inside Your Head

Reading out 100 words per minute from the brain? Yes, Facebook knows it's ambitious

Drawing of a head in silhouette with a thought bubble inside the head reading: “Hello world! I am typing.”
Illustration: Alexandra Sapp; Image: Getty Images

When Facebook’s Mark Chevillet describes the company’s new “typing by brain” initiative, he has a way of keeping it from sounding totally crazy.

Chevillet is a neuroscience PhD, not some executive dreaming up vaporware, and he has a firm grasp on the current state of brain science. So when he spoke during a recent meeting at Johns Hopkins University’s Applied Physics Lab, listeners nodded along as he described a brain-computer interface that would read out 100 words per minute from a speech center in the user’s brain, and do so with a non-invasive technology that could rest on the user’s head.

Then he stopped talking and the facts came rushing back in: No such technology exists today. The current record for typing-by-brain is eight words per minute, and that was achieved using implanted electrodes. No one really understands where speech lives in the brain. 

Chevillet acknowledged these facts, and agreed that Facebook’s goal is ambitious. “There’s plenty of technical and research risk involved,” he said. “But we’re not looking for the next guaranteed incremental step, we’re looking for transformative steps.”

Chevillet, who’s running the Facebook project within the hardware skunkworks known as Building 8, also gave the audience a few clues about how his team will tackle the challenges. But his vague descriptions of the technology weren’t enough to satisfy the experts that IEEE Spectrum later spoke with.

A woman stands on a stage in front of a huge screen displaying the text: 'So what if you could type directly from your brain?'

Photo: Stephen Lam/Reuters
Facebook executive Regina Dugan announced the typing-by-brain project at the company’s developer conference in April.

In the six weeks since Facebook executive Regina Dugan announced the project at the company’s annual developer’s conference, many neuroscientists have expressed skepticism about the project’s timeline: Dugan said the team will be working on this technology over the next two years.

Chevillet clarified that Facebook doesn’t expect to have a commercial product in two years’ time—instead the execs hope the research done in that time will prove the feasibility of a product.

To meet that goal, Chevillet said, his team is pursuing two parallel tracks of research. One focuses on developing a non-invasive technology that can read out high-quality neural data. The second focuses on brain science, and specifically on better understanding the mechanisms of language and speech. Here Chevillet said they’re asking the question: “Assuming you have the technology to get high-quality neural data, how would you decode 100 words per minute?” 

As for that target of 100 words per minute, he said they chose it based on people’s natural speaking rate. A technology that keeps up with that natural pace would allow users to issue the type of voice commands they currently direct at the iPhone’s Siri or Amazon’s Alexa—but without saying a word. “We’re interested in a silent speech interface that brings those same benefits of speech recognition,” Chevillet said, “but with the privacy you’d enjoy over text.”


Chevillet wouldn’t give details on the brain-scanning tech his team is developing, and Facebook has said only that it involves optical imaging. It sounds like Facebook is trying to “invent something completely new,” says Paul Sajda, a professor of biomedical engineering at Columbia University and an expert on advanced technologies for brain research.

To non-invasively read out signals from the brain, neuroscientists and doctors typically rely on EEG, which uses scalp electrodes to record an aggregate signal from millions of neurons. But EEG can’t provide precise data from small brain regions, explains Sajda. “Everybody now is saying, EEG doesn’t have a good enough signal, we’re not going to use that.”

John Donoghue, director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, is also skeptical of non-invasive brain scanning tech. He’s been working for decades on the “BrainGate” system that uses implanted electrodes; with this experimental tech, paralyzed people have controlled robotic arms and computer cursors. “If I could help a person who’s paralyzed without putting an electrode in their brain, I’d be the first to do it,” Donoghue says. But he doesn’t expect big improvements in EEG, and he doesn’t see a way to get better data from the brain non-invasively. “I think we’re pretty close to the limit,” he says. “But if I’m wrong, it will be a huge revolution.”

A woman's head is viewed from behind. She has an electronic gadget attached to her head, and looks toward a screen displaying letters.

Photo: Stanford University

The BrainGate system was used to set the current typing-by-brain record of eight words per minute. In that study, the paralyzed subject used an implant in the motor cortex to guide a computer cursor over a screen and pick letters to type. The implant was gathering data about motor commands, not about actual letters or words.
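To make "decoding motor commands" concrete, here is a toy sketch of the general idea behind cursor decoders like BrainGate's: learn a linear map from motor-cortex firing rates to intended cursor velocity. The 96-channel count matches the electrode arrays used in BrainGate studies, but the data below is simulated, and the simple least-squares fit stands in for BrainGate's far more sophisticated filtering; it is an illustration, not the actual system.

```python
# Toy cursor decoder: firing rates -> intended 2-D cursor velocity.
# All data is simulated; real decoders (e.g., Kalman-filter variants)
# are considerably more sophisticated.
import numpy as np

rng = np.random.default_rng(2)

N_NEURONS = 96    # channel count of the arrays used in BrainGate studies
N_SAMPLES = 2000

# Each simulated neuron is "tuned" to a preferred movement direction.
tuning = rng.normal(size=(N_NEURONS, 2))
velocity = rng.normal(size=(N_SAMPLES, 2))  # intended (vx, vy) per sample
rates = velocity @ tuning.T + 0.5 * rng.normal(size=(N_SAMPLES, N_NEURONS))

# Fit the decoder by least squares: velocity ~ rates @ W.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a fresh pattern of firing rates into a cursor velocity command.
test_rates = velocity[:1] @ tuning.T
print("decoded velocity:", test_rates @ W, "true:", velocity[0])
```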

Again, Facebook seems to be planning something completely different: Chevillet said the technology will read out “intended speech” in the brain. “No one has an algorithm that can detect and decode thought per se, in its abstract forms,” he said at the conference. So the researchers must identify the brain regions involved “when people are imagining speaking to themselves,” he said, then build models that link patterns of brain activity to words.
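What might such a model look like? Here is a minimal sketch: a standard supervised classifier trained to map neural feature vectors to a small vocabulary. Everything in it (the synthetic data, the 64-channel features, the logistic-regression decoder, the four-word vocabulary) is an illustrative assumption; Facebook has not disclosed its method.

```python
# Minimal sketch of a model linking neural activity patterns to words:
# a supervised classifier over per-trial feature vectors. The data is
# synthetic; real imagined-speech recordings would replace it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

VOCAB = ["hello", "world", "yes", "no"]  # hypothetical 4-word vocabulary
N_CHANNELS = 64                          # hypothetical recording channels
TRIALS_PER_WORD = 50

# Simulate one neural feature vector per trial: each word gets its own
# mean activity pattern plus noise.
templates = rng.normal(size=(len(VOCAB), N_CHANNELS))
X = np.vstack([templates[i] + 0.8 * rng.normal(size=(TRIALS_PER_WORD, N_CHANNELS))
               for i in range(len(VOCAB))])
y = np.repeat(np.arange(len(VOCAB)), TRIALS_PER_WORD)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("decoding accuracy:", clf.score(X_test, y_test))
print("decoded word:", VOCAB[clf.predict(X_test[:1])[0]])
```

Scaling a decoder like this from four words to open-vocabulary speech at 100 words per minute is precisely the open problem Chevillet's team faces.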

For this basic brain science, Facebook is collaborating with academic groups at Johns Hopkins, UC Berkeley, and UC San Francisco. Some researchers are gathering preliminary data from epilepsy patients who are in the hospital for surgery, and who have temporary electrodes implanted in their brains for a few days of monitoring.

At Johns Hopkins, Michael Wolmetz and Nathan Crone have been making “semantic maps” of the brain, conducting experiments to determine which parts of the brain are involved in processing high-level concepts.

Two drawings of the brain with spots highlighted. One brain shows spots for the categories living vs. manmade; the other shows spots for the categories small vs. large.

Image: Kyle Rupp et al.

In their most recent study, they showed epilepsy patients simple line drawings of 60 different objects. The researchers had already assigned each object a list of semantic attributes that described it; an airplane, for example, had semantic attributes like “manmade” and “large,” while a butterfly had such attributes as “living” and “small.” Then the researchers watched which parts of the patients’ brains activated as they named each object.

After mapping where the 218 semantic attributes were represented in the brain, they then made a model that looked at brain activity and could identify—with limited accuracy—which object a patient was looking at.
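In code, the study’s two-step pipeline might look something like the toy sketch below: a ridge regression learns the mapping from brain activity to the 218 attributes, and held-out objects are identified by whichever known attribute signature lies closest to the prediction. Only the object and attribute counts come from the study; the data is simulated and the electrode count is invented.

```python
# Toy version of attribute-based identification: brain activity -> predicted
# semantic attributes -> nearest known object. Data is simulated; only the
# 60-object and 218-attribute counts match the Hopkins study.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

N_OBJECTS = 60      # objects shown to patients, per the study
N_ATTRIBUTES = 218  # semantic attributes mapped in the study
N_ELECTRODES = 80   # hypothetical electrode count

# Each object has a known binary attribute vector ("living", "large", ...).
attributes = rng.integers(0, 2, size=(N_OBJECTS, N_ATTRIBUTES)).astype(float)

# Simulate brain activity as a noisy linear image of the attribute vector.
true_map = rng.normal(size=(N_ATTRIBUTES, N_ELECTRODES))
activity = attributes @ true_map + 0.5 * rng.normal(size=(N_OBJECTS, N_ELECTRODES))

# Step 1: learn activity -> attributes on 50 objects; hold out the last 10.
model = Ridge(alpha=1.0).fit(activity[:50], attributes[:50])

# Step 2: predict attributes for held-out trials, then identify each object
# as the held-out candidate with the nearest attribute signature.
pred = model.predict(activity[50:])
dists = np.linalg.norm(pred[:, None, :] - attributes[None, 50:, :], axis=2)
guesses = dists.argmin(axis=1)
print("held-out identification accuracy:", (guesses == np.arange(10)).mean())
```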

An illustration shows an image of an airplane activating spots on the brain, which are analyzed to produce the identification 'airplane.'

Image: Kyle Rupp et al.

Such semantic maps could be useful to Facebook’s attempt to read out meaning from the brain, but the Hopkins researchers have noted that their maps were made based on visual identification of concrete objects. If Facebook wants to read out full sentences—with the complications of abstract concepts and tricky bits of grammar—it may need very different brain data. 

And the big problem there, says the Wyss Center’s Donoghue, is that neuroscientists don’t yet know which data is important for functions like speech. “You’ve got all these signals coming out of the brain, and the engineer comes along and says, ‘Oh, this is a signal processing problem. I’ll figure this out.’ But we don’t have an underlying theory of brain that explains what all these signals are,” Donoghue says. “You can’t create a good model when there’s so much you don’t understand.”

The neuroscience experts don’t necessarily have a problem with a deep-pocketed company taking an interest in the many open questions in their field. “If Facebook is going to invest in basic research like the Bell Telephone Company did with Bell Labs, I think that could be great for society,” says Columbia University’s Sajda.

But he wonders how it will play out if Building 8 doesn’t have much to show for its efforts in two years’ time. Basic scientific research takes time, Sajda says, and breakthroughs don’t happen on schedule. “If you’re a stockholder, I’m not sure if you want this high-risk stuff going on when there’s no guarantee you’ll get something out of it.”
