
Facebook is ditching plans to make an interface that reads the brain

The company's research into a consumer mind-reading device is over, for now. Some scientists said it was never possible anyway.

A prototype of Facebook's head-mounted optical device for reading brain signals. (Facebook)

The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking—tapping them out at 100 words per minute.

The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in—and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain reading remains very far off.

In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said.

Facebook’s brain-typing project had led it into uncharted territory—including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull—and into tough debates around whether tech companies should access private brain information. Ultimately, though, the company appears to have decided the research simply won’t lead to a product soon enough.

“We got lots of hands-on experience with these technologies,” says Mark Chevillet, the physicist and neuroscientist who until last year headed the silent-speech project but recently switched roles to study how Facebook handles elections. “That is why we can confidently say, as a consumer interface, a head-mounted optical silent speech device is still a very long way out. Possibly longer than we would have foreseen.”

Mind reading

The reason for the craze around brain-computer interfaces is that companies see mind-controlled software as a huge breakthrough—as important as the computer mouse, graphical user interface, or swipe screen. What’s more, researchers have already demonstrated that if they place electrodes directly in the brain to tap individual neurons, the results are remarkable. Paralyzed patients with such “implants” can deftly move robotic arms and play video games or type via mind control.

Facebook’s goal was to turn such findings into a consumer technology anyone could use, which meant a helmet or headset you could put on and take off. The company never intended to make a product that would involve brain surgery, says Chevillet. Given the social giant’s many regulatory problems, CEO Mark Zuckerberg had once said that the last thing the company should do is crack open skulls. “I don’t want to see the congressional hearings on that one,” he joked.

In fact, as brain-computer interfaces advance, there are serious new concerns. What would happen if large tech companies could know people’s thoughts? In Chile, legislators are even considering a human rights bill to protect brain data, free will, and mental privacy from tech companies. Given Facebook’s poor record on privacy, the decision to halt this research may have the side benefit of putting some distance between the company and rising worries about “neurorights.”

Facebook’s project aimed specifically at a brain controller that could mesh with its ambitions in virtual reality; it bought Oculus VR in 2014 for $2 billion. To get there, the company took a two-pronged approach, says Chevillet. First, it needed to determine whether a thought-to-speech interface was even possible. For that, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang has placed electrode pads on the surface of people’s brains.

Whereas implanted electrodes read data from single neurons, this technique, called electrocorticography, or ECoG, measures from fairly large groups of neurons at once. Chevillet says Facebook hoped it might also be possible to detect equivalent signals from outside the head.

The UCSF team made some surprising progress and today is reporting in the New England Journal of Medicine that it used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers refer to as “Bravo-1,” who lost the ability to form intelligible words after a serious stroke and can now only grunt or moan. In their report, Chang’s group says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technology works by measuring neural signals in the part of the motor cortex associated with Bravo-1’s efforts to move his tongue and vocal tract as he imagines speaking.

To reach that result, Chang’s team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient’s neural signals to a deep-learning model. After training the model to match words with neural signals, the team was able to correctly determine the word Bravo-1 was thinking of saying 40% of the time (chance results would have been about 2%). Even so, his sentences were full of errors. “Hello, how are you?” might come out “Hungry how am you.”
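For a sense of the mechanics, here is a minimal sketch of a 50-way neural-signal classifier in Python with PyTorch. It is a hypothetical stand-in, not the UCSF team's model: the channel count, window length, and layer sizes are all invented, and the real system was far more sophisticated.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 50      # the 50 common words Bravo-1 was asked to imagine saying
N_CHANNELS = 128     # hypothetical number of ECoG electrode channels
N_TIMESTEPS = 200    # hypothetical neural-signal samples per word attempt

# A deliberately simple stand-in for the deep-learning model: a window of
# neural activity goes in, a score for each of the 50 words comes out.
model = nn.Sequential(
    nn.Flatten(),                               # (batch, channels, time) -> vector
    nn.Linear(N_CHANNELS * N_TIMESTEPS, 256),
    nn.ReLU(),
    nn.Linear(256, VOCAB_SIZE),                 # logits over the 50-word vocabulary
)

fake_batch = torch.randn(4, N_CHANNELS, N_TIMESTEPS)  # four fake word attempts
logits = model(fake_batch)                            # shape: (4, 50)

# The baseline for the reported 40% accuracy: uniform guessing over 50 words
# succeeds 1 time in 50, i.e. about 2%.
print(f"chance accuracy: {1 / VOCAB_SIZE:.0%}")
```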

But the scientists improved the performance by adding a language model—a program that judges which word sequences are most likely in English. That increased the accuracy to 75%. With this cyborg approach, the system could predict that Bravo-1’s sentence “I right my nurse” actually meant “I like my nurse.”
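One way to picture that correction step: weight each candidate word from the neural classifier by how likely the language model thinks it is given the sentence so far. The toy example below illustrates the idea with invented numbers; it is not the published method.

```python
# Toy rescoring of the classifier's candidates for the second word of
# "I ___ my nurse". All probabilities here are invented for illustration.
neural_probs = {"right": 0.40, "like": 0.35, "write": 0.25}  # from brain signals
lm_probs     = {"right": 0.05, "like": 0.60, "write": 0.02}  # P(word | "I ...")

def rescore(neural, lm):
    """Combine neural and language-model evidence, then renormalize."""
    combined = {w: neural[w] * lm.get(w, 1e-6) for w in neural}
    total = sum(combined.values())
    return {w: p / total for w, p in combined.items()}

scores = rescore(neural_probs, lm_probs)
print(max(scores, key=scores.get))  # -> "like": the language model wins out
```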

As remarkable as the result is, there are more than 170,000 words in English, and so performance would plummet outside of Bravo-1’s restricted vocabulary. That means the technique, while it might be useful as a medical aid, isn’t close to what Facebook had in mind. “We see applications in the foreseeable future in clinical assistive technology, but that is not where our business is,” says Chevillet. “We are focused on consumer applications, and there is a very long way to go for that.”

Equipment developed by Facebook for diffuse optical tomography, which uses light to measure blood-oxygen changes in the brain. (Facebook)

Optical failure

Facebook’s decision to drop out of brain reading is no shock to researchers who study these techniques. “I can’t say I am surprised, because they had hinted they were looking at a short time frame and were going to reevaluate things,” says Marc Slutzky, a professor at Northwestern whose former student Emily Mugler was a key hire Facebook made for its project. “Just speaking from experience, the goal of decoding speech is a large challenge. We’re still a long way off from a practical, all-encompassing kind of solution.”

Still, Slutzky says the UCSF project is an “impressive next step” that demonstrates both remarkable possibilities and some limits of the brain-reading science. He says that if artificial-intelligence models could be trained for longer, and on more than just one person’s brain, they could improve rapidly.

While the UCSF research was going on, Facebook was also paying other centers, like the Applied Physics Lab at Johns Hopkins, to figure out how to pump light through the skull to read neurons noninvasively. Like functional MRI, those techniques track blood flow to regions of the brain, but they do it by sensing reflected light rather than magnetic fields.

It’s these optical techniques that remain the bigger stumbling block. Even with recent improvements, including some by Facebook, they cannot pick up neural signals with enough resolution. Another issue, says Chevillet, is that the blood-flow changes these methods detect peak a few seconds after a group of neurons fires, making the signal too slow to control a computer.
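To see why that lag rules out real-time control, consider a toy simulation using a simplified gamma-shaped hemodynamic response; this is an illustration of the general physiology, not anything from Facebook's work. A burst of neural firing at one second produces a measurable blood signal that doesn't peak until several seconds later.

```python
import numpy as np

dt = 0.1                          # seconds per sample
t = np.arange(0, 20, dt)

# Simplified hemodynamic response: a gamma-like bump peaking ~5 s after activity.
hrf = t**3 * np.exp(-t / 1.7)
hrf /= hrf.max()

neural = np.zeros_like(t)
neural[int(1.0 / dt)] = 1.0       # a burst of neural firing at t = 1 s

# What a blood-flow sensor would see: the neural burst smeared through the
# hemodynamic response, peaking seconds later.
blood = np.convolve(neural, hrf)[: len(t)]
print(f"burst at 1.0 s; blood signal peaks at {t[blood.argmax()]:.1f} s")
```

A lag on that order may be workable for mapping overall brain states, but not for something like typing at 100 words per minute.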

“Facebook dropping it isn’t an indictment of optical technology—it’s an assessment of the things they are trying to use it for,” says Bryan Johnson, the CEO and founder of Kernel, which this year started to commercialize a helmet that measures the brain using near-infrared beams. He says that like MRI, the technology is better for measuring overall brain states, which he believes has applications such as detecting emotion or attention. “The goal they have is improving control, and this technology does not fit that objective. It measures a hemodynamic signal, and that signal is slow,” says Johnson.

What’s next

Facebook now plans to focus on a technology it acquired in September 2019, when it bought a startup called CTRL-Labs for more than $500 million, one of its largest public acquisitions since its takeover of Oculus. That company has been developing a wrist-worn device that captures electrical signals in a person’s muscles through a technique known as EMG. This can detect gestures or figure out which finger someone is moving.

That’s not a brain interface, but it may be a simpler way of engaging in the virtual world that Facebook is building with its VR goggles. Imagine, for instance, drawing a bow in an adventure game and then releasing the arrow with a small shift in your fingers. According to Krishna Shenoy, a Stanford University neuroscientist who is an advisor to CTRL-Labs, the device can record electrical activity in the muscles “at a remarkably detailed level” and can capture movements “from multiple fingers and with very little actual movement at all.”

In its blog post, Facebook said that “it makes sense to focus our near-term attention on wrist-based neural interfaces using EMG, a proven viable technology we believe has a nearer-term path to market for AR/VR input.”

The company says it now plans to open-source the software it developed for brain decoding and also provide access to prototype devices, so other researchers can benefit from its work. “We tackled these key problems: whether you can decode speech at all from brain activity, and then, can you decode it with a wearable optical device,” says Chevillet.

“We think eventually it will be possible.”
