
Facebook’s Sci-Fi Plan for Typing with Your Mind and Hearing with Your Skin

Inside the mysterious Building 8, the social network is working on far-out communication technologies.
April 19, 2017
Regina Dugan, head of Facebook's Building 8, presenting at the F8 developer conference.

A year ago, Facebook started up a special skunkworks team called Building 8 to focus on creating futuristic gadgets, saying the secretive projects would push forward the company’s goal of connecting the world.

On Wednesday at the annual F8 developer conference, the company revealed two of the six projects that are underway, and they sound a lot like science fiction.

Facebook says one project aims to build a new kind of noninvasive brain-machine interface—such as a cap or headband—that lets people text by simply thinking. Another aims to build a wearable device—an armband, perhaps—that makes it possible to “hear” words with your skin.

Building 8’s leader, Regina Dugan, says both projects have been under way for six months and that Facebook will decide in two years whether they’re worth continuing. Dugan was previously the head of Google’s similarly styled Advanced Technology and Projects Group and director of the Pentagon’s DARPA research agency.

The thinking-to-text project is headed up by Mark Chevillet, previously an adjunct professor of neuroscience at Johns Hopkins University.

Chevillet said the goal over two years is to build a noninvasive system that picks up speech signals inside the brain and permits people to silently turn those thoughts into text at a speed of 100 words per minute.

“We just want to be able to get those signals right before you actually produce the sound so you don’t have to say it out loud anymore,” he said.

Facebook says it is collaborating with Johns Hopkins, the University of California, Berkeley, and the University of California, San Francisco, on the project, which Chevillet says will focus on finding a way to use light, like LEDs or lasers, to sense neural signals emanating from the cerebral cortex.

The method would be related to functional near-infrared spectroscopy (fNIRS), which is already used to measure brain activity by shining near-infrared light through the scalp and skull and detecting how much of it is absorbed by oxygenated and deoxygenated blood.
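For a sense of how that kind of optical sensing works at its simplest, here is a minimal sketch of the calculation at the heart of fNIRS: converting changes in detected near-infrared light at two wavelengths into changes in oxygenated and deoxygenated hemoglobin via the modified Beer-Lambert law. The wavelengths, coefficients, and geometry below are illustrative placeholders, not details of Facebook's system, which it has not disclosed.

```python
import numpy as np

# Illustrative sketch of the conversion at the heart of fNIRS (not Facebook's
# system): the modified Beer-Lambert law relates changes in detected light at
# two near-infrared wavelengths to changes in oxygenated (HbO) and
# deoxygenated (HbR) hemoglobin concentration. All numbers are placeholders.

# Extinction coefficients for [HbO, HbR] at two wavelengths (rows: ~760 nm,
# ~850 nm). Representative magnitudes only, not calibrated values.
E = np.array([[1486.0, 3843.0],
              [2526.0, 1798.0]])

d = 3.0    # source-detector separation on the scalp, in cm (assumed)
dpf = 6.0  # differential pathlength factor (assumed)

def hemoglobin_changes(i_baseline, i_current):
    """Return (delta_HbO, delta_HbR) from detected light intensities at the
    two wavelengths, using the modified Beer-Lambert law."""
    # Change in optical density: more absorption means less detected light
    delta_od = np.log(np.asarray(i_baseline) / np.asarray(i_current))
    # delta_od = (E @ [dHbO, dHbR]) * d * dpf, so solve the 2x2 system
    return np.linalg.solve(E * d * dpf, delta_od)

# Example: detected light dips slightly as blood oxygenation in the cortex rises
print(hemoglobin_changes(i_baseline=[1.00, 1.00], i_current=[0.98, 0.97]))
```

Signals like these reflect slow, coarse changes in blood flow, which is one reason outside neuroscientists question whether optical techniques can support the speed Facebook is targeting.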

Such a device—a headband or some sort of cap—could be useful to people so severely paralyzed that they can’t communicate. Over time, though, Facebook thinks brain interfaces could be a way to “think” a message rather than type it, or to send a text in the middle of a conversation. They could also be a way to communicate with others in virtual or augmented reality, technologies that Facebook has been pushing heavily.

Chevillet said there are already some good demonstrations of brain-computer interfaces, like a recent study in which three people with paralysis were able to use their minds to select letters with an on-screen cursor, one of them typing at eight words per minute. In that study, a brain implant recorded neural signals. Other researchers have experimented with interpreting what sounds people are saying or imagining saying.

Such speech “decoding” projects have involved surgery to install an electronic implant in the brain or on its surface. Now the Facebook researchers are exploring whether it’s possible to figure out what someone wants to say by detecting signals outside the brain, then translating them into text. Doing so accurately, in real time, and at the rate Facebook proposes would represent a huge step forward over what neuroscience has shown is possible so far.

Neuroscientists who viewed Dugan's presentation today at Facebook's developer conference were left with more questions than answers. “It was pretty vague exactly how they are going to get direct neural activity from these optical techniques; that is the big question,” says Marc Slutzky, a neurologist and neuroengineer at Northwestern University. “If they can show that, it opens up a whole new realm of possibility, but the state of the art is nowhere near that. It remains to be seen how realistic it is to get this highly detailed information noninvasively.”

Slutzky says brain implants under the skull can so far decode the speech sounds people are thinking about producing with only about 40 to 50 percent accuracy.

The second project, which focuses on making it possible for people to recognize words with their skin, draws inspiration from Braille and Tadoma—a method of communication in which people who are both deaf and blind place a hand on the face of another person to feel the vibrations and air as that person speaks.

In an experiment, researchers built a device with 16 actuators on it and strapped it to an engineer’s arm. Another engineer had a tablet computer with nine words on its display; as he tapped the different words—like “grasp,” “black,” and “cone”—the first engineer felt vibrations on her arm that corresponded with the words and was able to correctly interpret that she needed to pick up a black cone on the table in front of her.

To do this, the researchers are taking a spoken word—like “black”—and separating it into its frequency components, then delivering those frequencies to the actuators on the arm, Dugan said.
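As a rough illustration of that idea (not Facebook's actual implementation, which the company has not published), the sketch below splits a short audio clip into 16 frequency bands and maps the energy in each band to a drive level for one of the 16 actuators. The sample rate, band edges, frame size, and test signal are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch (not Facebook's implementation): split a recorded word
# into 16 frequency bands and map the energy in each band to the drive level
# of one of the 16 vibrating actuators on the armband. Band edges, frame
# sizes, and the synthetic test signal are placeholder choices.

SAMPLE_RATE = 16_000            # audio sample rate, Hz (assumed)
N_ACTUATORS = 16                # actuators on the sleeve, per the demo
FRAME = 512                     # samples per analysis frame (~32 ms)

# 16 log-spaced bands spanning roughly the range of speech (assumed)
band_edges = np.logspace(np.log10(100), np.log10(8000), N_ACTUATORS + 1)

def word_to_actuator_frames(audio):
    """Return an array of shape (n_frames, 16): for each ~32 ms frame,
    the normalized energy in each of the 16 frequency bands."""
    frames = []
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        segment = audio[start:start + FRAME] * np.hanning(FRAME)
        spectrum = np.abs(np.fft.rfft(segment)) ** 2
        freqs = np.fft.rfftfreq(FRAME, d=1.0 / SAMPLE_RATE)
        # Sum the spectral power falling inside each actuator's band
        energies = np.array([
            spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(band_edges[:-1], band_edges[1:])
        ])
        frames.append(energies / (energies.max() + 1e-12))  # 0..1 drive levels
    return np.array(frames)

# Example: a synthetic stand-in for a spoken word -- a chirp sweeping upward
# in frequency, just to exercise the pipeline.
t = np.arange(0, 0.5, 1.0 / SAMPLE_RATE)
test_word = np.sin(2 * np.pi * (200 + 1200 * t) * t)
print(word_to_actuator_frames(test_word).shape)   # (n_frames, 16)
```

In a real device the per-frame band energies would be streamed to the actuators in real time; here the function simply returns them so the mapping from frequencies to vibration patterns is easy to inspect.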

“Instead of from her cochlea to her brain, she’s taking [the signal] from her arm to her brain,” she added.

The researchers think of this as a way to deliver language on the skin, hoping that eventually people will be able to use the method to distinguish between about 100 words. They may also use nonverbal signals like pressure and temperature.

Dugan said the idea is to eventually have a wearable that sends messages you can feel, without having to take your phone out and, say, interrupt an in-person conversation you’re having with someone.

While neither of these projects will yield a gadget you can buy anytime soon, Dugan said she can imagine that happening eventually.

“I think at two years we should have a pretty good sense of whether it’s possible to build them into consumer goods,” she said.
