
Brain-Controlled Typing May Be the Killer Advance That AR Needs

Why type when you can just think?
November 8, 2017
Justin Saglio

Clicking, typing, and swiping are the norm in 2017. But to streamline the way we use virtual and augmented reality, a startup called Neurable wants to replace all of that with simply thinking.

“Every major computational technology has needed an evolution in interaction,” Ramses Alcaide, cofounder and CEO of the firm, explained at MIT Technology Review’s EmTech conference in Cambridge, Massachusetts, on Wednesday. “When it came to the computer, we had the graphical user interface and the mouse. With smartphones, we went to capacitive touch screens. And now that we’re entering augmented reality, we need to start thinking about more natural ways of interacting—your hand, your eye, and even your brain.”

That, said Alcaide, could make augmented-reality headsets genuinely useful, allowing wearers to influence what they see without fumbling for a keypad or controller. It's why Neurable has spent more than a year developing brain-control systems for VR. Its system uses a headset fitted with dry electrodes that sit on the scalp and track brain activity, and the firm's software analyzes that activity to work out what the wearer wants to do. A couple of months ago, the company showed off a snazzy VR game that uses the technology to let you move objects with your mind.
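
Neurable hasn't published the details of its software, but a common recipe for this kind of EEG-driven selection is to highlight candidate items one at a time, slice the electrode readings into short epochs around each highlight, and train a classifier to recognize the brain's response to the item the wearer is actually attending to. The sketch below illustrates that general idea in Python using synthetic data and scikit-learn; the channel count, timings, features, and classifier are placeholders for illustration, not Neurable's pipeline.

```python
# Generic sketch of EEG-based target selection (P300-style); not Neurable's code.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

N_CHANNELS = 8   # dry electrodes on the scalp (illustrative count)
N_SAMPLES = 64   # samples per epoch, e.g. ~500 ms at 128 Hz (illustrative)

def synthetic_epoch(is_target):
    """Fake EEG epoch: target items get a small added deflection after the highlight."""
    epoch = rng.normal(0.0, 1.0, size=(N_CHANNELS, N_SAMPLES))
    if is_target:
        epoch[:, 35:45] += 1.5  # crude stand-in for an event-related response
    return epoch

def features(epoch):
    """Downsample each channel by averaging groups of samples, then flatten."""
    return epoch.reshape(N_CHANNELS, -1, 4).mean(axis=2).ravel()

# Training data: epochs labeled by whether the highlighted item was the one the user wanted.
X, y = [], []
for _ in range(200):
    is_target = rng.random() < 0.2
    X.append(features(synthetic_epoch(is_target)))
    y.append(int(is_target))

clf = LinearDiscriminantAnalysis()
clf.fit(np.array(X), np.array(y))

# At run time, score one epoch per candidate item and pick the most "target-like" one.
candidates = ["A", "B", "C", "D"]
scores = [clf.decision_function([features(synthetic_epoch(c == "C"))])[0] for c in candidates]
print("selected:", candidates[int(np.argmax(scores))])
```

In a real system the epochs would come from the headset's electrodes rather than a random-number generator, and scores from repeated highlights would typically be accumulated before a selection is committed.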

But that kind of thing is not the company’s true goal. “The killer interaction is not something exciting; it’s something boring,” Alcaide said at the conference. “It’s something as simple as typing, swiping, pinch-and-zoom, and clicking.”

To that point, he showed off an alpha version of Neurable’s first typing tool. The current speed record for typing via brain-computer interface is eight words per minute, but that record relies on an invasive implant that reads signals from inside a person’s brain. “We’re working to beat that record, even though we’re using a noninvasive technology,” Alcaide said. “We’re getting about one letter per second, which is still fairly slow, because it’s an early build. We think that in the next year we can further push that forward.”

He said that by introducing AI into the system, Neurable should be able to reduce the delay between letters and also predict what a user is trying to type. And that might make our interactions with technology smoother than ever.
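
Alcaide didn’t say what form that AI will take. One familiar approach is character-level prediction: if software can rank the letters most likely to come next, the interface only needs a brain-signal confirmation among a few candidates rather than the whole alphabet, cutting the time per letter. Here is a toy sketch, with made-up bigram counts standing in for a trained language model:

```python
# Toy illustration of letter prediction narrowing the choices a BCI speller must confirm.
# The bigram counts are invented; a real system would use a trained language model.
from collections import Counter

# Hypothetical character-bigram counts, as if learned from a text corpus.
BIGRAMS = {
    "t": Counter({"h": 90, "o": 40, "e": 30, "i": 20}),
    "h": Counter({"e": 80, "a": 30, "i": 25, "o": 15}),
    "e": Counter({" ": 70, "r": 40, "n": 30, "s": 25}),
}

def top_candidates(prev_char, k=3):
    """Return the k most likely next letters given the previous one."""
    counts = BIGRAMS.get(prev_char)
    if counts is None:
        return []
    return [ch for ch, _ in counts.most_common(k)]

# Instead of asking the user to pick from the full alphabet, the interface could
# highlight only the top few predictions, so fewer EEG confirmations are needed.
typed = "t"
print("likely next letters after", repr(typed), ":", top_candidates(typed))
```

A production system would presumably combine a much stronger language model with the raw EEG evidence, weighing both when deciding which letter the user meant.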
