
General-Purpose Brain-Computer Interface Brings Thought Control to Any PC

New system could make it easier for paralyzed people to communicate via computer keyboard.

For people who are paralyzed, a brain-computer interface is sometimes the only feasible way to communicate. The idea is that sensors monitor brain waves and the nerve signals that control facial expressions, and predefined signal patterns are then mapped to computer commands.

For example, the pattern of nerve signals associated with a left smirk might cause the cursor to move to the left, while a right smirk might cause it to move right; the brain waves associated with concentration might trigger a double click, and so on. In this way, an otherwise paralyzed user can control a computer.
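To make that mapping concrete, here is a minimal sketch in Python, not the authors' code: the signal labels and commands below are hypothetical stand-ins for whatever a real classifier running on the headset's output would emit.

    # A minimal sketch (not the authors' code) of dispatching classified
    # signals to cursor commands. The signal labels are hypothetical; a
    # real system would receive them from a classifier on the EEG stream.

    ACTIONS = {
        "left_smirk": "move_cursor_left",
        "right_smirk": "move_cursor_right",
        "concentrate": "double_click",
    }

    def dispatch(signal: str) -> str:
        """Translate one classified signal into an interface command."""
        return ACTIONS.get(signal, "no_op")  # unrecognized signals do nothing

    print(dispatch("left_smirk"))   # -> move_cursor_left

However it is implemented, the principle is the same: a handful of reliably distinguishable signals, each bound to one command.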

But there are significant problems with this kind of approach. The first is that these systems are highly susceptible to noise, so they often make mistakes. The second is that they are awkward and clunky to use, so communication is painfully slow. Typing speeds can be as low as about one character per minute. A way of significantly speeding up brain-computer interfaces would be hugely useful.

Enter Ori Ossmy and pals at Ben-Gurion University in Israel, who have created a general-purpose brain-computer interface called MindDesktop that allows a user to control most aspects of a Windows PC at typing speeds as fast as one character every 20 seconds. That’s an order of magnitude better than some other systems.

First some background. In recent years, various companies have begun to sell off-the-shelf brain-monitoring devices that measure signals produced by the brain and the nerve-firing patterns associated with facial movement. Anybody can buy these devices for a few hundred dollars.

One of the better known is the Emotiv EPOC+, a 14-channel EEG neuroheadset that connects wirelessly to a computer. The device costs $800 (although the company makes a cheaper, lower-spec model that sells for $300).

Using this or other brain-measuring devices to control a computer is hard, however, not least because many interfaces are clunky and slow. So Ossmy and co have built a system that takes the signals detected by the Emotiv headset and exploits them in an interface that’s relatively easy to use.

Out of the box, the Emotiv headset can spot the nerve signals associated with various facial expressions. But it can also be trained to spot the brain-wave patterns associated with thinking about, say, a favorite flower or song or pet.  

Each of these thoughts can then be used to trigger a different action in the software. Indeed, the entire system is designed to work with just three inputs, so it can be adapted to any input device that can reliably distinguish three different signals.

The interface takes some novel approaches. For example, users can select an item anywhere on the screen using a “hierarchical pointing device.” This divides the entire screen into four quarters. The user selects the quarter that contains the item of interest; that quarter is then divided into four smaller quarters, one of which the user selects, and so on. The division continues until a quarter contains only the item of interest, and selecting that final quarter clicks on it.
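As a rough illustration, here is a minimal sketch of that narrowing loop, under assumptions not taken from the paper: the screen is a rectangle, a choose_quarter function reports which quarter the user picked, and narrowing stops once the region is roughly icon-sized.

    # A rough sketch of the hierarchical pointing idea. All names here are
    # illustrative assumptions, not the paper's implementation.
    # choose_quarter(x, y, w, h) returns the user's pick:
    # 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right.

    def hierarchical_point(width, height, choose_quarter, min_size=16):
        x, y, w, h = 0.0, 0.0, float(width), float(height)
        while w > min_size or h > min_size:
            quarter = choose_quarter(x, y, w, h)
            w, h = w / 2, h / 2
            if quarter in (1, 3):   # right-hand quarters
                x += w
            if quarter in (2, 3):   # bottom quarters
                y += h
        return x + w / 2, y + h / 2  # click the centre of the final quarter

    # Example: a user who always picks the top-right quarter of a
    # 1920x1080 screen converges on the screen's top-right corner.
    print(hierarchical_point(1920, 1080, lambda x, y, w, h: 1))

Because each step quarters the remaining area, even a full-HD screen is narrowed to an icon-sized target in about seven selections.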

This allows the user to open or close any application. An onscreen keyboard, predictive text, and other shortcuts speed up the process of communication.

The Israeli team put the software through its paces by asking 17 healthy adults to use it on a standard PC laptop and then measuring how long they took to perform certain tasks, such as opening a folder, playing a video, and searching the Internet for a topic.

The results show a clear learning effect. Within just three sessions, all the users finished their tasks more quickly and were capable of sending a simple e-mail.

Interestingly, the researchers identified long hair as a potential problem because it interferes with the EEG sensors and causes more signal errors; this made the tasks more difficult for women than for men. “The results indicate that users can quickly learn how to activate the new interface and efficiently use it to operate a PC,” say Ossmy and co.

However, there is much work to be done to make these systems comparable with other forms of communication, such as texting and ordinary typing. Better sensors will clearly help, but the user interface itself will always be critical.

Ref: arxiv.org/abs/1705.07490 : MindDesktop: A General Purpose Brain Computer Interface
