
How Close Is a Workable Brain-Computer Interface?

Noninvasive communication between brains and computers just came a step closer.

The ultimate goal of brain-computer interfaces is something direct, noninvasive, and relatively high-bandwidth.

Not even science fiction authors believe that a noninvasive approach is ever going to happen. Think about all the times you’ve seen someone in movies like The Matrix “jack in” to a computer via a gnarly port in their skull.


In the real world, however, few people are ever fitted with direct neural interfaces to computers. The results have been impressive: macaques moving robot arms just by thinking about it, and patients with locked-in syndrome communicating for the first time in ages. But this hasn’t translated into a viable solution for most people who might need such an interface.

That’s why new research from Spain is so exciting. Scientists led by Eduardo Iáñez of Miguel Hernandez University have for the first time combined a number of desirable features into a single brain-computer interface that is noninvasive, spontaneous and asynchronous.

About that asynchronicity: because of the bandwidth limitations of recording brain activity through EEG (external electrodes placed on the outside of the head), previous attempts at noninvasive brain-computer interfaces required users to direct the computer only during certain time slots. Imagine a metronome ticking very slowly, say once a second, directing you to imagine the movement of your robotic arm starting… now. How tedious.
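
To see why that matters, here is a minimal sketch of the difference (it is not from the paper; `classify`, `get_eeg`, and `send_command` are placeholder names): a cue-paced system only consults the classifier when the metronome ticks, while an asynchronous one listens continuously.

```python
import time

def classify(eeg_window):
    """Placeholder for any EEG classifier; returns 'left', 'right', or None."""
    ...

def synchronous_loop(get_eeg, send_command, cue_period_s=1.0):
    # Cue-paced: the user may only issue a command when the "metronome" ticks.
    while True:
        time.sleep(cue_period_s)                  # wait for the next cue
        decision = classify(get_eeg(cue_period_s))
        if decision is not None:
            send_command(decision)

def asynchronous_loop(get_eeg, send_command, window_s=0.5):
    # Asynchronous: the system listens continuously; the user acts at will.
    while True:
        decision = classify(get_eeg(window_s))    # rolling window of recent EEG
        if decision is not None:
            send_command(decision)
```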

Iáñez and colleagues’ approach gets around this limitation by using four different models, each built on assumptions that are sometimes the opposite of the others’. This way, however a subject’s brain happens to be wired up, all the computer has to figure out is whether they mean “left” or “right” in order to direct a robot arm in two dimensions.
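
The article doesn’t spell out what those four models are, so the sketch below is only an illustration of the general idea: several classifiers with deliberately different assumptions, each trained on the same EEG features, voting on “left” versus “right.” The scikit-learn models here are stand-ins, not the authors’ actual ones.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def build_ensemble(X_train, y_train):
    """X_train: EEG feature vectors (n_samples x n_features); y_train: 'left'/'right' labels."""
    models = [
        LinearDiscriminantAnalysis(),          # assumes a linear class boundary
        SVC(kernel="rbf"),                     # allows a nonlinear boundary
        KNeighborsClassifier(n_neighbors=5),   # purely local, no global model
        GaussianNB(),                          # assumes independent Gaussian features
    ]
    for model in models:
        model.fit(X_train, y_train)
    return models

def vote(models, features):
    """Majority vote across the ensemble for a single EEG window."""
    features = np.asarray(features).reshape(1, -1)
    predictions = [model.predict(features)[0] for model in models]
    return max(set(predictions), key=predictions.count)
```

Majority voting is just one way to combine such models; the point is that when the models’ assumptions pull in opposite directions, the combination can cope with however a particular subject’s brain happens to be wired.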

Here’s a video of the results. First, you’ll see the simulation, running in MATLAB, and then the arm itself responding in near real time to the user. (The computer has to sample brain activity in half-second intervals in order to gather enough data to detect what the user intends.)
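
Taking that half-second figure at face value, a control loop along these lines is plausible (`acquire`, `extract_features`, `classify`, and `robot_arm.move` are placeholder names for illustration, not the authors’ code): every half second, the latest EEG window is reduced to features, turned into a “left”/“right” decision, and forwarded to the arm.

```python
WINDOW_S = 0.5  # the article reports roughly half-second windows per decision

def control_loop(acquire, extract_features, classify, robot_arm):
    """Continuously turn the most recent EEG window into an arm command."""
    while True:
        window = acquire(WINDOW_S)            # most recent 0.5 s of EEG samples
        features = extract_features(window)   # e.g. per-channel band power
        direction = classify(features)        # 'left', 'right', or None
        if direction is not None:
            robot_arm.move(direction)         # near-real-time arm update
```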

Users drive the system simply by imagining what they want to happen – for example, they could visualize moving their hand in the direction they want the arm to move.

Here’s a slightly more impressive video of an arm being activated in three dimensions, although the movements are clearly pre-programmed.

Future research goals include moving this interface out of two dimensions and into three. If they succeed, they’ll have at least matched in humans an experiment performed with macaques, in which the monkeys used a brain-controlled arm to feed themselves. That would be quite a feat for patients who are currently unable to engage in such activities, and the main barrier appears to be how cleverly computers can process the signal: in other words, the sophistication of their algorithm.


Follow Mims on Twitter or contact him via email.
