
Single Cell Brain Control

Patients select images on a computer screen using only their thoughts.
October 29, 2010

People can exert conscious control over individual neurons and use that control to alter images on a computer screen, according to research published this week in Nature. The researchers say the findings help explain how our brains decide which stimuli in our noisy environment to pay attention to and which to ignore. The work may ultimately aid the development of brain-computer interfaces designed to help severely paralyzed people communicate.

Christof Koch and collaborators at Caltech studied people awaiting brain surgery for epilepsy; these patients have electrodes implanted directly into their brains to record their neural activity. Previous research by the same team had shown that individual neurons can respond preferentially to images of specific objects or people, such as Halle Berry. In the new experiment, the researchers identified some of these cells and then asked patients to try to manipulate their activity, translating the neural signals into a control signal for a nearby computer.

An article in Nature News describes the experiment:

In this experiment, the scientists flashed a series of 110 familiar images – such as pictures of Marilyn Monroe or Michael Jackson – on a screen in front of each of the 12 patients and identified individual neurons that uniquely and reliably responded to one of the images. They selected four images for which they had found responsive neurons in different parts of a subject’s medial temporal lobe (MTL).

Then they showed the subject two images superimposed on each other. Each was 50% faded out.

The subjects were told to think about one of the images and enhance it. They were given ten seconds, during which time the scientists ran the firing of the relevant neurons through a decoder. They fed the decoded information back into the superimposed images, fading the image whose neuron was firing more slowly and enhancing the image whose neuron was firing more quickly.

Watching this on-line feedback, the subjects were able to make their targeted image completely visible, and entirely eliminate the distracting image, in more than two thirds of trials, and they learnt to do so very quickly.

Afterwards, they reported that they had used different cognitive strategies. Some tried to enhance the target image, while others tried to fade the distracting images. Both had worked. But feedback on the computer screens was vital. When this ‘brain-machine interface’ wasn’t provided, their success rates plummeted below one third.
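The closed-loop feedback described above can be sketched in code. The following is a minimal illustration only, assuming a simple smoothed rate-difference decoder and a linear opacity update; the interfaces `get_firing_rates` and `render` are hypothetical stand-ins, and the actual decoder and update rule used by the researchers are not described here.

```python
# Illustrative sketch of a neural-feedback loop like the one described above.
# ASSUMPTIONS: get_firing_rates() and render() are hypothetical interfaces,
# and the rate-difference update rule is a stand-in, not the authors' method.

def run_feedback_trial(get_firing_rates, render, duration_s=10.0, dt=0.1, gain=0.05):
    """Drive the opacity of two superimposed images from two neurons' firing rates.

    get_firing_rates() -> (rate_a, rate_b): current firing rates (spikes/s)
        of the neurons selective for image A and image B.
    render(alpha_a, alpha_b): display the superimposed images at the given
        opacities, providing the on-screen feedback the subjects watched.
    """
    alpha_a, alpha_b = 0.5, 0.5          # both images start 50% faded
    steps = int(duration_s / dt)         # e.g. a ten-second trial
    for _ in range(steps):
        rate_a, rate_b = get_firing_rates()
        # Enhance the image whose neuron fires faster; fade the other.
        delta = gain * (rate_a - rate_b) * dt
        alpha_a = min(1.0, max(0.0, alpha_a + delta))
        alpha_b = min(1.0, max(0.0, alpha_b - delta))
        render(alpha_a, alpha_b)
        # Stop early once one image is fully visible and the other eliminated.
        if {alpha_a, alpha_b} == {0.0, 1.0}:
            break
    return alpha_a, alpha_b
```

In this toy version, the subject "wins" a trial when the targeted image reaches full opacity and the distractor disappears, mirroring the success criterion reported in the study.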
