
Reading Thoughts with Brain Imaging

Researchers use fMRI to determine the contents of short-term memory.
February 18, 2009

Functional magnetic resonance imaging (fMRI) looks more and more like a window into the mind. In a study published online today in Nature, researchers at Vanderbilt University report that from fMRI data alone, they could distinguish which of two images subjects were holding in their memory, even several seconds after the images were removed. The study also pinpointed, for the first time, where in the brain visual working memory is maintained.

Big Brother is watching you: Researchers used fMRI to peer into the visual cortex of a subject and accurately predict which of two images (circular grating, above) he was holding in his short-term memory. The experimenters used specialized algorithms to tease out subtle patterns in brain activity (represented here in red and green) specific to that image in order to make the call.

Visual working memory allows us to briefly store and act upon specific details from images that we’ve seen: what color they are, how they’re oriented, and how finely their patterns repeat. But how and where these details are stored has remained a mystery. Early visual areas, which are the first to receive and process visual information, don’t seem to stay active long enough to do the job. And higher visual areas don’t have the machinery to retain such fine-grained details.

“It’s been elusive,” says John-Dylan Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience, in Berlin. “This is a truly brilliant study that now convincingly demonstrates that the information about fine-grained contents of visual experience is held online in the early visual cortex across memory periods.”

In the study, subjects were briefly shown two successive images of a grating, each oriented at a different angle. They were then given a cue telling them which one to remember. To ensure that the memory was maintained, subjects were shown a third grating several seconds later and prompted to indicate how it was rotated compared with the remembered one. Throughout the whole process, an fMRI scanner monitored activity in four early visual areas of the brain.

By analyzing the activity in those areas during the 11-second retention period, the experimenters were able to determine, with more than 80 percent accuracy, which grating orientation the subject had in mind. To do so, they used a sophisticated analytical tool called a pattern classifier, calibrated for each subject on a set of training trials. Rather than simply measuring the overall level of activity, the pattern classifier probes for patterns in how that activity is distributed across the brain.
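The paper doesn’t walk through the math, but the general approach, often called multivoxel pattern analysis, can be sketched with standard tools. Below is a minimal illustration in Python using NumPy and scikit-learn, on entirely synthetic data, with a linear support vector machine standing in for the classifier; the actual classifier, preprocessing, and voxel selection in Harrison and Tong’s study may well differ.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Synthetic stand-in for one subject: 120 trials x 200 voxels of
    # delay-period activity. Each of the two grating orientations
    # (labels 0 and 1) evokes a weak voxel-wise pattern buried in noise.
    n_trials, n_voxels = 120, 200
    labels = rng.integers(0, 2, size=n_trials)        # which grating was remembered
    pattern = rng.normal(0, 1, size=n_voxels)         # hypothetical orientation-tuned pattern
    pattern -= pattern.mean()                         # zero mean: redistributes activity
                                                      # without raising its overall level
    X = rng.normal(0, 1, size=(n_trials, n_voxels))   # background noise
    X += 0.1 * np.outer(np.where(labels == 1, 1.0, -1.0), pattern)

    # One linear classifier per subject, scored with cross-validation
    # so that training and test trials never mix.
    clf = LinearSVC(C=1.0, max_iter=20000)
    accuracy = cross_val_score(clf, X, labels, cv=10).mean()
    print(f"delay-period decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level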

This approach turned out to be crucial. Previous studies had tried, without success, to predict subjects’ memories from overall brain activity in the early visual areas, and that approach failed here as well. In roughly half of the subjects, overall activity returned to baseline levels soon after the images were removed from view, and in all subjects it dropped drastically, making it impossible to tell from activity levels alone which image the subject was remembering. But by teasing out specific activity patterns, the pattern classifier revealed the information still encoded in those areas.
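To see why the distinction matters, the same synthetic data from the sketch above can be decoded two ways: from each trial’s overall (mean) activity level, and from the full multivoxel pattern. Because the simulated orientation signal redistributes activity without changing its overall level, the mean carries no information. This is a toy continuation of the earlier sketch, not the paper’s actual analysis.

    # Continuing the sketch above: decode from the overall activity level
    # versus from the full pattern of activity across voxels.
    mean_activity = X.mean(axis=1, keepdims=True)     # one number per trial

    overall = cross_val_score(clf, mean_activity, labels, cv=10).mean()
    multivoxel = cross_val_score(clf, X, labels, cv=10).mean()
    print(f"mean-signal decoding: {overall:.2f}")     # hovers near 0.5 (chance)
    print(f"multivoxel decoding:  {multivoxel:.2f}")  # well above chance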

“Using these pattern-recognition-based techniques, the authors have been able to show that there is information stored there, even if on the surface it might not be obvious because the overall activity levels don’t go up,” says Haynes.

Previous studies using fMRI have shown that it’s possible to determine which of a number of pictures a person is looking at. But the new study is unique in that it decodes not sensory information arriving in the brain, but a memory.

The researchers also found that the brain-activity patterns linked to looking at a grating and remembering it bear a striking resemblance to each other. “During working memory for visual information, it almost seemed as though these early areas are holding an echo of the initial visual response,” says Stephenie Harrison, a graduate student at Vanderbilt and the lead author on the Nature paper. “It suggests, in a way, that the memory trace itself is very similar to perception.”
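One standard way to quantify such a resemblance is cross-generalization: train the classifier on activity recorded while the gratings were actually on screen, then test it on the delay-period activity. Whether this mirrors the paper’s exact analysis is an assumption; the continuation below simply illustrates the idea on the same synthetic data, where perception and memory share the same underlying pattern.

    # Continuing the sketch: if memory is an "echo" of perception, a classifier
    # trained on perception-period patterns should transfer to the memory delay.
    X_percept = rng.normal(0, 1, size=(n_trials, n_voxels))
    X_percept += 0.3 * np.outer(np.where(labels == 1, 1.0, -1.0), pattern)  # stronger signal while viewing

    clf.fit(X_percept, labels)        # train while the grating is visible...
    transfer = clf.score(X, labels)   # ...then test on the memory delay
    print(f"perception-to-memory transfer accuracy: {transfer:.2f}")  # above chance if patterns are shared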

It remains to be seen how the activity patterns detected by fMRI, which essentially measures blood flow in the brain, translate into actual neural signals, says Haynes. Because it measures activity in chunks of three cubic millimeters, fMRI can’t resolve what individual neurons are doing. But “it gives us a better sense of what memory is,” says Harrison. “It’s hard to know because it’s such a subjective personal experience, but this gives us a better sense of what someone might be doing: they might actually be visualizing the information.”

No need to worry yet about Big Brother reading your mind. For now, real-world applications remain limited, says Frank Tong, an associate professor of psychology and senior author on the study. The ability to reconstruct from scratch a complex memory or imagined scenario is a long way off. “We’re still just discriminating a simple binary state,” Tong says. “If you increase the number of options, this would get progressively more difficult.”

