Functional magnetic resonance imaging (fMRI) looks more and more like a window into the mind. In a study published online today in Nature, researchers at Vanderbilt University report that from fMRI data alone, they could distinguish which of two images subjects were holding in their memory, even several seconds after the images were removed. The study also pinpointed, for the first time, where in the brain visual working memory is maintained.
Visual working memory allows us to briefly store and act upon specific details from images that we’ve seen: what color they are, how they’re oriented, and how fine their detail is. But how and where these details are stored has remained a mystery. Early visual areas, which are the first to receive and process visual information, don’t seem to stay active long enough to do the job. And higher visual areas don’t have the machinery to retain such fine-grained details.
“It’s been elusive,” says John-Dylan Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience, in Berlin. “This is a truly brilliant study that now convincingly demonstrates that the information about fine-grained contents of visual experience is held online in the early visual cortex across memory periods.”
In the study, subjects were briefly shown two successive images of a grating, each oriented at a different angle. They were then given a cue telling them which one to remember. To ensure that the memory was maintained, subjects were shown a third grating several seconds later and prompted to indicate how it was rotated compared with the remembered one. Throughout the whole process, an fMRI scanner monitored activity in four early visual areas of the brain.
By analyzing the activity in those areas during the 11-second retention period, the experimenters were able to determine, with more than 80 percent accuracy, which grating orientation the subject had in mind. To do so, they used a sophisticated analytical tool called a pattern classifier, calibrated to each individual subject with a set of training trials. Rather than simply measuring the overall level of activity, the pattern classifier could probe for patterns in how that activity was distributed across the brain.
This approach turned out to be crucial. Previous studies that tried to predict subjects’ memories from overall brain activity in the early visual areas had failed, and that approach fared no better here. In roughly half of the subjects, overall activity returned to baseline levels soon after the images were removed from view, and in all subjects it was drastically reduced, making it impossible to tell from activity levels alone which image a subject was remembering. But by teasing out specific activity patterns, the pattern classifier was able to reveal the information hidden in those areas.
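The contrast between the two approaches can be sketched in code. The toy example below is not the authors’ pipeline; it uses synthetic "voxel" data (via NumPy and scikit-learn, both assumptions of this sketch) in which the two stimulus classes evoke opposite, voxel-specific biases that cancel out in the average signal. A linear pattern classifier trained on the full voxel pattern decodes the class easily, while a classifier given only the mean activity performs at chance:

```python
# Illustrative sketch (not the study's actual analysis): decoding a
# binary stimulus from distributed voxel patterns vs. overall activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Each voxel "prefers" one of the two orientations; preferences are
# exactly balanced, so the mean across voxels carries no label signal.
signature = np.repeat([1.0, -1.0], n_voxels // 2)
rng.shuffle(signature)

labels = rng.integers(0, 2, n_trials)          # which grating (0 or 1)
signal = np.outer(2 * labels - 1, signature)   # +signature or -signature
data = signal + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Pattern classifier: sees the full distributed voxel pattern.
pattern_acc = cross_val_score(LogisticRegression(max_iter=1000),
                              data, labels, cv=5).mean()

# "Overall activity" baseline: sees only the mean across voxels.
mean_feature = data.mean(axis=1, keepdims=True)
mean_acc = cross_val_score(LogisticRegression(max_iter=1000),
                           mean_feature, labels, cv=5).mean()

print(f"pattern decoding accuracy: {pattern_acc:.2f}")  # well above chance
print(f"mean-activity accuracy:    {mean_acc:.2f}")     # near 0.5 (chance)
```

The point of the toy setup is the same one the study makes: information can be present in *how* activity is distributed across voxels even when the overall activity level is uninformative.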
“Using these pattern-recognition-based techniques, the authors have been able to show that there is information stored there, even if on the surface it might not be obvious because the overall activity levels don’t go up,” says Haynes.
Previous studies using fMRI have shown that it’s possible to determine which of a number of pictures a person is looking at. But the new study is unique in that it decodes not incoming sensory information but a memory.
The researchers also found that the brain-activity patterns linked to looking at a grating and remembering it bear a striking resemblance to each other. “During working memory for visual information, it almost seemed as though these early areas are holding an echo of the initial visual response,” says Stephenie Harrison, a graduate student at Vanderbilt and the lead author on the Nature paper. “It suggests, in a way, that the memory trace itself is very similar to perception.”
It still remains to be seen how the activity patterns detected by fMRI, which essentially measures blood flow in the brain, translate into actual neural signals, says Haynes. Because it averages activity over voxels of roughly three cubic millimeters, fMRI can’t reveal what individual neurons are doing. But “it gives us a better sense of what memory is,” says Harrison. “It’s hard to know because it’s such a subjective personal experience, but this gives us a better sense of what someone might be doing: they might actually be visualizing the information.”
No need to worry yet about Big Brother reading your mind. For now, real-world applications remain limited, says Frank Tong, an associate professor of psychology and senior author on the study. The ability to reconstruct from scratch a complex memory or imagined scenario is a long way off. “We’re still just discriminating a simple binary state,” Tong says. “If you increase the number of options, this would get progressively more difficult.”