Functional magnetic resonance imaging (fMRI) looks more and more like a window into the mind. In a study published online today in Nature, researchers at Vanderbilt University report that from fMRI data alone, they could distinguish which of two images subjects were holding in their memory, even several seconds after the images were removed. The study also pinpointed, for the first time, where in the brain visual working memory is maintained.
Visual working memory allows us to briefly store and act upon specific details from images that we’ve seen: what color they are, how they’re oriented, and how finely patterned they are (their spatial frequency). But how and where these details are stored has remained a mystery. Early visual areas, which are the first to receive and process visual information, don’t seem to stay active long enough to do the job. And higher visual areas don’t have the machinery to retain such fine-grained details.
“It’s been elusive,” says John-Dylan Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience, in Berlin. “This is a truly brilliant study that now convincingly demonstrates that the information about fine-grained contents of visual experience is held online in the early visual cortex across memory periods.”
In the study, subjects were briefly shown two successive images of a grating, each oriented at a different angle. They were then given a cue telling them which one to remember. To ensure that the memory was maintained, subjects were shown a third grating several seconds later and prompted to indicate how it was rotated relative to the remembered one. Throughout the whole process, an fMRI scanner monitored activity in four early visual areas of the brain.
By analyzing the activity in those areas during the 11-second remembering period, the experimenters were able to determine, with more than 80 percent accuracy, which grating orientation the subject had in mind. To do so, they used a sophisticated analytical tool called a pattern classifier, calibrated for each individual subject through a series of training trials. Rather than simply measuring the overall level of activity, the pattern classifier could probe for patterns in how that activity was distributed across the brain.
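The article doesn’t detail how the classifier works internally; the sketch below illustrates the general pattern-classification idea on synthetic data, using a simple nearest-centroid correlation classifier. All numbers, names, and the classifier choice are illustrative assumptions, not the researchers’ actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_train, n_test = 50, 40, 20

# Assumed voxel "templates": each grating orientation evokes its own
# spatial pattern of activity across voxels, even when overall
# activity is low.
template_a = rng.normal(0.0, 1.0, n_voxels)
template_b = rng.normal(0.0, 1.0, n_voxels)

def simulate(template, n, signal=0.5, noise=1.0):
    # Weak signal buried in measurement noise, mimicking the faint
    # delay-period fMRI activity described in the study.
    return signal * template + rng.normal(0.0, noise, (n, n_voxels))

# "Training trials" used to calibrate the classifier per subject.
X_train = np.vstack([simulate(template_a, n_train),
                     simulate(template_b, n_train)])
y_train = np.array([0] * n_train + [1] * n_train)

# Nearest-centroid classifier: average the training patterns for each
# orientation, then label a new pattern by which centroid it
# correlates with more strongly.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def classify(pattern):
    corrs = [np.corrcoef(pattern, c)[0, 1] for c in centroids]
    return int(np.argmax(corrs))

# Held-out "test trials" estimate decoding accuracy.
X_test = np.vstack([simulate(template_a, n_test),
                    simulate(template_b, n_test)])
y_test = np.array([0] * n_test + [1] * n_test)
acc = np.mean([classify(x) == y for x, y in zip(X_test, y_test)])
print(f"decoding accuracy: {acc:.2f}")
```

Because the classifier compares the *spatial arrangement* of activity rather than its overall level, it can succeed even when the mean signal across voxels has fallen back toward baseline.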
This approach turned out to be crucial. Previous studies had tried to predict subjects’ memories from the overall level of brain activity in the early visual areas, an approach that fared no better here. In roughly half of the subjects, overall activity returned to baseline levels soon after the images were removed from view, and in all subjects it dropped sharply, making it impossible to decode from activity levels alone which image the subject was remembering. But by teasing out specific patterns of activity, the pattern classifier revealed the information that had been hidden in those areas.