
Mind Reading with Functional MRI

Scientists use brain imaging to predict what someone is looking at.
March 5, 2008

Scientists can accurately predict which of a thousand pictures a person is looking at by analyzing brain activity using functional magnetic resonance imaging (fMRI). The approach should shed light on how the brain processes visual information, and it might one day be used to reconstruct dreams.

“[The research] suggests that fMRI-based measurements of brain activity contain much more information about underlying neural processes than has previously been appreciated,” says Jack Gallant, a neuroscientist at the University of California, Berkeley, and senior author of the study.

FMRI detects blood flow in the brain, giving an indirect measure of brain activity. Most fMRI studies to date have used the technology to pinpoint the parts of the brain involved in different cognitive tasks, such as reading or remembering faces. The new study, however, adopts an emerging trend in fMRI: using the technology to analyze neural information processing. By employing computer models to analyze the kinds of information gathered from the neural activity, scientists can try to assess how neural signals are processed in different brain areas and ultimately fused to create a cohesive perception. Researchers have previously used this approach to show that some visual information can be gleaned from brain-imaging data, such as whether a person is looking at faces or houses.

According to the study, published Wednesday in the online version of the journal Nature, scientists first gathered information about how the brain processes images by recording activity in the visual cortex as subjects looked at several thousand randomly selected pictures. Neurons in this part of the brain respond to specific aspects of the visual scene, such as a patch of strongly contrasting light and dark, so the activity recorded in each area of the brain scan reflects the visual information being processed by neurons in that area of the brain. The researchers compiled this information to develop a computer model that would predict the pattern of brain activity triggered by any image.

When volunteers were later shown a new image not included in the training set, the computer model correctly identified which picture the person was looking at with 90 percent accuracy when choosing among 120 possibilities, and with 80 percent accuracy when choosing among 1,000.
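The identification step can be sketched in a few lines of code. This is a minimal illustration with synthetic data, not the authors' actual model: the real study fit an encoding model (based on Gabor-wavelet image features) to measured fMRI responses, then identified an image by finding the candidate whose predicted voxel pattern best matched the measured one. The weight matrix, feature vectors, and noise level below are all made up for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_candidates = 200, 50, 120

# Hypothetical learned encoding model: weights mapping image features
# (e.g. Gabor-filter outputs) to predicted voxel responses.
weights = rng.normal(size=(n_voxels, n_features))

# Feature vectors for each candidate image in the identification set.
candidates = rng.normal(size=(n_candidates, n_features))

# Predicted voxel activity pattern for every candidate image.
predicted = candidates @ weights.T  # shape: (n_candidates, n_voxels)

# Simulate a measured brain scan: the true image's predicted
# pattern plus measurement noise.
true_index = 42
measured = predicted[true_index] + rng.normal(scale=0.5, size=n_voxels)

def identify(measured, predicted):
    """Return the index of the candidate image whose predicted
    voxel pattern correlates best with the measured pattern."""
    z = (measured - measured.mean()) / measured.std()
    zp = (predicted - predicted.mean(axis=1, keepdims=True)) \
         / predicted.std(axis=1, keepdims=True)
    correlations = zp @ z / len(z)
    return int(np.argmax(correlations))

print(identify(measured, predicted))  # recovers 42, the true image
```

With many voxels and modest noise, the correct image's predicted pattern correlates far more strongly with the measurement than any other candidate's, which is why identification among 120 images can succeed at rates far above the 0.8 percent chance level.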

“They can do this with a surprising degree of accuracy,” says Frank Tong, a neuroscientist at Vanderbilt University, in Nashville, TN, who was not involved in the research. “People will be struck by how much visual information these researchers were able to extract from the brain.”

Gallant and his team plan to use this technology to better understand how the visual system works by building computational models of various theories and then testing their ability to interpret brain scans. “The most direct way to test theories about how the brain transforms information is to measure what information is stored in different parts of the person’s mind, and how that changes from structure to structure,” says Ken Norman, a neuroscientist at Princeton University, in New Jersey, who was not involved in the research. Similar methods might also be useful in determining how those steps go awry in people with different kinds of cognitive deficits, he says.

This approach could also shed light on cognitive phenomena that are difficult to study, such as attention. For example, when a person looks at a picture of a skier on a mountain, he can focus either on the skier in the foreground or on the mountain scenery in the background. Exactly how this happens is a major open question in cognitive neuroscience. Neural activity, and thus the information captured by the fMRI, might change depending on where the person focuses his attention. Computer models developed by Tong have shown early success in predicting where a person is focusing his attention using a similar approach.

In the long term, this technology might be used to study even more ephemeral phenomena, such as dreaming. “It is currently unknown whether processes like dreaming and imagination are realized in the brain in a way that is functionally similar to perception,” says Gallant. “If they are, then the techniques developed in our study should be directly applicable.”

However, Gallant and others caution that the technology is not yet able to actually reconstruct from scratch what a person sees. While researchers are working on this capability, it is largely limited by the resolution of fMRI itself. Current brain-scanning devices have a spatial resolution of approximately one millimeter, an area that contains hundreds of neurons, each responding to different bits of visual information.

One of the most provocative potential applications for this type of “mind reading” technology has been in lie detection: for example, trying to determine directly from brain activity whether a suspect recognizes a photograph of a crime scene that she says she has never visited. (See “Imaging Deception in the Brain.”) Most neuroscientists believe that there isn’t enough data to determine if this is a reliable method of lie detection, and Gallant says that his technology is unlikely to make it any more so. “Any brain-reading device that aims to decode stored memories will inevitably be limited not only by the technology itself, but also by the quality of the stored information.”
