Brain Imaging Reveals What You’re Watching

Researchers develop an fMRI-based model to reconstruct moving images that people are seeing.
September 22, 2011

Scientists are a step closer to constructing a digital version of the human visual system. Researchers at the University of California, Berkeley, have developed an algorithm that, applied to functional magnetic resonance imaging (fMRI) data, can reconstruct the moving images a person is seeing.

Neuroscientists have been using fMRI, which measures changes in blood oxygen levels in the brain, to study the human visual system for years. This works fine for studying how we see static images, but it falls short for moving imagery: individual neuronal activity occurs on a much faster time scale than the sluggish blood-flow response, so a few years ago the researchers behind the current study set out to devise a computational model that could recover that faster activity from the fMRI signal. The study shows that this approach is not only successful but remarkably accurate.
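
To see why raw fMRI blurs fast dynamics, it helps to look at the hemodynamic response itself. The sketch below is illustrative only, not code from the study: it convolves a hypothetical burst of neural activity with a double-gamma hemodynamic response function, a standard approximation in the fMRI literature.

```python
# Minimal sketch (not from the study): why fMRI's slow hemodynamic signal
# blurs fast neural activity. We convolve a hypothetical spike train with a
# canonical double-gamma hemodynamic response function (HRF).
import numpy as np
from scipy.stats import gamma

def canonical_hrf(dt=0.1, duration=30.0):
    """Double-gamma HRF sampled every `dt` seconds (a common approximation)."""
    t = np.arange(0, duration, dt)
    peak = gamma.pdf(t, a=6)          # positive response peaking ~5 s
    undershoot = gamma.pdf(t, a=16)   # delayed post-stimulus undershoot
    hrf = peak - 0.17 * undershoot
    return hrf / hrf.max()

dt = 0.1
neural = np.zeros(600)                # 60 s of activity at 10 Hz resolution
neural[[50, 55, 60]] = 1.0            # three bursts, half a second apart
bold = np.convolve(neural, canonical_hrf(dt))[: len(neural)]
# The three distinct bursts merge into a single BOLD bump several seconds
# wide, which is why fast dynamics can't be read directly from raw fMRI.
print(f"BOLD peaks at t = {np.argmax(bold) * dt:.1f} s (bursts were at ~5 s)")
```

Three bursts spaced half a second apart emerge as one bump lasting many seconds; that gap between what neurons do and what the scanner sees is what the Berkeley model has to bridge.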

The study, which appears in Current Biology this week, marks the first time that anyone has used brain imaging to determine what moving images a person is seeing. It could help researchers model the human visual system on a computer, and it raises the tantalizing prospect of one day being able to use the model to reconstruct other types of dynamic imagery, such as dreams and memories.

The researchers involved in the study watched hours of movie previews while lying in an fMRI machine. Next they painstakingly deconstructed the data so that they had a specific activation pattern for each second of footage. They ran that data through several different filters to infer what was happening at the neuronal level. “Once you do this, you have a complete model that links the plumbing of the blood flow that you do see with fMRI to the neuronal activity that you don’t see,” says Jack Gallant, who coauthored the study with colleague Shinji Nishimoto.
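
In outline, what Gallant describes is an encoding model: extract features from each second of footage, then fit per-voxel weights that map those features to the recorded response. The sketch below is a hedged stand-in, not the authors' code; the published study used banks of spatiotemporal motion-energy filters, whereas here random projections with a simple nonlinearity merely stand in to show the shape of the computation.

```python
# Hedged sketch of the general recipe described above (not the authors' code):
# extract features from each second of movie, then fit a per-voxel linear
# encoding model mapping features to the measured fMRI response.
import numpy as np

rng = np.random.default_rng(0)
n_seconds, n_pixels, n_features, n_voxels = 500, 1024, 64, 200

movie = rng.random((n_seconds, n_pixels))           # stand-in for video frames
filters = rng.standard_normal((n_pixels, n_features))
features = np.maximum(movie @ filters, 0)           # crude nonlinear filter bank

# Synthetic "measured" responses, so the example runs end to end.
true_weights = rng.standard_normal((n_features, n_voxels))
bold = features @ true_weights + 0.5 * rng.standard_normal((n_seconds, n_voxels))

# Ridge regression: one weight vector per voxel, fit in a single solve.
lam = 10.0
gram = features.T @ features + lam * np.eye(n_features)
weights = np.linalg.solve(gram, features.T @ bold)  # (n_features, n_voxels)

def predict_bold(new_movie):
    """Predict each voxel's response to unseen footage."""
    return np.maximum(new_movie @ filters, 0) @ weights
```

Once fitted, the model runs in the forward direction: feed it any new footage and it predicts the brain activity that footage should evoke, which is what makes the objective test described next possible.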

Next, the researchers compiled a library of 18 million seconds of YouTube video, drawn from randomly chosen clips, to test their model objectively. Previous studies had shown that fMRI can be used to determine which static images a subject is looking at, but the new computer model offered the possibility of reconstructing images that had direction of movement as well as shape. “No one has tried to model dynamic vision with this level of detail before,” says Jim Haxby, a neuroimaging expert at Dartmouth College who was not involved in the study.

The researchers used the model to predict the fMRI activity each clip in the YouTube library would evoke, then compared those predictions with the activity actually recorded while subjects watched a new set of movie trailers. The predicted and measured responses were close to identical. “Usually you only get that kind of accuracy in physics, not neuroscience,” says Benjamin Singer, an fMRI researcher at Princeton University who was not involved with the study. “It’s a tour de force that brings together decades of work.”
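
The matching step can be sketched in the same hypothetical terms: predict the response every library clip would evoke, score each prediction against the measured scan, and average the best-matching clips into a reconstruction. This is a simplification of the paper's averaging procedure, and `predict_bold` is the hypothetical model from the previous sketch, not a published function.

```python
# Hedged sketch of the matching step (an assumption-laden simplification, not
# the published code): rank library clips by how well their predicted
# responses correlate with the measured scan, then average the top matches.
import numpy as np

def reconstruct(measured, library_clips, top_k=100):
    """measured: (n_voxels,) fMRI response to one second of unseen footage.
    library_clips: (n_clips, n_pixels) one-second clips from the library."""
    predicted = predict_bold(library_clips)            # (n_clips, n_voxels)
    # Pearson correlation between each prediction and the measured response.
    p = predicted - predicted.mean(axis=1, keepdims=True)
    m = measured - measured.mean()
    scores = (p @ m) / (np.linalg.norm(p, axis=1) * np.linalg.norm(m) + 1e-12)
    best = np.argsort(scores)[-top_k:]                 # best-matching clips
    return library_clips[best].mean(axis=0)            # blurry averaged image
```

Averaging the top matches rather than taking the single best one is what gives the published reconstructions their characteristic soft, ghostly look: the result is a consensus of many clips that evoke similar brain activity.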

There are two main caveats to the study. The researchers used fMRI data from only one area of the visual system—the V1 area, also known as the primary visual cortex. And the models were customized to each subject. Trying to design a model that would work for everyone would have been too difficult, says Gallant, although he suspects a more generalized model could be developed in the future.

The ultimate goal of this research is to create a computational version of the human brain that “sees” the world as we do. The study also demonstrates an unexpected use for an existing technology. “Everyone always thought it was impossible to recover dynamic brain activity with fMRI,” says Gallant.
