
How Brain Scientists Outsmart Their Lab Mice

To watch the brain work during navigation, scientists build computer-generated worlds for mice.
October 2, 2015

Scientists can now observe the brains of lab animals in microscopic detail as the animals perform a task. A technique called two-photon imaging, in particular, allows neuroscientists to watch thousands of neurons working in concert to encode information.

A mouse is ready to enter a virtual-reality system where its brain can be imaged while it thinks it’s running through a maze.

The trouble is, two-photon imaging requires the animal’s head to stay fixed in place. That would seem to preclude watching the brain as the animal does anything of much interest.

One creative solution is virtual reality, a computer-generated environment that responds to the viewer's movements. A few years ago, neuroscientists started designing tiny virtual-reality systems to fool mice into thinking they were navigating a maze when they were really running on top of a large ball, their heads fixed in position.

Until recently, however, mice needed weeks of training before they would run on the ball. Nicholas Sofroniew, working with others at the HHMI Janelia Campus in Virginia, created a tactile virtual maze the mice seem to understand right away: they navigate through virtual corridors without training. More recently, he has been working with Jeremy Freeman to expand the complexity of the system.

It’s designed to exploit the way mice navigate in nature, Freeman says. Instead of relying primarily on their eyes, mice rely heavily on their whiskers to feel their way through the world.

In the whisker-oriented virtual reality, the walls move to give the mouse the illusion that it is running down winding corridors, he says. But the whole time, the rodent’s head is stationary.

A high-resolution image shows neurons in action as the mouse navigates the virtual-reality system.

Karel Svoboda, a senior researcher on the project, says they’ve already learned that different neurons fire depending on the distance between the mouse’s head and the wall. The brain seems to be translating input from the whiskers into a form the mouse can use.

The imaging technique, which Svoboda helped develop, relies on fluorescent proteins from jellyfish. The researchers genetically alter the mice so their cells make this fluorescent protein in a form that's activated by calcium ions. Calcium floods into a neuron when it fires, so the tagged neurons light up in concert with brain activity. To see and record what's going on, the researchers replace a chunk of the animals' skulls with a little window.
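The idea behind calcium imaging can be illustrated with a toy simulation (not from the article, and with made-up parameter values): an indicator's brightness jumps when a neuron fires and then decays slowly, so the recorded fluorescence is a smoothed trace of the underlying spikes.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                           # seconds per sample
t = np.arange(0, 5, dt)             # a 5-second recording
spikes = rng.random(t.size) < 0.02  # sparse, random firing

tau = 0.5  # assumed indicator decay time constant, in seconds
fluorescence = np.zeros(t.size)
for i in range(1, t.size):
    # each step: exponential decay of the previous brightness,
    # plus a unit jump whenever the neuron fires
    fluorescence[i] = fluorescence[i - 1] * np.exp(-dt / tau) + spikes[i]

# brightness peaks where the neuron fired and fades in between,
# which is why the tagged neurons appear to "light up" with activity
print(f"peak fluorescence: {fluorescence.max():.2f}")
```

The slow decay is what makes the signal visible under a microscope, but it also blurs rapid spike trains together, which is one reason researchers pair the imaging with careful analysis.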

Scientists have long been able to “listen” to single neurons using electrodes, says Svoboda, but that’s like being able to hear only one instrument during a symphony. Now, he says, they can watch the way information flows through the brain while the mouse is learning to cope with a new, albeit virtual, environment. 

Even though the mouse’s head doesn’t move, it’s engaged in what Svoboda calls active sensation. We do it when we move our eyes around to explore our surroundings. Mice do that as well, and they also move their whiskers around to explore by feel. The mouse brain seems to use sets of neurons to represent distances, he says.

Ultimately, the researchers hope to understand how the brain computes information. That could help uncover what happens in disorders such as autism. “We want to understand how brains do everything involved in sensing, learning, and decision-making,” says Freeman.

What they’d really like is to understand the mechanism of learning and to get at the nature of intelligence. That’s a hard problem, he says, “but trying to understand the brain while exploring immersive environments is one of our best shots.”
