
How Brain Scientists Outsmart Their Lab Mice

To watch the brain work during navigation, scientists build computer-generated worlds for mice.
October 2, 2015

Scientists can now observe the brains of lab animals in microscopic detail as the animals carry out a task. A technique called two-photon imaging, in particular, allows neuroscientists to watch thousands of neurons working in concert to encode information.

A mouse is ready to enter a virtual-reality system where its brain can be imaged while it thinks it’s running through a maze.

The trouble is, two-photon imaging requires the animal’s head to stay fixed in place. That would seem to preclude watching the brain as the animal does anything of much interest.

One creative solution is virtual reality, the kind of computer-generated environment people usually experience through a headset. A few years ago, neuroscientists started designing tiny virtual-reality systems to fool mice into thinking they were navigating a maze when they were really running on top of a large ball, their heads fixed in position.

Until now, however, mice didn’t run on the ball until they’d had weeks of training. Nicholas Sofroniew, working with others at the HHMI Janelia Campus in Virginia, created a tactile virtual maze the mice seem to understand right away: they navigate through virtual corridors without training. More recently, he has been working with Jeremy Freeman to expand the complexity of the system. 

It’s designed to exploit the way mice navigate in nature, Freeman says. Instead of relying primarily on their eyes, mice rely heavily on their whiskers to feel their way through the world.

In the whisker-oriented virtual reality, the walls move to give the mouse the illusion that it is running down winding corridors, he says. But the whole time, the rodent’s head is stationary.
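
The article doesn’t detail the control software, but the closed-loop idea can be sketched: the rig reads out the mouse’s running on the ball, converts it into a position along a virtual corridor, and moves the motorized walls so the whisker-to-wall distance matches what the animal would feel at that point. Here is a minimal sketch in Python, assuming hypothetical read_ball_motion and set_wall_positions interfaces and an invented corridor shape:

```python
import math

# Hypothetical single step of a closed-loop tactile VR controller.
# read_ball_motion() and set_wall_positions() stand in for whatever sensor
# and motor interfaces the actual rig uses; the corridor geometry is invented.

def corridor_half_width(position_mm):
    """Half-width (mm) of the virtual corridor at a point along its length.
    Varying the width with position mimics a winding corridor."""
    return 15.0 + 5.0 * math.sin(position_mm / 100.0)

def update_walls(position_mm, read_ball_motion, set_wall_positions):
    """Advance the virtual position by the mouse's running and move the walls
    so the whisker-to-wall distance matches that point in the corridor."""
    forward_mm, lateral_mm = read_ball_motion()  # ball rotation since last call
    position_mm += forward_mm
    half_width = corridor_half_width(position_mm)
    # Shift both walls if the mouse has drifted toward one side of the corridor.
    set_wall_positions(left=-half_width - lateral_mm,
                       right=half_width - lateral_mm)
    return position_mm

# Toy demo with stubbed-out hardware:
if __name__ == "__main__":
    def fake_ball():                      # pretend the mouse ran 2 mm forward
        return 2.0, 0.1
    def fake_walls(left, right):          # pretend to move the wall motors
        print(f"walls at {left:.1f} / {right:.1f} mm")

    pos = 0.0
    for _ in range(5):
        pos = update_walls(pos, fake_ball, fake_walls)
```

Run a few hundred times a second, an update like this is what keeps the illusion coherent while the head stays perfectly still.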

A high-resolution image shows neurons in action as the mouse navigates the virtual-reality system.

Karel Svoboda, a senior researcher on the project, says they’ve already learned that different neurons fire depending on the distance between the mouse’s head and the wall. The brain seems to be translating input from the whiskers into a form the mouse can use.

The imaging technique, which Svoboda helped develop, relies on fluorescent proteins from jellyfish. The researchers genetically alter the mice so their cells make this fluorescent protein in a form that’s activated when exposed to calcium ions. When a neuron fires, calcium floods into the cell, so the tagged neurons light up in step with brain activity. To see and record what’s going on, the researchers replace a chunk of the animals’ skulls with a little window.
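
The raw readout from this kind of imaging is a fluorescence trace for each neuron, and activity is conventionally summarized as the fractional change in fluorescence over a baseline (ΔF/F). The article doesn’t go into the analysis, but a minimal sketch of that standard step, assuming the traces have already been extracted into a NumPy array, might look like this:

```python
import numpy as np

def delta_f_over_f(traces, baseline_percentile=20):
    """Convert raw fluorescence traces into dF/F.

    traces: array of shape (n_neurons, n_frames), one row of raw
            fluorescence per segmented neuron in the two-photon movie.
    The baseline F0 is estimated per neuron as a low percentile of its
    own trace, a common rough choice when neurons are mostly quiet.
    """
    f0 = np.percentile(traces, baseline_percentile, axis=1, keepdims=True)
    return (traces - f0) / f0

# Example with made-up data: 3 neurons, 1,000 imaging frames.
rng = np.random.default_rng(0)
raw = 100 + 5 * rng.random((3, 1000))   # arbitrary fluorescence units
raw[1, 400:420] += 50                   # a brief calcium transient in neuron 1
dff = delta_f_over_f(raw)
print(dff.shape, round(dff[1, 400:420].mean(), 2))
```

In the actual experiments, traces like these would then be related to what the mouse was doing, for example the distance between its whiskers and the moving walls.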

Scientists have long been able to “listen” to single neurons using electrodes, says Svoboda, but that’s like being able to hear only one instrument during a symphony. Now, he says, they can watch the way information flows through the brain while the mouse is learning to cope with a new, albeit virtual, environment. 

Even though the mouse’s head doesn’t move, it’s engaged in what Svoboda calls active sensation. We do it when we move our eyes around to explore our surroundings. Mice do that as well, and they also move their whiskers around to explore by feel. The mouse brain seems to use sets of neurons to represent distances, he says.

Ultimately, the researchers hope to understand how the brain computes information. That could help uncover what happens in disorders such as autism. “We want to understand how brains do everything involved in sensing, learning, and decision-making,” says Freeman.

What they’d really like is to understand the mechanism of learning and to get at the nature of intelligence. That’s a hard problem, he says, “but trying to understand the brain while exploring immersive environments is one of our best shots.”
