
Now There’s a VR Rig for Lab Animals

August 21, 2017

How do you study the way zebrafish translate visual cues into movement, or whether mice are afraid of heights? For researchers at the Vienna Biocenter in Austria, the answer seemed obvious: build a virtual-reality rig for lab animals. So that’s exactly what they’ve done.

The new setup, called FreemoVR, is an arena whose walls and floor are made of computer displays, with 10 high-speed cameras hanging above to track the movement of any animal placed in the space. Software follows the animal's position in real time and quickly updates the imagery shown on the displays in response.
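To make that feedback loop concrete, here is a minimal Python sketch of the kind of closed-loop update a system like this has to run: estimate where the animal is each frame, then redraw the virtual scene from that vantage point. Everything here is a stand-in assumption rather than FreemoVR's actual software; the tracker is simulated and the "renderer" just reports what it would draw.

```python
import math
import time

# Hypothetical parameters -- illustrative only, not taken from the paper.
ARENA_RADIUS_M = 0.5      # radius of the circular arena
UPDATE_RATE_HZ = 60       # how often the displays are refreshed

def simulated_tracker(t):
    """Stand-in for the multi-camera tracker: pretend the animal walks in a small circle."""
    return 0.2 * math.cos(t), 0.2 * math.sin(t)

def render_from_viewpoint(x, y):
    """Stand-in for the renderer: report where a virtual pillar would appear from that viewpoint."""
    bearing_deg = math.degrees(math.atan2(-y, ARENA_RADIUS_M - x))
    print(f"animal at ({x:+.2f}, {y:+.2f}) m -> pillar drawn at bearing {bearing_deg:+.1f} deg")

def closed_loop(duration_s=1.0):
    """The essential loop: sense the animal's position, then update the imagery it sees."""
    t0 = time.time()
    while time.time() - t0 < duration_s:
        t = time.time() - t0
        x, y = simulated_tracker(t)        # 1) estimate position from the overhead cameras
        render_from_viewpoint(x, y)        # 2) redraw the walls and floor from its point of view
        time.sleep(1.0 / UPDATE_RATE_HZ)   # 3) repeat fast enough that the illusion holds

if __name__ == "__main__":
    closed_loop()
```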

So far, the illusion seems to be pretty convincing for the critters it's been tested on. Fruit flies shown virtual pillars flew in circles around them as though they were really there. Meanwhile, mice chose to walk only along raised pathways that appeared to sit closer to the floor, an illusion created by rendering two different sizes of checkerboard on the ground to manipulate perspective, just as they would avoid heights in the physical world.
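The checkerboard trick relies on basic perspective geometry: if the mouse assumes every check is the same physical size, then checks drawn at half the angular size imply a floor roughly twice as far below. The numbers in this small sketch are invented for illustration, not taken from the experiment.

```python
import math

CHECK_SIZE_M = 0.02   # assumed physical size the animal attributes to one checkerboard square

def apparent_floor_distance(angular_size_deg):
    """Small-angle estimate: a check of known size s subtending angle theta looks about s/theta away."""
    return CHECK_SIZE_M / math.radians(angular_size_deg)

# Checks drawn at half the angular size appear roughly twice as far away,
# so the pathway above them seems twice as high off the ground.
near = apparent_floor_distance(2.0)   # larger checks shown on the floor display
far = apparent_floor_distance(1.0)    # smaller checks shown on the floor display
print(f"apparent drop: {near:.2f} m vs {far:.2f} m")
```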

Details of the system, as well as results from the experiments, are published in Nature Methods. The team reckons that the setup could be used as an easier way to understand how animals respond to visual stimulation. Indeed, IEEE Spectrum reports that the lab is already investigating how differences in the brain function of fruit flies affect their responses to what they see in VR.
