
Building a Library for VR, One Image at a Time

The Internet is slowly becoming available in virtual reality with a new library of photorealistic scans.
October 11, 2016

If you’re searching for a new house or apartment online, you might start noticing a new icon that looks like a pair of glasses in the bottom right corner of photos. Clicking it launches a 360° virtual-reality view that you can explore as long as you’re wearing a Google Cardboard or Samsung Gear VR headset.

It’s the result of a library of more than 250,000 3-D scans that the San Francisco company Matterport just converted to be compatible with VR. In these early days of VR, the content on offer is nowhere near as diverse as what you can browse on a desktop computer. Matterport’s library of photorealistic scans can fill some of those gaps in cases where a VR view improves on a more traditional Internet browsing experience.

Matterport’s initial partners are concentrated in real estate. Websites like realtor.com, apartments.com, and Sotheby’s will now feature the VR headset icon on some photos. But the library extends far beyond 3-D scans of houses. Anyone with Matterport’s room scanner—a souped-up camera more likely to be owned by a company or studio given its $4,500 price tag—can upload scans to the library for $19 each. The scans are converted from the browser-based 3-D models Matterport is known for into content that can be viewed in VR.

Matterport is opening the whole package up to developers, who could choose to take a 3-D space and build a social or gaming experience on top of it. They could also expand on the real estate partners’ current applications by making it possible to virtually place furniture or paint within a 3-D scan of a room that you can tour in VR.

Virtual reality is a heavily visual medium. But right now, most of those visuals still need to be built. VR is so new that it hasn’t yet been seeded with the heaps of pre-made content that have made the modern Internet so interesting.

“It’s very hard to create content for VR from scratch,” Matterport cofounder and chief strategy officer Matt Bell says. “If you were to go build a 3-D model of someone’s apartment just from taking a lot of photos and doing measurements and building it in a 3-D design program, that would take weeks to do.”

Linden Lab, creator of the expansive online social world Second Life, is working on a new social world for VR. It’s called Sansar, and like Second Life it’s built to allow users to quickly pull together places and objects to customize their space. You can interact with other people within the game and program objects to do pretty much whatever you want.

Linden Lab is working on the same challenge of bringing real-life content into VR, according to CEO Ebbe Altberg. Sansar currently features an Egyptian tomb that is a 3-D scan of the real deal. The scan was uploaded, published, and shared with Sansar’s community, and now virtual social interactions can take place inside the tomb.

“How else would other people ever have a chance to visit that tomb and talk to other people inside of it?” Altberg says.

It’s a good case for using 3-D scans in VR to simulate travel: users can see places that are too expensive or dangerous to visit in person. Altberg also sees applications in education, advertising, and skill training.

As VR grows, so will the visual content available to use and remix. But for now, Matterport’s library is a good example of how the infant VR industry can fit into our daily activities on the Internet.
