Step Inside the Future

Virtual reality will let us understand the world in ways photography never could.

For the past eight years, I’ve been working on virtual reality and have seen the tangible impact of being able to feel a story with the entire body. More than photographs, even more than 360° video, virtual reality is much closer to the way we experience the real world: spatial, navigable, viscerally comprehensible. Standing on a virtual street in Syria when a bomb goes off, you understand why so many Syrians have become refugees. Being in a virtual room with two sisters as they try to protect a third sibling from an ex-boyfriend’s fatal attack, you feel the true horror of domestic violence and guns. Racing a virtual car down the F1 Singapore track, you confront the challenges and fears faced by real drivers.

The power and reach of this medium will only grow (see “Oculus Rift Is Too Cool to Ignore”). The way it uses physical space as much as visuals will find dramatic application in everything from important real-world stories to gaming to interactive narratives.

The technology behind 3-D capture for creating these experiences is also making rapid advances. Companies like 8i are combining the beauty and realism of 360° video with the immersive walk-around capacity of virtual reality. Meanwhile, chip makers Intel and Qualcomm are offering ways to use your mobile phone to scan environments and people using depth sensing. (I was allowed into the R&D space at Qualcomm two years ago and watched an early prototype scan a purple teddy bear and quickly render it into a 3-D model with gorgeous texture.) Both Qualcomm and Intel are supporting Google’s Project Tango, which will make Android phones capable of 3-D mapping. The impending Apple PrimeSense camera will offer similar capabilities. Moreover, Google has announced that it will not only give Google Maps three dimensions but also compile scans of building interiors.

The ability to literally scan a scene with your phone’s camera and have the images automatically stitch themselves together in three dimensions, or to quickly scan a person—I was fully scanned this way by Intel in a matter of minutes—will change the way we interact with our environments and our social networks. In the future, witnesses at a major event will be able to document it with their mobile phones in a way that will allow others to step inside the scene—giving people an instantaneous understanding of the event that no video or photograph could provide.

We’re just now coming to grips with all the communication possibilities of these spatial experiences. They’re going to enhance every aspect of our lives and give us access to a whole new way of understanding the world.

Nonny de la Peña is the CEO of Emblematic Group and a pioneer in the use of virtual reality for immersive journalism.

Illustration by Rose Wong