For the past eight years, I’ve been working on virtual reality and have seen the tangible impact of being able to feel a story with the entire body. More than photographs, even more than 360° video, virtual reality is much closer to the way we experience the real world: spatial, navigable, viscerally comprehensible. Standing on a virtual street in Syria when a bomb goes off, you understand why so many Syrians have become refugees. Being in a virtual room with two sisters as they try to protect a third sibling from an ex-boyfriend’s fatal attack, you feel the true horror of domestic violence and guns. Racing a virtual car down the F1 Singapore track, you confront the challenges and fears faced by real drivers.
The power and reach of this medium will only grow (see “Oculus Rift Is Too Cool to Ignore”). The way it uses physical space as much as visuals will find dramatic application in everything from important real-world stories to gaming to interactive narratives.
The technology behind 3-D capture for creating these experiences is also making rapid advances. Companies like 8i are combining the beauty and realism of 360° video with the immersive walk-around capacity of virtual reality. Meanwhile, chip makers Intel and Qualcomm are offering ways to use your mobile phone to scan environments and people using depth sensing. (I was allowed into the R&D space at Qualcomm two years ago and watched an early prototype scan a purple teddy bear and quickly render it into a 3-D model with gorgeous texture.) Both Qualcomm and Intel are supporting Google’s Project Tango, which will make Android phones capable of 3-D mapping. The impending Apple PrimeSense camera will offer similar capabilities. Moreover, Google has announced that it will not only give Google Maps three dimensions but also compile scans of building interiors.
The ability to literally scan a scene with your phone’s camera and have the images automatically stitch themselves together in three dimensions, or to quickly scan a person—I was fully scanned this way by Intel in a matter of minutes—will change the way we interact with our environments and our social networks. In the future, witnesses at a major event will be able to document it with their mobile phones in a way that will allow others to step inside the scene—giving people an instantaneous understanding of the event that no video or photograph could provide.
We’re just now coming to grips with all the communication possibilities of these spatial experiences. They’re going to enhance every aspect of our lives and give us access to a whole new way of understanding the world.
Nonny de la Peña is the CEO of Emblematic Group and a pioneer in the use of virtual reality for immersive journalism.