Step Inside the Future
For the past eight years, I’ve been working on virtual reality and have seen the tangible impact of being able to feel a story with the entire body. More than photographs, even more than 360° video, virtual reality is much closer to the way we experience the real world: spatial, navigable, viscerally comprehensible. Standing on a virtual street in Syria when a bomb goes off, you understand why so many Syrians have become refugees. Being in a virtual room with two sisters as they try to protect a third sibling from an ex-boyfriend’s fatal attack, you feel the true horror of domestic violence and guns. Racing a virtual car down the F1 Singapore track, you confront the challenges and fears faced by real drivers.
The power and reach of this medium will only grow (see “Oculus Rift Is Too Cool to Ignore”). The way it uses physical space as much as visuals will find dramatic application in everything from important real-world stories to gaming to interactive narratives.
The technology behind 3-D capture for creating these experiences is also making rapid advances. Companies like 8i are combining the beauty and realism of 360° video with the immersive walk-around capacity of virtual reality. Meanwhile, chip makers Intel and Qualcomm are offering ways to use your mobile phone to scan environments and people using depth sensing. (I was allowed into the R&D space at Qualcomm two years ago and watched an early prototype scan a purple teddy bear and quickly render it into a 3-D model with gorgeous texture.) Both Qualcomm and Intel are supporting Google’s Project Tango, which will make Android phones capable of 3-D mapping. The impending Apple PrimeSense camera will offer similar capabilities. Moreover, Google has announced that it will not only give Google Maps three dimensions but also compile scans of building interiors.
The ability to literally scan a scene with your phone’s camera and have the images automatically stitch themselves together in three dimensions, or to quickly scan a person—I was fully scanned this way by Intel in a matter of minutes—will change the way we interact with our environments and our social networks. In the future, witnesses at a major event will be able to document it with their mobile phones in a way that will allow others to step inside the scene—giving people an instantaneous understanding of the event that no video or photograph could provide.
We’re just now coming to grips with all the communication possibilities of these spatial experiences. They’re going to enhance every aspect of our lives and give us access to a whole new way of understanding the world.
Nonny de la Peña is the CEO of Emblematic Group and a pioneer in the use of virtual reality for immersive journalism.