For the past eight years, I’ve been working on virtual reality and have seen the tangible impact of being able to feel a story with the entire body. More than photographs, even more than 360° video, virtual reality is much closer to the way we experience the real world: spatial, navigable, viscerally comprehensible. Standing on a virtual street in Syria when a bomb goes off, you understand why so many Syrians have become refugees. Being in a virtual room with two sisters as they try to protect a third sibling from an ex-boyfriend’s fatal attack, you feel the true horror of domestic violence and guns. Racing a virtual car down the F1 Singapore track, you confront the challenges and fears faced by real drivers.
The power and reach of this medium will only grow (see “Oculus Rift Is Too Cool to Ignore”). The way it uses physical space as much as visuals will find dramatic application in everything from important real-world stories to gaming to interactive narratives.
The technology behind 3-D capture for creating these experiences is also making rapid advances. Companies like 8i are combining the beauty and realism of 360° video with the immersive walk-around capacity of virtual reality. Meanwhile, chip makers Intel and Qualcomm are offering ways to use your mobile phone to scan environments and people using depth sensing. (I was allowed into the R&D space at Qualcomm two years ago and watched an early prototype scan a purple teddy bear and quickly render it into a 3-D model with gorgeous texture.) Both Qualcomm and Intel are supporting Google’s Project Tango, which will make Android phones capable of 3-D mapping. The impending Apple PrimeSense camera will offer similar capabilities. Moreover, Google has announced that it will not only give Google Maps three dimensions but also compile scans of building interiors.
The ability to literally scan a scene with your phone’s camera and have the images automatically stitch themselves together in three dimensions, or to quickly scan a person—I was fully scanned this way by Intel in a matter of minutes—will change the way we interact with our environments and our social networks. In the future, witnesses at a major event will be able to document it with their mobile phones in a way that will allow others to step inside the scene—giving people an instantaneous understanding of the event that no video or photograph could provide.
We’re just now coming to grips with all the communication possibilities of these spatial experiences. They’re going to enhance every aspect of our lives and give us access to a whole new way of understanding the world.
Nonny de la Peña is the CEO of Emblematic Group and a pioneer in the use of virtual reality for immersive journalism.