Recent weeks have been good ones for people interested in virtual reality. The Facebook acquisition of Oculus has galvanized the idea that “something wonderful” will happen if we put on these strange headsets and visually enter other worlds. Of course, most people assume this means gaming.
And it’s true that the upcoming Crystal Cove Oculus headset (which tracks the head’s position and rotation) will immerse its users in the most amazing computer gaming experiences they could ever have imagined. But that’s not the big part of the story.
After we’ve had the Oculus strapped to our faces for a few months and the novelty has worn off, we might find ourselves asking some important questions: “Where are the other people?” and “Where can I start working and learning and building in here?”
That is where things are going to get interesting.
The Oculus Rift is only one of several remarkable hardware advances that, in the coming year, will dramatically change our ability to immerse ourselves in a 3-D world. The others include 3-D or 2-D cameras that can capture facial expressions and head movements, and several types of motion controllers that can accurately capture the movements of our arms, legs, and hands.
Companies like Sixense and PrioVR have amazing devices in the works that will follow the motion of the body as accurately, and with as little latency, as the Oculus presents images to the eyes. We won’t just be able to see these worlds—we’ll be able to touch them.
We’ll also be able to communicate with others while we’re inside these worlds: the Internet is now fast enough to allow us to be in a virtual environment with other people who are accessing it from elsewhere, even halfway across the world.
Updating imagery shown to the eyes with a delay of less than 10 milliseconds relative to head movements generates a magical sense of being “present” in a virtual space. My own experiments have shown that a second kind of presence—the feeling of really being face-to-face with another person—requires an end-to-end delay (including hardware, software, and network transmission) of around 100 milliseconds or less between your movement and their perception of that movement.
Below that threshold, the small head and eye movements that we use with each other while talking in the “real” world can work in a virtual one. We can feel empathy and connection, interrupt each other, and smoothly and rapidly exchange thoughts. At less than 100 milliseconds of delay you can reach out and virtually touch or shake hands with another person and find the perception of the resulting collisions and motion to be perfectly believable and immersive.
If virtual reality can replace (or even improve upon) videoconferencing or long-distance travel as a way of getting together with people, it will surely disrupt and restructure many different basic human exchanges that have nothing to do with playing games.
For many of the everyday things we do—talking face-to-face, working together, or designing and building things—the real world will suddenly have real competition.