Standing in a San Francisco office recently, I got coached on some boxing basics in a gym, checked out a model at a photo shoot, and watched a guy rapping near Venice Beach. Yet while what they were doing was interesting, I couldn’t stop staring at their butts.
That’s because these people, all virtual-reality creations of a startup called 8i, looked quite realistic. And I was viewing them while wearing an HTC Vive headset set up in an open space large enough for me to rove around without bumping into anything. Thus, I had enough room to concentrate on the details I found most fascinating—and in this case it was, admittedly, the way the fabric of their clothes looked on their derrieres and legs.
8i’s goal is to capture humans with an array of cameras and show them in a way that’s as true to life as possible in virtual reality. Its CEO and cofounder, Linc Gasking, imagines a lot of uses for this—it could lead to virtual-reality films that let you view live-action characters’ actions from different angles, and concerts where you can get up close and personal with your favorite musician. Or perhaps it would help make remote learning easier: you could walk around your yoga teacher to examine a move before trying it yourself.
These kinds of experiences would be different from the way you typically see live-action content in virtual reality today. As the viewer wearing the headset, you tend to be passive, sitting in the center of a sphere of captured video with which you can’t interact. But 8i is betting that new high-end headsets like the Oculus Rift and HTC Vive, which give users some freedom of movement by including positional tracking, will create a market for ever-more-immersive experiences.
“What we’ve found is that there is really a race toward creating an incredibly realistic experience,” says Gasking.
The company starts crafting each so-called “volumetric” person for virtual reality by filming on a green stage surrounded by cameras (8i has one stage set up in Los Angeles with 40 high-definition cameras, and another in Wellington, New Zealand, with 20). After recording a video, like that of the boxing coach I saw, the company uses its software to stitch the footage together to show just the person—detailed, three-dimensional, and visible from the front, back, or sides. The results can also be placed in different settings, like a gym or a desert, which may be computer-generated, photorealistic, or some combination.
Right now, the results are a bit jarring. In the demos I saw, people’s faces looked fairly realistic but a little bit off (perhaps softer-looking than they should be), which created almost an “uncanny valley” effect. (Gasking says this is a result of the algorithm used to process the video in three dimensions, and he expects it will improve over time.) Yet I was drawn to the detail captured in their clothing. Looking at the tracksuit pants on the boxing coach and the dress on the model was mesmerizing; they looked very true to life (which makes sense, since they were initially recorded as videos).
Late this month, in time for the release of the Oculus Rift, the company plans to roll out a new version of an app that lets people view a range of its realistic people in different virtual experiences. It’s also working with virtual-reality content makers like journalist and filmmaker Nonny de la Peña.
Jeremy Bailenson, director of Stanford’s Virtual Human Interaction Lab and cofounder of virtual-reality sports training company Strivr Labs, agrees with 8i’s assessment that the time is right to be moving toward super-realistic avatars in virtual reality, and he says that using cameras to make this happen means a lot less work for programmers. Once companies improve the techniques, he expects we’ll see many such avatars in VR.
But he cautions that building 3-D avatars by capturing people in video—rather than building them as 3-D models—means they can’t do novel things, like respond to you.
“What makes VR neat in a lot of ways is the interactivity, the ways it can react to you,” he says.