MIT Technology Review


Put Your Game Face On

Indeed, despite the millions of dollars thrown at the problem, digital human faces still have a ways to go. What remains to be done may seem like incremental steps – making eye movements less robotic, capturing changes in blood flow so cheeks flush, getting skin to wrinkle just the right way during a smile – but they add up. “The last 20 percent could take 80 percent of our time to get right – but we’re definitely in that last 20 percent,” says Darin Grant, director of technology at Digital Domain in Venice, CA, which did character animations for this summer’s I, Robot.

In the end, commercial audiences will decide the value of these digital doubles. “The ultimate test of what we do is how it looks on-screen and how it translates to production,” says Grant. His colleague Brad Parker, a visual-effects supervisor and director at Digital Domain, maintains that digital humans will pay increasing dividends for filmmakers – and for the graphics community. “It’s a big deal,” he says. “It combines everything that’s difficult about computer graphics.”

Why it’s such a hard problem – exactly what our eyes detect as “wrong” in a digital human – isn’t yet well understood. But University of Southern California graphics researchers Lewis and Ulrich Neumann are trying to find out. In recent experiments, their group showed glimpses of real and digital faces to volunteers to see if they could tell the difference. The results were striking – and frustrating. “We spent a year working on these faces, but we couldn’t fool people for a quarter of a second,” Lewis says. He predicts that this work will lead to statistical models of how real human faces behave, which in turn will yield software tools that artists can use to make characters move their eyes just so or change expressions in other subtle ways that could be vital to believability.

Such advances should have a dramatic impact. Says Steve Sullivan, director of research and development at Industrial Light and Magic in San Rafael, CA, “We’ll probably look back in 10 years and think today’s digital doubles look horribly primitive.”

And it won’t only be movies that get a facelift. The same graphical simulation tools that filmmakers are starting to master will also help fuel the next big market for digital faces: video games. Today’s games boast dazzling creatures and scenery, but their human characters are not even close to being photorealistic. It’s just not practical to program in every viewing angle and expression that may arise during the course of a multilevel, interactive game.

That’s where George Borshukov comes in. Borshukov, a computer scientist who designed state-of-the-art digital humans for the Matrix films (all those Smiths in Reloaded and Revolutions are his team’s), is now applying face technology to games. A former technology supervisor at ESC Entertainment in Alameda, CA, Borshukov recently moved to video-game powerhouse Electronic Arts in Redwood City, CA. He says that next-generation gaming hardware will come close to demonstrating techniques for photorealistic faces in real time, but that trade-offs, approximations, and data compression will be needed to make it happen.

The problem is that with games, everything has to happen on the fly. Yet it still takes a few hours to render a single frame of today’s best digital faces. That’s workable if you have months to produce the scenes, as in a movie. In a game or interactive film, however, the particular image called for may not exist until the user orders it up with the flick of a joystick. Making this practical will require software that’s thousands of times faster.
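The scale of that gap is easy to sketch. As a rough back-of-envelope check (the two-hour offline render time and 30-frames-per-second interactive target below are illustrative assumptions, not figures from the article):

```python
# Back-of-envelope: how much faster must rendering get to go from
# offline film production to real-time gameplay?
# Assumed figures (illustrative only):
offline_seconds_per_frame = 2 * 60 * 60     # ~2 hours per offline frame
realtime_fps = 30                           # typical interactive frame rate
realtime_seconds_per_frame = 1 / realtime_fps

speedup_needed = offline_seconds_per_frame / realtime_seconds_per_frame
print(f"Required speedup: ~{speedup_needed:,.0f}x")  # ~216,000x
```

With these assumed numbers the raw gap is on the order of a hundred thousand fold, which is why faster hardware alone isn't enough and the trade-offs, approximations, and data compression Borshukov describes have to close most of the distance.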

Five years down the road, experts say, a hybrid between a game and a movie could allow viewers/players to design and direct their own films and even put themselves into the action. You might first “cast” the film by scanning photos of real people – you and your friends, for instance – and running software that would create photoreal 3-D models of those people. Then, in real time, you could direct the film’s action via a handheld controller or keyboard – anything from zooming the camera around the characters to making the lead actor run in a certain direction. Interactive entertainment, Borshukov says, “is where the real future is.”

Facing the Future

Back at Imageworks, a storm of activity swirls around Mark Sagar. Artists are in crunch mode for another digital-actor project, this fall’s The Polar Express, based on the popular children’s book. But Sagar, who is not directly involved with that effort, is entranced by what’s farther down the road – a more elegant approach to digital faces based on underlying scientific principles. “I see today’s work as an interim stage where we still have to capture a lot of data,” he says. “Eventually everything will use mathematical models of how things move and how they reflect light.”

Sagar also sees much broader applications of digital humans in medical graphics, cooperative training simulations for rescue workers, and human-computer interfaces that could help users communicate more effectively with both machines and other people. Outside the entertainment industry, large organizations like Microsoft and Honda are pursuing research on advanced graphics and human modeling, including software that could allow you to create realistic virtual characters and digital avatars based on just a photo. Related algorithms could also help computers recognize faces and interpret expressions, either for security purposes or to predict a user’s needs.

“We’re at an interesting age when we’re starting to be able to simulate humans down to the last detail,” says Sagar. There’s a certain irony in his statement: once digital humans are done right, they’ll be indistinguishable from the real thing, and audiences won’t even realize that artists and scientists like Sagar have changed the face of entertainment – and society.

Gregory T. Huang is a TR associate editor.
