
Virtual Visage

Mark Sagar has always been torn between art and science. After college, he spent three years traveling the world, sketching portraits for a living. But the tug of technology made him return to graduate school in his native New Zealand to study engineering. “I never thought I’d spend years of my life studying the human face,” he admits, sitting in his office at Imageworks, surrounded by books and papers on visual perception.

Hearing Sagar describe the human face as “a multichannel signaling device” suggests that the science and engineering side of him has won out. Understanding the science behind faces, he says, enables him to make a digital character’s message come through more effectively on the screen. Expressions like downcast eyes, a furrowed brow, or a curled lip signify a person’s emotional state and give clues to his or her intent.

Sagar’s path to Hollywood opened almost by accident. In the mid-1990s, as a graduate student at the University of Auckland and as a postdoctoral fellow at MIT, he developed computer simulations of the human eye and face that could help doctors-in-training learn surgical techniques. His simulations looked so real that a team of dot-com entrepreneurs convinced him to cofound a graphics startup called LifeFX in Newton, MA. Its mission: commercialize software that everyone from filmmakers to Web businesses and e-mail providers could use to produce photorealistic likenesses of people.

Sagar soon became a leading authority on digital faces for entertainment. In 1999, he came to Los Angeles to work on computer-generated face animations for films, including one of actor Jim Carrey. Paul Debevec, a graphics researcher who made his name creating virtual environments and advancing digital lighting techniques, saw Sagar’s films at a conference and was intrigued: he had never seen faux faces that looked so convincing up close. “That was the moment that made me cross the threshold of truly believing that a photoreal computer-graphics face would happen in the next five years,” says Debevec, who is now at the Institute for Creative Technologies at the University of Southern California (see “Hollywood’s Master of Light,” TR March 2004).

The two scientists struck up a collaboration, using Debevec’s lighting techniques to render Sagar’s digital faces – a combination that quickly catapulted them to the forefront of the field. It turns out that if you’re trying to simulate a face, getting the lighting right is a big deal. Unlike previous computer simulations that looked odd in different contexts and had to be adjusted by trial and error, Sagar and Debevec’s faces could be tailored to match the lighting in any scene. That’s because they were built using a rich database of real faces photographed from different angles and illuminated by many different combinations of light. When LifeFX folded in 2002, Imageworks snatched up Sagar specifically for his expertise in faces.
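The core of the lighting trick is that light adds linearly: photographs of a face lit one strobe at a time can be blended, with weights taken from the target environment, into a composite that looks as if it were shot under that environment. The sketch below illustrates this image-based relighting principle; the function names and array shapes are illustrative assumptions, not the actual LifeFX or Imageworks code.

```python
import numpy as np

def relight(basis_images, light_weights):
    """Blend one-light-at-a-time face photos into a composite for new lighting.

    basis_images : (n_lights, H, W, 3) array, one photo per strobe direction
    light_weights: (n_lights,) array, intensity of each direction in the
                   target environment (e.g. measured on the movie set)
    """
    # Light is additive, so a weighted sum of the single-light photos
    # reproduces how the face would look under the combined illumination.
    weights = np.asarray(light_weights, dtype=np.float64)
    composite = np.tensordot(weights, basis_images.astype(np.float64), axes=1)
    return np.clip(composite, 0.0, 255.0).astype(np.uint8)

# Toy usage: 256 strobe directions, small stand-in images.
basis = np.random.randint(0, 256, size=(256, 64, 64, 3), dtype=np.uint8)
set_lighting = np.random.rand(256) / 256   # hypothetical per-direction intensities
face_on_set = relight(basis, set_lighting)
```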

He immediately began working on the first feature-film test of these techniques: Spider-Man 2. The action scenes in the film required detailed and expressive simulations of the faces of well-known actors – a particularly tough problem, says Sagar. Not only are audiences quick to reject ersatz human faces in general, but they are particularly sensitive to faces they recognize; any discrepancy between digital and real could be perceived as fake. To make the simulations work, the researchers needed lots of reference footage of the real actors under different lighting conditions.

So actors Tobey Maguire and Alfred Molina each spent a day in Debevec’s lab. Supervised by research programmer Tim Hawkins, they sat in a special apparatus called a “light stage” while four still cameras captured hundreds of images of their heads and faces making a variety of expressions and illuminated by strobes from every possible angle. The actors also had laser scans and plaster casts made of their heads and faces, so that high-resolution digital 3-D models of their likenesses could be built on computers.
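A capture session like that yields a large library of photographs indexed by actor, expression, strobe direction, and camera. The snippet below is a hypothetical way such a library might be keyed; the field names and file paths are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CaptureKey:
    actor: str        # e.g. "maguire" or "molina"
    expression: str   # e.g. "neutral", "smile", "grimace"
    light_index: int  # which strobe direction fired
    camera: int       # which of the four still cameras (0-3)

# Maps each capture to an image file; pulling every light_index for a fixed
# actor, expression, and camera yields the one-light-at-a-time basis used
# in the relighting sketch above. Paths here are made up for illustration.
capture_db: dict[CaptureKey, str] = {
    CaptureKey("maguire", "neutral", 0, 0): "captures/maguire/neutral/light000_cam0.tif",
}
```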

At Imageworks, Sagar and his team wrote user-friendly software so that dozens of artists could use the gigabytes of image data without getting bogged down in technical details. To make the train sequence look right, for example, Sagar’s software combined images from Debevec’s setup into composites that matched the real-world lighting on the movie set, then mapped the composites onto 3-D computer models of the actors. To make the faces move, animators manipulated the models frame by frame, using existing pictures and video of the actors as a rough guide. The software calculated lighting changes based on how the face models deformed – and illuminated the digital skin accordingly. The result: synthetic actors who look like Maguire and Molina (intercut with the flesh-and-blood ones) zoom through the air, around skyscrapers, over trains, and underwater, emoting all the while.
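The last step is worth unpacking: when an animator deforms the face model, the surface normals change, so the shading has to be recomputed before the digital skin is lit. The following sketch uses a plain Lambertian model as an illustrative stand-in for that recalculation; it is not Imageworks’ renderer.

```python
import numpy as np

def face_normals(vertices, triangles):
    """Unit normal per triangle of a (possibly deformed) face mesh.

    vertices : (V, 3) vertex positions after the animator's deformation
    triangles: (T, 3) vertex indices
    """
    a = vertices[triangles[:, 1]] - vertices[triangles[:, 0]]
    b = vertices[triangles[:, 2]] - vertices[triangles[:, 0]]
    n = np.cross(a, b)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def lambert_shade(normals, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Diffuse shading per triangle for one directional light."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(normals @ l, 0.0, None)          # surfaces facing away go dark
    return ndotl[:, None] * np.asarray(light_color)  # (T, 3) RGB per triangle

# Tiny two-triangle patch standing in for part of a face mesh; raising one
# corner (like a bulging cheek) changes the normals, and re-shading gives
# the updated illumination on the digital skin.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.2]])
tris = np.array([[0, 1, 2], [1, 3, 2]])
shaded = lambert_shade(face_normals(verts, tris), light_dir=[0.3, 0.8, 0.5])
```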

Imageworks is a prime example of how effects houses are integrating new research into their production pipelines more quickly than they did just a few years ago. (While audiences might be wowed by what has shown up at the multiplex lately, the fundamental graphics technology in films didn’t change much in the 1990s.) “Before, there was a very long lag. Something would get developed, and then you’d wait ten years for a software company to commercialize it,” says J. P. Lewis, an expert on graphics and animation at the University of Southern California’s Computer Graphics and Immersive Technology lab. “Now, I think companies are much more aware of research, and they tend to jump on it much more quickly.”

A walk through the darkened hallways of Imageworks this spring finds the team scrambling to put the finishing touches on the more than 800 effects shots for Spider-Man 2. It’s a young, hip crowd sporting fashionable glasses and displaying mementos from the film on their desks – photos, action figures, a cast of Tobey Maguire’s face. On the day he ships his last shot for the film, visual-effects supervisor Scott Stokdyk laments that there isn’t more time. The biggest challenge, he says, was blending Molina’s sometimes real, sometimes digital, face with his “Doc Ock” costume and comic-book-style surroundings. “To match reality,” he sighs, “is almost impossible.”
