Computer-generated human faces usually look plastic and unconvincing on the silver screen; one of the biggest problems is getting simulated light to bounce off the skin just right. Now computer scientists Henrik Wann Jensen of the University of California, San Diego, and Pat Hanrahan of Stanford University have written software that renders virtual skin in a more realistic way. A graphic artist defines the shape and color of the face, the lighting conditions, and the translucency of the skin; the software then uses physics to calculate how light is absorbed and scattered beneath the surface of the simulated skin. That gives the skin a softer, more diffuse, and more natural look than previous computer models did. What’s more, the technique requires no more time to render each frame of animation than existing methods, thanks to mathematical shortcuts. Studios and effects companies including Pixar, ILM, and Disney are starting to use the technique, says Jensen.
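The physics behind Jensen and Hanrahan's approach can be illustrated with the dipole diffusion approximation they published: light entering the skin at one point diffuses below the surface and re-emerges at nearby points, and the diffuse reflectance profile R_d(r) gives the fraction of light re-emerging at distance r from the entry point. Below is a minimal sketch of that profile; the formula follows the published dipole model, but the optical parameter values are illustrative stand-ins, not measured skin data:

```python
import math

# Illustrative skin-like optical parameters (per mm); stand-in values,
# not measurements from the published model.
SIGMA_S_PRIME = 2.6   # reduced scattering coefficient
SIGMA_A = 0.01        # absorption coefficient
ETA = 1.3             # relative index of refraction of the skin surface

def dipole_rd(r, sigma_s_prime=SIGMA_S_PRIME, sigma_a=SIGMA_A, eta=ETA):
    """Diffuse reflectance R_d(r): light re-emerging at distance r (mm)
    from its entry point, via the dipole diffusion approximation."""
    sigma_t_prime = sigma_a + sigma_s_prime              # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coeff.
    # Fresnel-based boundary term for the refractive-index mismatch
    f_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    a = (1.0 + f_dr) / (1.0 - f_dr)
    z_r = 1.0 / sigma_t_prime            # depth of the real (positive) source
    z_v = z_r * (1.0 + 4.0 * a / 3.0)    # height of the mirrored virtual source
    d_r = math.sqrt(r * r + z_r * z_r)   # distance from surface point to real source
    d_v = math.sqrt(r * r + z_v * z_v)   # distance to virtual source
    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v**3
    )

# Light bleeds outward: reflectance falls off smoothly with distance from
# the entry point, which is what gives rendered skin its soft, diffuse look.
for r in (0.0, 0.5, 1.0, 2.0):
    print(f"R_d({r:.1f} mm) = {dipole_rd(r):.6f}")
```

Because the profile is a closed-form expression rather than a full simulation of photon paths, it is cheap to evaluate per shading point, which is consistent with the article's note that the technique adds no rendering time over existing methods.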