AI software can dream up an entire digital world from a simple sketch

Creating a lifelike digital scene normally requires skill, creativity, and patience. Now we can just offload the work to an AI algorithm.
December 3, 2018

Creating a virtual environment that looks realistic takes time and skill. The details have to be hand-crafted and then rendered by a graphics chip that handles the 3D shapes, lighting, and textures. The latest blockbuster video game, Red Dead Redemption 2, for example, took a team of around 1,000 developers more than eight years to create, with some of them occasionally working 100-hour weeks. That kind of workload might not be required for much longer: a powerful new AI algorithm can dream up the photorealistic details of a scene on the fly.

Developed by the chipmaker Nvidia, the software won’t just make life easier for developers. It could also be used to auto-generate virtual environments for virtual reality, or to teach self-driving cars and robots about the world.

“We can create new sketches that have never been seen before and render those,” says Bryan Catanzaro, vice president of applied deep learning at Nvidia. “We’re actually teaching the model how to draw based on real video.”

Nvidia’s researchers used a standard machine-learning approach to identify different objects in a video scene: cars, trees, buildings, and so forth. The team then used what’s known as a generative adversarial network, or GAN, to train a computer to fill in realistic 3D imagery. 

The system can then be fed the outline of a scene, showing where different objects are, and it will fill in stunning, slightly shimmering detail. The effect is impressive, even if some of these objects occasionally look a bit warped or twisted. 
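At a high level, the pipeline pairs a semantic-labelling step (marking each region of a frame as car, tree, building, and so on) with a GAN generator that is conditioned on those label maps and fills in the pixels. The sketch below is a hypothetical, much-simplified illustration of that second step in PyTorch; it is not Nvidia’s actual model, and the class name, layer sizes, and NUM_CLASSES value are assumptions made purely for illustration.

```python
# Minimal sketch: a tiny conditional generator that maps a semantic label map
# (one channel per object class) to an RGB frame. In the full system a far
# deeper generator is trained adversarially against a discriminator on real
# video, which is what produces the realistic detail described in the article.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # hypothetical number of object categories in the label maps

class SketchToImageGenerator(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, label_map: torch.Tensor) -> torch.Tensor:
        # label_map: (batch, num_classes, height, width), one-hot segmentation
        return self.net(label_map)

# Usage: hand the generator an "outline" of a scene and it fills in the pixels.
generator = SketchToImageGenerator()
outline = torch.zeros(1, NUM_CLASSES, 256, 256)
outline[:, 3] = 1.0  # pretend the whole frame is labelled with class 3 ("road", say)
frame = generator(outline)  # (1, 3, 256, 256) synthesized image
```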

“Classical computer graphics render by building up the way light interacts with objects,” says Catanzaro. “We wondered what we could do with artificial intelligence to change the rendering process.”

Catanzaro says the approach could lower the barrier to entry for game design. Besides rendering whole scenes, it could be used to add a real person to a video game after being fed a few minutes of video footage of that person in real life. He suggests it could also help render realistic settings for virtual reality, or provide synthetic training data for autonomous vehicles and robots. “You can’t realistically get real training data for every situation that might pop up,” he says. The work was announced today at NeurIPS, a major AI conference in Montreal.

“This is interesting and impressive work,” says Michiel van de Panne, a professor at the University of British Columbia who specializes in machine learning and computer graphics. He notes that previous work with GANs has mostly synthesized simpler elements, such as individual images or character motions.

“The work points the way to a very different way of creating animated imagery,” van de Panne says, “one with a different set of capabilities” that is both less computationally intensive and potentially interactive.

The Nvidia algorithm is just the latest in a dizzying procession of advances involving GANs. Invented in 2014 by Ian Goodfellow, a researcher now at Google, GANs have emerged as a remarkable tool for synthesizing realistic, and often eerily strange, imagery and audio. The trend promises to revolutionize computer graphics and special effects, and to help artists and musicians imagine or develop new ideas. But it could also undermine public trust in video and audio evidence (see “Fake America great again”).

Catanzaro admits it could be misused. “This is a technology that could be used for a lot of things,” he says.
