
AI software can dream up an entire digital world from a simple sketch

Creating a lifelike digital scene normally requires skill, creativity, and patience. Now we can just offload the work to an AI algorithm.
December 3, 2018

Creating a virtual environment that looks realistic takes time and skill. The details have to be hand-crafted, then rendered by a graphics chip that draws 3D shapes with appropriate lighting and textures. The latest blockbuster video game, Red Dead Redemption 2, for example, took a team of around 1,000 developers more than eight years to create, with some occasionally working 100-hour weeks. That kind of workload might not be required for much longer: a powerful new AI algorithm can dream up the photorealistic details of a scene on the fly.

Developed by the chipmaker Nvidia, the software won’t just make life easier for software developers. It could also be used to auto-generate environments for virtual reality, or for teaching self-driving cars and robots about the world.

“We can create new sketches that have never been seen before and render those,” says Bryan Catanzaro, vice president of applied deep learning at Nvidia. “We’re actually teaching the model how to draw based on real video.”

Nvidia’s researchers used a standard machine-learning approach to identify different objects in a video scene: cars, trees, buildings, and so forth. The team then used what’s known as a generative adversarial network, or GAN, to train a computer to fill in realistic 3D imagery. 
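A GAN pits two networks against each other: a generator that produces candidate output and a discriminator that tries to tell that output apart from real data. As a toy illustration of that tug-of-war only (not Nvidia’s model, which conditions a far larger network on labeled outlines of video frames), here is a one-dimensional GAN in plain Python; every number, parameter, and distribution below is invented for the example:

```python
# Toy 1-D GAN: a linear "generator" learns to mimic samples from a target
# Gaussian, while a logistic "discriminator" learns to tell real from fake.
# Purely illustrative -- not Nvidia's architecture.
import math
import random

random.seed(0)

def real_sample():
    # The "real data" the generator must learn to imitate: N(3.0, 0.5)
    return random.gauss(3.0, 0.5)

def sigmoid(x):
    # Numerically stable logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

# Generator g(z) = a*z + b, initially producing samples near 0 (far from 3).
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c), initially almost undecided.
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = a * z + b

    # Discriminator step: ascend log d(real) + log(1 - d(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: descend the non-saturating loss -log d(fake).
    d_fake = sigmoid(w * x_fake + c)
    grad_fake = -(1 - d_fake) * w   # d(-log d_fake) / d x_fake
    a -= lr * grad_fake * z
    b -= lr * grad_fake

# The generator's output mean should have drifted from 0 toward 3.
gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"generator mean after training: {gen_mean:.2f}")
```

Nvidia’s system applies the same adversarial pressure at vastly larger scale, with the generator conditioned on a labeled sketch of the scene rather than producing values from random noise alone.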

The system can then be fed the outline of a scene, showing where different objects are, and it will fill in stunning, slightly shimmering detail. The effect is impressive, even if some of these objects occasionally look a bit warped or twisted. 

“Classical computer graphics render by building up the way light interacts with objects,” says Catanzaro. “We wondered what we could do with artificial intelligence to change the rendering process.”

Catanzaro says the approach could lower the barrier to game design. Besides rendering whole scenes, it could be used to add a real person to a video game after being fed just a few minutes of video footage of that person in real life. He suggests the approach could also help render realistic settings for virtual reality, or provide synthetic training data for autonomous vehicles and robots. “You can’t realistically get real training data for every situation that might pop up,” he says. The work was announced today at NeurIPS, a major AI conference in Montreal.

“This is interesting and impressive work,” says Michiel van de Panne, a professor at the University of British Columbia who specializes in machine learning and computer graphics. He notes that previous work involving GANs involved synthesizing simpler elements such as individual images or character motions.

“The work points the way to a very different way of creating animated imagery,” van de Panne says. “One with a different set of capabilities” that is less computationally intensive and could be interactive.

The Nvidia algorithm is just the latest in a dizzying procession of advances involving GANs. Invented by a Google researcher only a few years ago, GANs have emerged as a remarkable tool for synthesizing realistic, and often eerily strange, imagery and audio. This trend promises to revolutionize computer graphics and special effects, and to help artists and musicians imagine or develop new ideas. But it could also undermine public trust in video and audio evidence (see “Fake America great again”).

Catanzaro admits it could be misused. “This is a technology that could be used for a lot of things,” he says.
