Artificial intelligence

An algorithm can transform your doodles into photorealistic images

March 19, 2019

In December of last year, at one of the world’s largest AI research conferences, American chipmaker Nvidia showed off an incredible new concept: using generative adversarial networks, or GANs (remember them?), to turn simple sketches into photorealistic scenes. The idea was that the technology could easily render new virtual environments for video games and movies, or for training self-driving cars.

Now the company has turned those same algorithms into a new doodling app called GauGAN, named after the post-Impressionist artist Paul Gauguin. It lets anyone scribble a few lines in an MS Paint–like interface and converts the result in real time into beautiful pictures with mountains, oceans, trees, and stone. It does this by associating each color with a specific object, such as brown for “rock” and light blue for “sky.” Once an artist adds a paint stroke in a specific color, a deep-learning model trained on a million images fills in the texture and lighting detail. The tool also comes with filters for changing the time of day, from sunrise to sunset, or the style of painting, from photorealistic to Impressionist.
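The first step of that pipeline, turning a colored doodle into a map of object labels that a conditional GAN generator can consume, can be sketched roughly as below. Note that the palette here is hypothetical: the article names only brown/“rock” and light blue/“sky,” and GauGAN’s actual color codes and label set are assumptions for illustration.

```python
import numpy as np

# Hypothetical color-to-label palette (only "rock" and "sky" come from the
# article; the RGB values and other labels are illustrative assumptions).
PALETTE = {
    (139, 69, 19): "rock",    # brown
    (135, 206, 235): "sky",   # light blue
    (34, 139, 34): "tree",    # green
    (0, 0, 139): "ocean",     # dark blue
}

def doodle_to_label_map(doodle: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB doodle into an H x W array of label indices.

    Each pixel is assigned the nearest palette color. The resulting semantic
    map is what a conditional generator would take as input when it fills in
    texture and lighting.
    """
    colors = np.array(list(PALETTE.keys()), dtype=float)   # (K, 3)
    flat = doodle.reshape(-1, 3).astype(float)             # (H*W, 3)
    # Euclidean distance from every pixel to every palette color.
    dists = np.linalg.norm(flat[:, None, :] - colors[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(doodle.shape[:2])

# A tiny 1x2 "doodle": one brown pixel, one light-blue pixel.
doodle = np.array([[[139, 69, 19], [135, 206, 235]]], dtype=np.uint8)
labels = doodle_to_label_map(doodle)
names = [list(PALETTE.values())[i] for i in labels.ravel()]  # ["rock", "sky"]
```

In the real system, a trained generator network, rather than this lookup, consumes the label map and synthesizes the photorealistic output; this sketch only covers the paint-stroke-to-semantics step the article describes.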

While GauGAN currently specializes in nature scenes and is not yet publicly available, the demonstration shows how much fine-tuned control we now have when it comes to creating fake images. As much as this is an impressive (even magical) achievement, it also raises important questions about the potential these algorithms have to spread disinformation and undermine truth in the future. Fortunately, the AI research community is already at work trying to tackle this problem.

This story originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.

