MIT Technology Review

An algorithm can transform your doodles into photorealistic images

In December of last year, at one of the world’s largest AI research conferences, American chipmaker Nvidia showed off an impressive new concept: using generative adversarial networks, or GANs (remember them?), to turn simple sketches into photorealistic scenes. The idea was that the technology could easily render new virtual environments for video games and movies, or for training self-driving cars.

Now the company has turned those same algorithms into a new doodling app called GauGAN, named after the post-Impressionist artist Paul Gauguin. It allows anyone to scribble a few lines in an MS Paint–like interface, which it converts in real time into beautiful pictures of mountains, oceans, trees, and stone. It does this by associating each color with a specific object class, such as brown for “rock” and light blue for “sky.” Once an artist adds a paint stroke in a given color, a deep-learning model trained on a million images fills in the texture and lighting detail. The tool also comes with filters for changing the time of day, from sunrise to sunset, or the style of painting, from photorealistic to Impressionist.
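To make the color-to-object idea concrete, here is a minimal sketch of the first step such a system performs: translating an RGB doodle into an integer label map, where each painted color stands for a semantic class. The specific colors, class names, and function are illustrative assumptions, not Nvidia's actual palette or code; in GauGAN, a label map like this would then condition the GAN generator that paints in texture and lighting.

```python
import numpy as np

# Hypothetical color-to-class palette, echoing the article's examples
# (brown -> rock, light blue -> sky). GauGAN's real palette differs.
COLOR_TO_CLASS = {
    (139, 69, 19): 0,    # brown      -> "rock"
    (135, 206, 235): 1,  # light blue -> "sky"
    (34, 139, 34): 2,    # green      -> "tree"
}

def doodle_to_label_map(doodle: np.ndarray) -> np.ndarray:
    """Turn an (H, W, 3) RGB doodle into an (H, W) semantic label map.

    Each pixel's color is matched against COLOR_TO_CLASS; the resulting
    integer map is the kind of input a conditional GAN generator consumes.
    """
    h, w, _ = doodle.shape
    labels = np.zeros((h, w), dtype=np.int64)
    for color, cls in COLOR_TO_CLASS.items():
        # Boolean mask of pixels painted exactly this color.
        mask = np.all(doodle == np.array(color, dtype=doodle.dtype), axis=-1)
        labels[mask] = cls
    return labels

# A tiny 2x2 doodle: sky across the top, rock across the bottom.
doodle = np.array([
    [[135, 206, 235], [135, 206, 235]],
    [[139, 69, 19], [139, 69, 19]],
], dtype=np.uint8)

print(doodle_to_label_map(doodle))
# [[1 1]
#  [0 0]]
```

The generator never sees the brushstroke colors themselves, only this class map, which is why swapping a filter (sunset, Impressionist) can restyle the whole scene without the user redrawing anything.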

While GauGAN currently specializes in nature scenes and is not yet publicly available, the demonstration shows how much fine-tuned control we now have when it comes to creating fake images. As much as this is an impressive (even magical) achievement, it also raises important questions about the potential these algorithms have to spread disinformation and undermine truth in the future. Fortunately, the AI research community is already at work trying to tackle this problem.

This story originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.