Artificial intelligence

Algorithms can turn any scene into a comic

Making your own Picasso isn’t the only thing neural style transfer can do.

Comics take the form of a series of still images that together tell a story. The images are often highly stylized, and their graphic artists are admired for their skill.

But this kind of artistry is hard to learn and difficult to perfect, making it time consuming and expensive to produce. So artists, publishers, and readers would dearly love an automated way to make an image imitate a desired comic style.

It turns out that this kind of algorithm already exists. Back in 2015, a group of researchers in Germany discovered a way to transfer the artistic style of one image to another. Since then, others have steadily improved this approach to make it quicker and more accurate.

From left to right: the original graphic image, the target image, and the resulting stylized image

However, the work has so far focused on transferring the style of fine artists such as Picasso and Van Gogh to other images, or altering ordinary pictures in ways like turning night into day. How well do these algorithms work with the often more stylized images produced by comic artists?

Today we get an answer thanks to the work of Maciej Pęśko and Tomasz Trzciński at Warsaw University of Technology in Poland. These guys have applied various types of image style transfer to comic graphics and compared the results.

First some background. This approach began with the work of Leon Gatys at the University of Tübingen and a few pals, who studied the way deep neural networks recorded and analyzed artistic style. These networks consist of layers that each analyze an image at a different level—details such as shapes, colors, and lines.

The key insight behind Gatys and co’s work is that artistic style is not captured by the individual features a layer detects but by the correlations between the feature maps within each layer. That immediately makes it possible to separate an artist’s style from the art’s content, and even to transfer it from one image to another.
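The core of that idea is the Gram matrix: flatten a layer’s feature maps and take their pairwise inner products, then compare those correlations between the style image and the generated image. A minimal NumPy sketch, using random arrays as stand-ins for real network activations (the shapes here are illustrative, not those of Gatys and co’s actual network):

```python
import numpy as np

def gram_matrix(features):
    """features: (C, H, W) activations from one network layer.
    Returns the (C, C) matrix of channel-to-channel correlations,
    which is the style representation Gatys and co proposed."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (h * w)         # inner products between feature maps

# Stand-in activations for a style image and a generated image
rng = np.random.default_rng(0)
style_feats = rng.standard_normal((64, 32, 32))
gen_feats = rng.standard_normal((64, 32, 32))

# The style loss penalizes mismatched correlations, layer by layer;
# a separate content loss compares the raw activations themselves.
style_loss = np.mean((gram_matrix(style_feats) - gram_matrix(gen_feats)) ** 2)
```

Because the Gram matrix discards where in the image each feature fires and keeps only how features co-occur, minimizing this loss reproduces an artist’s textures and color palette without copying the scene itself.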

And that’s exactly what Gatys and co did, to widespread amazement from the computer vision community. This work has become the foundation of a new subdiscipline of computer vision known as neural style transfer.

One problem with the new approach is that it is computationally intensive. It takes considerable time—several seconds for 512x512 images on modern desktop computers—to analyze a picture, strip away its style, and apply that style to another scene.

So computer scientists began searching for different approaches that could do the task more quickly. And indeed they came up with various algorithms that do a similar job. However, there is a trade-off between speed and quality. 

Enter Pęśko and Trzciński. These guys have tested a wide range of neural style transfer algorithms on the specific task of transferring the graphic styles associated with comics. “This is the first attempt to evaluate and compare the results obtained by several methods in the context of transferring comic style,” they say.

They specifically focus on the fastest techniques that have the potential to work on any graphic image. “We focus mostly on methods whose execution time per image do not exceed 2 seconds,” they say.

In this way, they tested five different algorithms on 600x450-pixel images processed using a 12-gigabyte Titan X graphics processing unit. They selected images that represent various comic styles and transferred these to images chosen randomly from a Google image search.

Finally, they showed the results to 100 people to evaluate how well the algorithms achieved the style transfer.

The results show the state of the art in this area. The algorithm judged best is a technique known as adaptive instance normalization, developed in 2017, with some 30 percent of the votes in its favor. “It confirms our assumptions that this method gives results that are the closest to cartoon or comics in terms of stylistic similarity,” say Pęśko and Trzciński.
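Adaptive instance normalization is appealingly simple: it shifts and rescales each channel of the content image’s features so that their mean and variance match those of the style image’s features, in a single feed-forward pass rather than an iterative optimization. A sketch of the operation in NumPy (real implementations apply this to activations inside an encoder-decoder network, not to raw arrays as here):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: normalize the content
    features per channel, then rescale them to the style features'
    per-channel mean and standard deviation.
    content, style: (C, H, W) activation tensors."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

Because the whole transfer reduces to computing a handful of channel statistics, the method easily meets the researchers’ two-second budget, which helps explain why it was among the techniques they tested.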

However, the results are by no means perfect. All the techniques suffer to some degree from problems such as inappropriate color transfer and blurring. “We believe that there is still some place for improvement,” say the researchers.

That represents an opportunity. The comic book market in the US alone is worth $1 billion a year. And there are many parts of the world that have yet to develop their own cultures around comics, such as India. So there are markets that have the potential to grow.

The ability to create high-quality comic images will make a significant difference to anybody wanting to conquer those markets.

However, there is another problem: the challenge of developing powerful characters and compelling storylines. Neural networks can’t help with that … at least, not yet.

Ref: Neural Comic Style Transfer: Case Study

