Comics take the form of a series of still images that together tell a story. The images are often highly stylized, and the graphic artists are admired for their skill.
But this kind of artistry is hard to learn and difficult to perfect, making comics time-consuming and expensive to produce. So artists, publishers, and readers would dearly love an automated way to make an image imitate a desired comic style.
It turns out that this kind of algorithm already exists. Back in 2015, a group of researchers in Germany discovered a way to transfer the artistic style of one image to another. Since then, others have steadily improved this approach to make it quicker and more accurate.
However, the work has so far focused on transferring the style of fine artists such as Picasso and Van Gogh to other images, or altering ordinary pictures in ways like turning night into day. How well do these algorithms work with the often more stylized images produced by comic artists?
Today we get an answer thanks to the work of Maciej Pęśko and Tomasz Trzciński at Warsaw University of Technology in Poland. These guys have applied various types of image style transfer to comic graphics and compared the results.
First some background. This approach began with the work of Leon Gatys at the University of Tübingen and a few pals, who studied the way deep neural networks record and analyze artistic style. These networks consist of layers that each analyze an image at a different level of abstraction, picking out details such as shapes, colors, and lines.
The key insight behind Gatys and co's work is that artistic style is not captured in the layer activations themselves but in the correlations between the features they extract. That immediately makes it possible to separate an artist's style from the art's content, and even to transfer style from one image to another.
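In practice, these correlations are summarized by a Gram matrix computed over a layer's feature channels. Here is a minimal NumPy sketch of that idea; the function name and the toy activation shapes are illustrative, not taken from Gatys and co's paper:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels of one network layer.

    features: array of shape (channels, height, width), the activations
    a convolutional layer produces for an image.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Each entry [i, j] measures how strongly channels i and j
    # fire together -- this, not the raw activations, encodes "style."
    return f @ f.T / (h * w)

# Toy example: random stand-in activations from a hypothetical layer.
feats = np.random.rand(8, 16, 16)
g = gram_matrix(feats)
print(g.shape)  # (8, 8): one correlation per pair of channels
```

Because the Gram matrix discards where in the image each feature fired, it keeps texture and color statistics while dropping the scene's layout, which is why style and content can be pulled apart.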
And that’s exactly what Gatys and co did, to widespread amazement from the computer vision community. This work has become the foundation of a new subdiscipline of computer vision known as neural style transfer.
One problem with the new approach is that it is computationally intensive. It takes considerable time—several seconds for 512x512 images on modern desktop computers—to analyze a picture, strip away its style, and apply that style to another scene.
So computer scientists began searching for different approaches that could do the task more quickly. And indeed they came up with various algorithms that do a similar job. However, there is a trade-off between speed and quality.
Enter Pęśko and Trzciński. These guys have tested a wide range of neural style transfer algorithms on the specific task of transferring the graphic styles associated with comics. “This is the first attempt to evaluate and compare the results obtained by several methods in the context of transferring comic style,” they say.
They specifically focus on the fastest techniques that have the potential to work on any graphic image. “We focus mostly on methods whose execution time per image do not exceed 2 seconds,” they say.
In this way, they tested five different algorithms on 600x450-pixel images processed using a 12-gigabyte Titan X graphics processing unit. They selected images that represent various comic styles and transferred these to images chosen randomly from a Google image search.
Finally, they showed the results to 100 people to evaluate how well the algorithms achieved the style transfer.
The results show the state of the art in this area. The algorithm judged best is a technique known as adaptive instance normalization, developed in 2017, with some 30 percent of the votes in its favor. “It confirms our assumptions that this method gives results that are the closest to cartoon or comics in terms of stylistic similarity,” say Pęśko and Trzciński.
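The idea behind adaptive instance normalization is simple: for each feature channel, shift and rescale the content image's activations so that their mean and standard deviation match those of the style image. A minimal NumPy sketch of that operation, with illustrative shapes rather than the authors' actual implementation:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization over feature maps.

    content, style: arrays of shape (channels, height, width).
    Each content channel is normalized to zero mean and unit std,
    then rescaled to the matching style channel's statistics.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # eps guards against division by zero on flat channels
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

Because this is a single closed-form statistics swap rather than an iterative optimization, it runs in one forward pass, which is what makes the method fast enough for the under-two-second budget the researchers set.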
However, the results are by no means perfect. All the techniques suffer to some degree from problems such as inappropriate color transfer and blurring. “We believe that there is still some place for improvement,” say the researchers.
That represents an opportunity. The comic book market in the US alone is worth $1 billion a year. And there are many parts of the world that have yet to develop their own cultures around comics, such as India. So there are markets that have the potential to grow.
The ability to create high-quality comic images will make a significant difference to anybody wanting to conquer those markets.
However, there is another problem: the challenge of developing powerful characters and compelling storylines. Neural networks can’t help with that … at least, not yet.
Ref: arxiv.org/abs/1809.01726 : Neural Comic Style Transfer: Case Study