There was a time, not so long ago, when the manipulation of images was a rare and magical thing. Expensive and time-consuming, it was confined largely to the worlds of advertising, marketing and glossy magazine design.
In the 1990s, the Photoshop era brought photo manipulation to a broader audience. And in the last few years, photo manipulation has become a mainstream activity, thanks to the development of filters that automatically make photos look grainy or old or water-coloured, and so on.
In contrast to Photoshop, which costs hundreds of dollars, these filters are built into photo apps that cost pennies. The explosion of altered images on the web is testament to the fact that almost everybody has them.
Despite this progress, there are still some photo-manipulation techniques that have yet to be automated. One of these is the technique of sumi-e, or oriental ink painting.
Many Western painting techniques build pictures from layer upon layer of ink or paint that gradually shape and modify the image. By contrast, sumi-e uses simple brush strokes to capture the essence of an image or a movement as economically as possible.
The challenge in automating this is first to reproduce these simple brush strokes in a natural, repeatable way, and then to apply them so that they produce the required image.
Various groups have tried to produce natural-looking brush strokes. One way is to create a physics-based model of the way the brush and ink make contact with the paper. This can produce natural-looking results, but the computationally expensive calculations have to be repeated for each stroke, making the approach impractical for ordinary applications.
Now Ning Xie and pals at the Tokyo Institute of Technology in Japan say they have solved the problem. These guys have developed a computer model that mimics the contact of a brush and ink with paper as the brush moves in various ways. Left to its own devices, though, this model paints awful-looking lines.
So Ning and co used a technique known as reinforcement learning to improve the result. This is a simple idea in which the model is rewarded for producing smooth lines but not when it produces ragged ones. In this way, it learns.
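The paper's actual method trains a brush agent with a policy-gradient algorithm, but the core idea of "reward smooth lines, penalize ragged ones" can be sketched much more simply. The snippet below is an illustrative toy, not the authors' implementation: it scores a stroke by how sharply its direction changes between segments, then uses random trial-and-error, keeping only changes that raise the reward, to smooth out a ragged line. All function names (`smoothness_reward`, `perturb`, `improve_stroke`) are invented for this sketch.

```python
import math
import random

def smoothness_reward(stroke):
    """Score a stroke (list of (x, y) points): penalize sharp turns
    between consecutive segments, so smoother strokes score higher."""
    penalty = 0.0
    for i in range(1, len(stroke) - 1):
        a1 = math.atan2(stroke[i][1] - stroke[i - 1][1],
                        stroke[i][0] - stroke[i - 1][0])
        a2 = math.atan2(stroke[i + 1][1] - stroke[i][1],
                        stroke[i + 1][0] - stroke[i][0])
        # Absolute turning angle at point i, wrapped to [0, pi]
        penalty += abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
    return -penalty

def perturb(stroke, scale=0.5):
    """Randomly jitter the interior points; endpoints stay fixed."""
    interior = [(x + random.uniform(-scale, scale),
                 y + random.uniform(-scale, scale))
                for x, y in stroke[1:-1]]
    return [stroke[0]] + interior + [stroke[-1]]

def improve_stroke(stroke, iterations=2000):
    """Trial-and-error improvement: keep a perturbation only when
    it earns a higher smoothness reward than the current best."""
    best, best_r = stroke, smoothness_reward(stroke)
    for _ in range(iterations):
        cand = perturb(best)
        r = smoothness_reward(cand)
        if r > best_r:
            best, best_r = cand, r
    return best

random.seed(0)
# A ragged zig-zag stroke between fixed endpoints
ragged = [(i, random.uniform(-2, 2)) for i in range(10)]
smoothed = improve_stroke(ragged)
print(smoothness_reward(ragged), smoothness_reward(smoothed))
```

Running this, the smoothed stroke scores strictly better than the ragged one, since the loop only ever accepts improvements. The real system learns a reusable policy over brush pose and pressure rather than optimizing each stroke from scratch, which is what makes a stroke library possible.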
Ning and co used the technique to build a library of brush strokes that the computer can use at any time. It then applies these by following lines on an existing picture to create an oriental ink version.
Ning and co say they think the results are rather good. It’s hard to disagree. The picture above is one beautiful example and there are a few more below.
How long before they release this as an iPhone/Android app?
Ref: arxiv.org/abs/1206.4634: Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting