MIT Technology Review

 


There was a time, not so long ago, when the manipulation of images was a rare and magical thing. Expensive and time-consuming, it was confined largely to the worlds of advertising, marketing and glossy magazine design.

In the 1990s, the Photoshop era brought photo manipulation to a broader audience. And in the last few years, photo manipulation has become a mainstream activity thanks to the development of filters that automatically make photos look grainy, old or water-coloured, and so on.

In contrast to Photoshop, which costs hundreds of dollars, these filters are built into photo apps that cost pennies. The explosion of altered images on the web is testament to the fact that almost everybody has one of these apps.

Despite this progress, there are still some photo-manipulation techniques that have yet to be automated. One of these is sumi-e, or oriental ink painting.

Many western painting techniques create pictures using many layers of ink or paint that slowly build and modify the image. By contrast, sumi-e uses simple brush strokes to capture the essence of an image or a movement as simply as possible. 

The challenge in automating this is, first, to reproduce these simple brush strokes in a natural, repeatable way. These strokes must then be applied in a way that produces the required image.

Various groups have tried to produce natural-looking brush strokes. One approach is to create a physics-based model of the way the brush and ink make contact with the paper. This can produce natural-looking results, but the computationally expensive calculations have to be repeated for each stroke, making the approach impractical for ordinary applications.

Now Ning Xie and pals at the Tokyo Institute of Technology in Japan have solved the problem. These guys have developed a computer model that mimics the contact of a brush and ink with paper as the brush moves in various ways. But allow this model to paint a line and it produces an awful result.

So Ning and co used a technique known as reinforcement learning to improve the result. This is a simple idea in which the model is rewarded when it produces smooth lines but not when it produces ragged ones. In this way, it learns.
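To see how a reward for smoothness shapes behaviour, here is a toy sketch of the idea, not the paper's actual agent: a tabular Q-learning "brush" that chooses, at each step, whether to turn left, go straight or turn right, and is penalised in proportion to how sharply it turns. All the names and numbers here are our own illustrative choices.

```python
import random

# Toy sketch of reward-for-smoothness (not Ning et al.'s model).
# The agent steers a brush; sharp turns make a ragged stroke, so the
# reward penalises large changes of heading.

ACTIONS = [-1, 0, 1]          # turn left, go straight, turn right
N_HEADINGS = 8                # discretised brush headings

def reward(turn):
    return -abs(turn)         # smoothest move (no turn) scores best

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    # Q-table: expected return for each (heading, action) pair
    q = [[0.0] * len(ACTIONS) for _ in range(N_HEADINGS)]
    for _ in range(episodes):
        h = rng.randrange(N_HEADINGS)          # random initial heading
        for _ in range(20):                    # one stroke = 20 steps
            # epsilon-greedy: mostly exploit, sometimes explore
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[h][i])
            turn = ACTIONS[a]
            h2 = (h + turn) % N_HEADINGS
            # standard Q-learning update
            q[h][a] += alpha * (reward(turn) + gamma * max(q[h2]) - q[h][a])
            h = h2
    return q

q = train()
# After training, the greedy policy keeps the brush heading steady
# (action index 1 = "go straight") from every state.
policy = [max(range(len(ACTIONS)), key=lambda i: q[h][i]) for h in range(N_HEADINGS)]
print(policy)
```

The agent is never told what a smooth line looks like; it simply discovers that the straight action accumulates the least penalty, which is the essence of learning by reward.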

Ning and co used the technique to build a library of brush strokes that the computer can use at any time. It then applies these by following lines on an existing picture to create an oriental ink version.
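A minimal sketch of how a stroke library might be applied to the lines in a picture, assuming (our assumption, not a detail from the paper) that each library stroke and each extracted contour can be summarised by its overall curvature, so the best-matching stroke can be chosen for each line:

```python
import math

# Hypothetical stroke library: each stroke summarised by total curvature.
STROKE_LIBRARY = {
    "straight": 0.0,
    "gentle_arc": 0.3,
    "hook": 1.2,
}

def total_curvature(points):
    """Sum of absolute heading changes along a polyline."""
    total = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # wrap the angle difference into (-pi, pi] before taking |.|
        total += abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
    return total

def pick_stroke(points):
    """Choose the library stroke closest in curvature to the contour."""
    c = total_curvature(points)
    return min(STROKE_LIBRARY, key=lambda name: abs(STROKE_LIBRARY[name] - c))

line = [(0, 0), (1, 0), (2, 0), (3, 0)]          # a straight contour
arc = [(0, 0), (1, 0.2), (2, 0.55), (3, 1.0)]    # a gently curving one
print(pick_stroke(line), pick_stroke(arc))
```

In a full system the contours would come from edge detection on the source photo, and the chosen stroke would be warped to fit the contour rather than merely matched to it.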

Ning and co say they think the results are rather good. It’s hard to disagree. The picture above is one beautiful example and there are a few more below.

How long before they release this as an iPhone/Android app?

Ref: arxiv.org/abs/1206.4634: Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting

