
One of the important questions that art historians pursue is how great artists were influenced by others. They examine the style, content, and genre of the artwork and look for connections and influences between artists.

That’s a complex business. In the days before photography, the only way to copy a piece of art was by hand. Indeed, this kind of work was common. Artists often replicated their own work or work by others in the same studio, and copies abounded.

But the goal of this form of copying wasn’t always to reproduce the original. Often, artists used existing pictures as a starting point for their own work, which would reflect the composition or pose of the original. Indeed, there are many examples of identical human figures in the same pose in entirely different paintings.

So the history of art is a complex web of links between artists and their works, often mapped out in the influences on original works, partial copies, and complete copies.

The human pose plays an important role in this. One job of the art historian is to tease apart this web, to study the human poses used by different artists and glimpse the forces that influenced them.

Today, that gets easier thanks to the work of Tomas Jenicek and Ondrej Chum at the Czech Technical University in Prague. These guys have used a machine vision system to analyze the poses of human subjects in fine art paintings throughout history. They then search for other paintings that contain people in the same poses.

Pose matching in different artworks

The technique reveals previously unknown links between art and artists. It adds a powerful new tool to the armory that art historians can use, with the potential to change the way we understand the history of art.

The method is relatively straightforward and based on the vast databases that art historians have created in recent years. These have digitized the collections from many of the world’s top museums and galleries, and many of them are openly available online. These databases are suddenly amenable to analysis by machine intelligence.

At the same time, other researchers have been developing machine vision algorithms that can determine a human pose from a 2D image. Probably the most advanced is an algorithm called OpenPose, an open-source program for real-time pose detection in 2D images, developed at Carnegie Mellon University in Pittsburgh.

Jenicek and Chum use this software to search for similar poses in a database of manually annotated images, which acts as a kind of gold standard.

They say the automated process easily outperforms other ways of finding similar images. “We experimentally show that explicit human pose matching is superior to standard content-based image retrieval methods on a manually annotated art composition transfer dataset,” they say.
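The core idea behind explicit pose matching is simple: extract each figure's body keypoints, normalize away position and size, and compare the keypoints directly. The sketch below illustrates this under stated assumptions; the toy five-point poses and helper names are illustrative inventions, not the authors' code, and the real pipeline works on full OpenPose skeletons with confidence scores and more robust alignment.

```python
import math

# Toy poses: lists of (x, y) keypoints in image coordinates.
# (Real OpenPose output has 25 body keypoints plus confidences;
# five points are used here purely for illustration.)

def normalize(pose):
    # Translate to the centroid and scale so the mean distance from
    # the centroid is 1, making comparison invariant to where the
    # figure sits in the painting and how large it is drawn.
    cx = sum(x for x, _ in pose) / len(pose)
    cy = sum(y for _, y in pose) / len(pose)
    centered = [(x - cx, y - cy) for x, y in pose]
    scale = sum(math.hypot(x, y) for x, y in centered) / len(pose)
    return [(x / scale, y / scale) for x, y in centered]

def pose_distance(a, b):
    # Mean Euclidean distance between corresponding normalized
    # keypoints: near zero for figures in the same pose.
    na, nb = normalize(a), normalize(b)
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(na, nb)) / len(na)

# A figure copied into another painting at a different position and
# scale still matches its original almost exactly.
pose1 = [(10, 10), (12, 14), (8, 14), (10, 20), (11, 26)]
pose2 = [(x * 3 + 50, y * 3 + 5) for x, y in pose1]  # shifted, enlarged copy
print(pose_distance(pose1, pose2))  # ~0: same pose after normalization
```

A retrieval system built on this idea would rank a gallery's images by `pose_distance` to a query figure; the false positives the authors report arise because geometrically similar keypoints can come from visually unrelated figures.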

They go on to look for similar poses in an online database called the Web Gallery of Art, which contains 37,000 images. The researchers say their algorithm discovered a wide range of links between pictures that would have been impossible to identify by other means (see image).

Of course, the algorithm is not perfect. It finds a number of false positives, in which poses in different images appear similar but after visual inspection turn out to be entirely different.

This is by no means the first attempt to use machine vision to study fine art. Researchers have already used algorithms to find striking new links between artworks based on the general composition of a painting.

Human pose estimation is much harder for machines than studying general composition, so it’s taken longer to bring this technology to bear on the art world. But the prevalence of humans in art is so great that this technique has significant potential.

It offers a powerful new way to analyze artworks through the ages and to study how copies and variations of human poses have influenced artists. Just how art historians use this new tool will be fascinating to watch.

Ref: arxiv.org/abs/1907.03537 : Linking Art through Human Poses
