One of the important questions that art historians pursue is how great artists were influenced by others. They examine the style, content, and genre of the artwork and look for connections and influences between artists.
That’s a complex business. In the days before photography, the only way to copy a piece of art was by hand. Indeed, this kind of work was common. Artists often replicated their own work or work by others in the same studio, and copies abounded.
But the goal of this form of copying wasn’t always to reproduce the original. Often, artists used existing pictures as a starting point for their own work, which would reflect the composition or pose of the original. Indeed, there are many examples of identical human figures in the same pose in entirely different paintings.
So the history of art is a complex web of links between artists and their works, often mapped out in the influences on original works, partial copies, and complete copies.
The human pose plays an important role in this. One job of the art historian is to tease apart this web, to study the human poses used by different artists and glimpse the forces that influenced them.
Today, that gets easier thanks to the work of Tomas Jenicek and Ondrej Chum at the Czech Technical University in Prague. These guys have used a machine vision system to analyze the poses of human subjects in fine art paintings throughout history. They then search for other paintings that contain people in the same poses.
The technique reveals previously unknown links between art and artists. It adds a powerful new tool to the armory that art historians can use, with the potential to change the way we understand the history of art.
The method is relatively straightforward and based on the vast databases that art historians have created in recent years. These have digitized the collections from many of the world’s top museums and galleries, and many of them are openly available online. These databases are suddenly amenable to analysis by machine intelligence.
At the same time, other researchers have been developing machine vision algorithms that can determine a human pose from a 2D image. Probably the most advanced is an algorithm called OpenPose, an open-source program for real-time pose detection in 2D images, developed at Carnegie Mellon University in Pittsburgh.
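OpenPose reports each detected figure as a set of 2D joint coordinates with confidence scores. As a rough sketch of what that output looks like in practice, the following assumes the standard JSON layout OpenPose writes to disk (a "people" list with flat `pose_keypoints_2d` triplets); it is illustrative, not the authors' code.

```python
# Minimal reader for OpenPose-style JSON output (illustrative sketch,
# assuming the standard layout: "people" entries with flat
# [x, y, confidence] triplets under "pose_keypoints_2d").
import json
import numpy as np

def load_poses(json_text):
    """Parse OpenPose-style JSON into a list of (K, 3) keypoint arrays."""
    data = json.loads(json_text)
    return [np.asarray(person["pose_keypoints_2d"], dtype=float).reshape(-1, 3)
            for person in data.get("people", [])]
```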
Jenicek and Chum use this software to search for similar poses in a database of manually annotated images. This acts as a kind of gold standard.
They say the automated process easily outperforms other ways of finding similar images. “We experimentally show that explicit human pose matching is superior to standard content-based image retrieval methods on a manually annotated art composition transfer dataset,” they say.
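The paper's matching step is more sophisticated than this, but the core idea of comparing figures by pose rather than by pixels can be sketched in a few lines: normalize each figure's keypoints for position and scale, then rank database images by mean per-joint distance. The function names and distance measure here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative pose-matching sketch (not the authors' code): compare
# figures by their 2D joint layout after removing translation and scale.
import numpy as np

def normalize_pose(keypoints):
    """Center a (K, 2) keypoint array on its centroid and scale to unit size."""
    pts = np.asarray(keypoints, dtype=float)
    pts = pts - pts.mean(axis=0)           # remove translation
    scale = np.linalg.norm(pts)            # overall pose size
    return pts / scale if scale > 0 else pts

def pose_distance(a, b):
    """Mean per-joint distance between two normalized poses."""
    return float(np.linalg.norm(normalize_pose(a) - normalize_pose(b), axis=1).mean())

def rank_by_pose(query, database):
    """Return database indices sorted from most to least similar pose."""
    dists = [pose_distance(query, pose) for pose in database]
    return sorted(range(len(database)), key=lambda i: dists[i])
```

Note that this sketch deliberately does not normalize rotation: a figure copied upside down or mirrored would register as a different pose, which is one reason simple geometric matching produces false positives and misses.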
They go on to look for similar poses in an online database called the Web Gallery of Art, which contains 37,000 images. The researchers say their algorithm discovered a wide range of links between pictures that would have been impossible to identify by other means (see image).
Of course, the algorithm is not perfect. It finds a number of false positives, in which poses in different images appear similar but after visual inspection turn out to be entirely different.
This is by no means the first attempt to use machine vision to study fine art. Researchers have already used algorithms to find striking new links between artworks based on the general composition of a painting.
Human pose estimation is much harder for machines than studying general composition, so it’s taken longer to bring this technology to bear on the art world. But the prevalence of humans in art is so great that this technique has significant potential.
It offers a powerful new way to analyze art works through the ages and to study how copies and variations of human poses have influenced artists. Just how art historians use this new tool will be fascinating to watch.
Ref: arxiv.org/abs/1907.03537: Linking Art through Human Poses