
U.S. researchers have released a new online program that automatically tags images according to their content. In its first real-world test, the program processed thousands of publicly accessible images on the photo-sharing site Flickr and generated at least one accurate tag for 98 percent of the pictures it analyzed.

The new software, called ALIPR (Automatic Linguistic Indexing of Pictures), uses a combination of statistical techniques to process an image and assign it a batch of 15 words, arranged in order of perceived relevance. These words may refer to a specific object within the picture, such as a “person” or “car,” or to a more general theme, such as “outdoors” or “manmade.”
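The article describes the output as a ranked list of 15 words that mixes specific objects with general themes. As a rough illustration only (the words and relevance scores below are invented, not actual ALIPR output), such a result might be represented like this in Python:

```python
# Hypothetical example of a ranked tag list for one image; the words and
# scores are invented for illustration and are not actual ALIPR output.
tags = [
    ("people", 0.31), ("outdoors", 0.22), ("manmade", 0.14),
    ("building", 0.09), ("car", 0.06), ("sky", 0.05),
    # ... continuing down to the 15th, least relevant word
]
for word, relevance in tags:
    print(f"{word}: {relevance:.2f}")
```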

For humans, deciphering an image is deceptively simple. And yet for computers, which can sort through millions of text documents with blistering speed and accuracy, identifying the content of an image remains a devilishly difficult task.

“Recognizing what an image is about semantically is one of the most difficult problems in AI,” says Jia Li, a mathematician at Pennsylvania State University, in State College, who created the software with colleague James Wang, a member of the College of Information Sciences and Technology. “Objects in the real world are 3-D,” Li explains. “When showing up in an image, they can vary vastly in color, shape, gesture, size, and position, and a computer usually has no prior knowledge about the variations.”

Because a complex understanding of the world remains beyond the ability of computers, more-efficient vision-processing algorithms are needed to help them mimic human vision and intelligence.

ALIPR analyzes an image pixel by pixel and applies a novel statistical method to calculate the probability that a particular word describes its content. This involves examining the distribution of color and texture within the image and comparing these features with a stored database of words and images. Li and Wang trained the program on a commercial database of roughly 50,000 images that had already been tagged.
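The article does not detail ALIPR's statistical model, so the sketch below stands in for it with deliberately simple assumptions: a coarse color histogram plus gradient-based texture statistics as features, and a diagonal-Gaussian score per word learned from a tagged training set. It illustrates the general idea of ranking candidate words by how well their learned feature statistics match a new image; it is not the researchers' actual method.

```python
# Sketch of statistical image annotation in general, NOT the ALIPR algorithm:
# extract color/texture features from an image and score candidate words
# against per-word feature statistics learned from tagged training images.
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 RGB array with values in [0, 255]."""
    # Color: a coarse 4-bin histogram per channel, normalized to sum to 1.
    color = np.concatenate([
        np.histogram(image[..., c], bins=4, range=(0, 255))[0] / image[..., c].size
        for c in range(3)
    ])
    # Texture: mean absolute horizontal and vertical gradients of the gray image.
    gray = image.mean(axis=2)
    texture = np.array([
        np.abs(np.diff(gray, axis=0)).mean(),
        np.abs(np.diff(gray, axis=1)).mean(),
    ])
    return np.concatenate([color, texture])

def train(tagged_images):
    """tagged_images: iterable of (image, list_of_words). Returns per-word stats."""
    feats = {}
    for image, words in tagged_images:
        f = extract_features(image)
        for w in words:
            feats.setdefault(w, []).append(f)
    # Mean and variance of the features seen for each word (diagonal Gaussian model).
    return {w: (np.mean(v, axis=0), np.var(v, axis=0) + 1e-6)
            for w, v in feats.items()}

def annotate(image, model, k=15):
    """Return the k words whose learned feature statistics best match the image."""
    f = extract_features(image)
    scores = {
        # Diagonal-Gaussian log-likelihood, up to an additive constant.
        w: -0.5 * np.sum((f - mu) ** 2 / var + np.log(var))
        for w, (mu, var) in model.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In this sketch, `annotate(image, train(tagged_images))` would return the 15 best-scoring words for a new image, ordered from most to least likely.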

Recently, they tested ALIPR on 5,411 previously unseen Flickr images. For 51 percent of these images, the first word ALIPR generated appeared among the users' own tags, and the program produced at least one accurate word 98 percent of the time. The researchers used only images that Flickr users had made publicly accessible and that were available through Flickr's own application programming interface (API).
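For illustration, the two reported figures correspond to a top-1 match rate and an at-least-one-of-15 match rate against user tags. A sketch of how such numbers could be computed follows; the `predictions` and `user_tags` inputs are hypothetical stand-ins, not the researchers' data.

```python
# predictions: list of 15-word lists (one per image, most relevant first).
# user_tags: list of sets of the tags Flickr users assigned to each image.
def evaluate(predictions, user_tags):
    n = len(predictions)
    # Fraction of images whose top-ranked word appears among the user tags.
    top1 = sum(words[0] in tags for words, tags in zip(predictions, user_tags))
    # Fraction of images with at least one of the 15 words among the user tags.
    any_hit = sum(bool(set(words) & tags) for words, tags in zip(predictions, user_tags))
    return top1 / n, any_hit / n

# Rates of roughly 0.51 and 0.98 would correspond to the reported results.
```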
