Web ads help subsidize free content and services, and have made Google into the behemoth it is today. But the software used to tailor them to a user’s interests can only do this by analyzing the words on a webpage.

Qiang Yang at Hong Kong University of Science and Technology wants to change that. He has developed software able to select contextual ads based on the contents of images or videos on a page. Yang and colleagues from Shanghai Jiao Tong University in China presented their work at the AAAI Conference on Artificial Intelligence in Atlanta last week.

Many fast-growing parts of the Web, such as Facebook and Google's Picasa, are populated with user-generated images. With the right technology, they could become a rich advertising opportunity, says Yang. "Many photos in online photo albums and video scenes do not have texts to describe them," he says. "People browsing through their own or others' online photo albums are a potential audience for adverts." Today, he says, it is impossible to reach those people because there is no surrounding text for ad-matching software to analyze.

To match an advertisement to an image, the group's software first converts the image to a collection of words. The software was trained to do this on roughly 60,000 user-tagged images crawled from Flickr.
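
The article does not spell out the algorithm, but one simple way to picture this image-to-words step is as nearest-neighbor tag propagation. The Python sketch below is a minimal illustration under that assumption, not the authors' actual model; the feature vectors, tags, and parameter names are hypothetical.

```python
# A minimal sketch of tag propagation, NOT the authors' actual model.
# Assumptions (hypothetical): each image has already been reduced to a
# fixed-length feature vector, and training images carry user tags.
import numpy as np

def annotate(query_vec, train_vecs, train_tags, k=5, n_words=3):
    """Summarize a new image with a few words by borrowing tags from
    its k nearest neighbors in feature space (cosine similarity)."""
    t = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    sims = t @ q                             # cosine similarity to each training image
    votes = {}
    for i in np.argsort(sims)[-k:]:          # k most similar training images
        for tag in train_tags[i]:
            votes[tag] = votes.get(tag, 0.0) + sims[i]  # similarity-weighted vote
    return sorted(votes, key=votes.get, reverse=True)[:n_words]

# Toy usage with two-dimensional "features" and made-up tags:
train_vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
train_tags = [["frog", "pet"], ["frog", "green"], ["boat", "beach"]]
print(annotate(np.array([0.95, 0.05]), train_vecs, train_tags, k=2))
# -> ['frog', 'pet', 'green']
```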

Any new image can then be roughly summarized with a few words, and a second algorithm uses those words to select an ad to display. In trials of the technique, ads were matched to more than 300,000 images retrieved with popular search terms through Microsoft's MSN search engine (prior to its rebranding as Bing). The results were good, says Yang. For example, a photo of a tree frog called up ads for pet supplies; one of a boat and a beach called up ads for sailing holidays and boat shoes.
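
The article does not describe the second algorithm either; a simple way to picture it is as keyword matching between the image's inferred words and each candidate ad's keywords. The sketch below scores a made-up ad inventory with plain cosine similarity over word counts; both the scoring method and the inventory are assumptions, not the paper's approach.

```python
# Hypothetical second stage: pick the ad whose keywords best match the
# words inferred for the image. Cosine over word counts is an assumption.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_ad(image_words, ads):
    """ads: mapping of ad name -> keyword list (hypothetical inventory)."""
    img = Counter(image_words)
    return max(ads, key=lambda name: cosine(img, Counter(ads[name])))

ads = {
    "pet supplies":     ["frog", "pet", "terrarium", "reptile"],
    "sailing holidays": ["boat", "beach", "sail", "holiday"],
}
print(select_ad(["tree", "frog", "green"], ads))  # -> pet supplies
print(select_ad(["boat", "beach", "sand"], ads))  # -> sailing holidays
```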

The approach is an example of a machine-learning technique dubbed “transfer learning,” says Yang. “Transfer learning tries to learn in one space (text) and then apply the learned model to a very different feature space (such as images),” he says. “It aims to imitate human learning when we can apply our learned knowledge in, say, playing chess, to a seemingly different domain such as strategic planning in business.”
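
As a loose illustration of that idea, and not the group's actual method, the toy sketch below "learns" word-to-topic associations from labeled text pages, then reuses the same text-trained model on an image that the first stage has already summarized as words. All names and data here are hypothetical.

```python
# Toy transfer-learning illustration: learn in the text space, apply in
# the image space via the words inferred for an image. Purely illustrative.
from collections import Counter

# 1. Learn in the text space: count how often each word co-occurs with
#    a topic label across (hypothetical) labeled text pages.
text_pages = [
    (["frog", "pet", "care", "terrarium"], "pets"),
    (["boat", "sail", "beach", "holiday"], "travel"),
]
word_topic = {}
for words, topic in text_pages:
    for w in words:
        word_topic.setdefault(w, Counter())[topic] += 1

# 2. Apply in the image space: once the first stage has summarized an
#    image as words, the text-trained model scores it directly.
def classify(image_words):
    scores = Counter()
    for w in image_words:
        scores.update(word_topic.get(w, Counter()))
    return scores.most_common(1)[0][0] if scores else None

print(classify(["frog", "green", "pet"]))  # -> pets
```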
