Ads that Match a Web Page’s Images

Using the contents of images or videos to target Web ads could improve click-through.
July 21, 2010

Web ads help subsidize free content and services, and have made Google into the behemoth it is today. But the software used to tailor them to a user’s interests can only do this by analyzing the words on a webpage.

Learning process: New ad-targeting software was trained to recognize the features in images using photos uploaded to Flickr.

Qiang Yang at Hong Kong University of Science and Technology wants to change that. He has developed software able to select contextual ads based on the contents of images or videos on a page. Yang and colleagues from Shanghai Jiao Tong University in China presented their work at the AAAI Conference on Artificial Intelligence in Atlanta last week.

Many fast-growing parts of the Web, such as Facebook or Google’s Picasa, are populated with user-generated images. They could become a rich advertising opportunity with the right technology, says Yang. “Many photos in online photo albums and video scenes do not have texts to describe them,” he says. “People browsing through their own or others’ online photo albums are a potential audience for adverts.” Today, he says, it’s impossible to reach people where there is no surrounding text.

To match an advertisement to an image, the group’s software first converts the image to a collection of words. The software was trained to do this by crawling around 60,000 images on Flickr that have tags added by users.

Any new image can then be roughly summarized with a few words, and a second algorithm uses those words to select an ad to display. In trials of the technique, ads were matched to more than 300,000 images found through Microsoft’s MSN search engine (prior to its rebranding as Bing) using popular search terms. The results were good, says Yang. For example, a photo of a tree frog caused ads to be selected for pet supplies. One of a boat and a beach called up ads for sailing holidays and boat shoes.
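In outline, then, the pipeline has two stages: turn the image into a handful of words, then pick an ad using those words. The sketch below illustrates that outline only, with toy data, a stand-in nearest-neighbour tagger, and a simple keyword-overlap matcher; none of these details come from the team's paper.

```python
# Rough sketch of the two-stage pipeline described above.
# Stage 1 (image -> words) is stood in for by a nearest-neighbour lookup over
# a tiny toy set of "Flickr-like" tagged feature vectors; the real system
# learned this mapping from roughly 60,000 user-tagged photos.
# Stage 2 (words -> ad) scores candidate ads by simple keyword overlap.
# All data, names, and ads here are illustrative assumptions.

import numpy as np

# Toy "training" set: image feature vectors with user-supplied tags.
tagged_images = [
    (np.array([0.9, 0.1, 0.0]), {"frog", "green", "rainforest"}),
    (np.array([0.1, 0.8, 0.2]), {"boat", "beach", "sea"}),
    (np.array([0.0, 0.2, 0.9]), {"city", "skyline", "night"}),
]

# Candidate ads, each described by a few keywords.
ads = {
    "Pet supplies for exotic amphibians": {"frog", "pet", "terrarium"},
    "Sailing holidays in the Mediterranean": {"boat", "sea", "holiday"},
    "Discount hotel rooms downtown": {"city", "hotel", "night"},
}

def image_to_words(features, k=1):
    """Stage 1: summarise an image as words via its nearest tagged neighbours."""
    dists = [np.linalg.norm(features - vec) for vec, _ in tagged_images]
    words = set()
    for idx in np.argsort(dists)[:k]:
        words |= tagged_images[idx][1]
    return words

def select_ad(words):
    """Stage 2: pick the ad whose keywords overlap most with the image's words."""
    return max(ads, key=lambda ad: len(ads[ad] & words))

if __name__ == "__main__":
    new_image = np.array([0.85, 0.15, 0.05])   # looks "frog-like" in this toy space
    words = image_to_words(new_image)
    print("predicted words:", words)
    print("selected ad:", select_ad(words))
```

In the real system the first stage is the hard part: the mapping from image features to words is learned from the tagged Flickr photos rather than looked up from a handful of hand-built examples.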

The approach is an example of a machine-learning technique dubbed “transfer learning,” says Yang. “Transfer learning tries to learn in one space (text) and then apply the learned model to a very different feature space (such as images),” he says. “It aims to imitate human learning when we can apply our learned knowledge in, say, playing chess, to a seemingly different domain such as strategic planning in business.”
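The same idea can be made concrete with a toy example: a classifier is trained entirely on text (tag strings labelled with ad categories) and then reused on images by first mapping each image into that shared tag space. The dataset, category names, and stubbed image stage below are invented for illustration and are not the group's implementation.

```python
# Illustrative transfer-learning sketch: a model is trained purely in the text
# space, where labelled examples are cheap, and then applied to images, where
# they are scarce, via a shared tag vocabulary acting as the bridge.
# The tiny dataset and the stubbed image-to-words step are invented.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Cheap-to-get labelled data in the *text* space (tag strings -> ad category).
tag_strings = ["frog green rainforest", "boat beach sea", "city skyline night"]
ad_category = ["pet supplies", "sailing holidays", "hotels"]

text_model = make_pipeline(CountVectorizer(), MultinomialNB())
text_model.fit(tag_strings, ad_category)

# The bridge: any image-to-words step (such as the sketch above) lets the
# text-trained model be reused on images it was never trained on.
words_for_new_image = "frog rainforest"           # output of the image stage
print(text_model.predict([words_for_new_image]))  # -> ['pet supplies']
```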

Image recognition: The software can match advertisements to images it has never seen before based on what they show.

A panel of volunteers was shown images alongside the ads chosen for them and asked to judge which ads were relevant enough to click on. “That test shows that we can, on average, produce one correct ad per three suggested ads,” says Yang. He believes this is a high enough success rate to suggest the approach could work commercially. When the same users were shown randomly selected ads with images, only one in 50 was deemed relevant enough to be clicked.

Researchers at Microsoft Research Asia previously developed a system that used image analysis to classify photos into a handful of categories in order to refine the text-based selection of advertising. Yang’s goal, he says, is to bring contextual advertising to pages with little or no text. That requires software capable of describing images with a much larger vocabulary, which is what his system is designed to do.

The team is currently working to add thesaurus-like capabilities to its system, so that it can generate multiple words to describe the same feature in an image, thereby increasing the number of relevant ads that can be found. The software can already be applied to individual video frames, and the group is working to adapt it to full video footage.
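Such a thesaurus-like step could be as simple as broadening each predicted word with related terms before the ad-matching stage, so that more candidate ads have a chance to overlap with the image's words. The synonym table below is an invented illustration, not the group's actual method.

```python
# Illustrative sketch of a thesaurus-like expansion step: each word produced
# by the image stage is broadened to a set of related terms before ad
# matching. The synonym table here is invented; a real system might learn
# these relations from data instead.

SYNONYMS = {
    "boat": {"sailboat", "yacht", "sailing"},
    "beach": {"seaside", "coast", "shore"},
    "frog": {"amphibian", "tree frog"},
}

def expand(words):
    """Return the original words plus any related terms from the table."""
    expanded = set(words)
    for w in words:
        expanded |= SYNONYMS.get(w, set())
    return expanded

print(expand({"boat", "beach"}))
# e.g. {'boat', 'sailboat', 'yacht', 'sailing', 'beach', 'seaside', 'coast', 'shore'}
```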

“This approach to contextual advertising is potentially very interesting for advertisers,” says Debra Williamson, a senior analyst with the digital marketing and advertising research firm eMarketer. “On the Web today, advertising is built around the text on a page, even when the media at the center of people’s attention is imagery or video.”

If the technology is reliable enough, applying it to video would likely have more potential than applying it to still images, says Williamson. For a long video, she says, “a short description can’t represent everything in the footage. If you can scan what’s in the video, you could choose adverts to display minute by minute based on what appears.”
