Google’s Play for Radio Advertising

We’re all used to context-sensitive, custom-delivered ads on our search result pages. Now that kind of targeting could come to radio.
January 17, 2006

Google announced today that it plans to buy Dmarc Broadcasting, a company in Newport Beach, CA, that automates the insertion of ads into daily radio programming across the country according to the advertiser’s preferred geographic area, target audience, time of day, and so forth.

Dmarc’s advertising network is, in some ways, the radio equivalent of Google’s AdWords program, which supplements search results on Google and e-mail messages on Gmail with ads selected to match the topic of the current page. Indeed, Google says it plans to integrate Dmarc’s technology into its AdWords platform, “creating a new radio ad distribution channel for Google advertisers,” in the words of Google’s press release.

But there’s one big difference between Dmarc’s system and AdWords – a difference that makes Google’s announcement either puzzling or intriguing, depending on how you look at it. AdWords is all about context: Google’s software automatically analyzes the words on a search page or in an e-mail message, tries to pick the most important ones, and retrieves ads related to those words. That’s why they call it AdWords.
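To make that contextual approach concrete, here is a minimal sketch of keyword-based ad matching in Python. The ad inventory, stopword list, and scoring rule are invented for illustration; Google's production system is, of course, far more sophisticated.

```python
import re
from collections import Counter

# Toy ad inventory: each ad is tagged with the keywords that should trigger it.
# (Invented data; a real ad system uses far richer targeting signals.)
ADS = {
    "Alpine Ski Resort -- book your winter getaway": {"ski", "snow", "winter", "resort"},
    "FurWarehouse -- coats for every climate": {"fur", "coat", "chinchilla"},
    "CloudHost -- servers in 30 seconds": {"server", "hosting", "cloud"},
}

STOPWORDS = {"the", "a", "an", "and", "of", "in", "on", "to", "is", "are", "at", "for"}

def extract_keywords(text, top_n=8):
    """Pick the most frequent non-stopword terms from a page or message."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {word for word, _ in counts.most_common(top_n)}

def select_ads(text):
    """Rank ads by how many of their tagged keywords appear in the text."""
    keywords = extract_keywords(text)
    scored = [(len(tags & keywords), ad) for ad, tags in ADS.items()]
    return [ad for score, ad in sorted(scored, reverse=True) if score > 0]

page = "Fresh snow fell at the resort overnight, and ski conditions are excellent."
print(select_ads(page))   # only the ski resort ad matches this page
```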

Dmarc’s system, on the other hand, simply automates the radio industry’s existing ad-insertion process. It lets advertisers upload new radio spots, choose one market or multiple markets in which to broadcast them, and see instant reports about which ads have been played. The system is sensitive to context only in the sense that advertisers can manually specify which stations should play their ads, based on the stations’ demographics.
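For comparison, an insertion order in a system like Dmarc's boils down to an uploaded spot plus the advertiser's manual targeting choices. The sketch below is purely illustrative; the field names, station call sign, and values are invented, and Dmarc's actual data model isn't public in this detail.

```python
from dataclasses import dataclass, field

@dataclass
class InsertionOrder:
    """One uploaded radio spot plus the advertiser's manual targeting choices."""
    spot_file: str                        # the uploaded audio creative
    markets: list[str]                    # where it should air
    dayparts: list[str]                   # when it should air
    target_demo: str                      # which station demographic to match
    plays: list[str] = field(default_factory=list)   # play reports come back here

order = InsertionOrder(
    spot_file="spring_sale_30s.mp3",
    markets=["Los Angeles", "San Diego"],
    dayparts=["morning drive"],
    target_demo="adults 25-54",
)
order.plays.append("KXYZ-AM, 2006-01-17 07:42")   # an instant report of one airing
print(order.plays)
```

Nothing in the order looks at what the station is actually saying when the spot runs; the targeting is fixed up front by a human.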

In other words, Dmarc’s network is a nice hack that simplifies radio advertising. There’s nothing clever about it, in the way that AdWords cleverly reads a Web page and divines its subject matter, or in the way that Google’s original search-engine service draws on the collective wisdom of the Web, by giving the highest rank to pages with the most incoming links. One wonders why Google’s brainiacs would be interested.
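That one-line description of Google's ranking compresses the idea a bit: PageRank weighs each incoming link by the importance of the page that casts it, rather than simply counting links. Here is a minimal power-iteration sketch on an invented three-page web; the damping factor is the conventional 0.85.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank by power iteration; `links` maps each page to its outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / (len(outlinks) or len(pages))
            for target in outlinks or pages:   # a dangling page spreads its rank evenly
                new_rank[target] += share
        rank = new_rank
    return rank

# Invented three-page web: C collects the most incoming links and ranks highest.
web = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))   # -> C
```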

But here’s the intriguing possibility: Perhaps Google is working on technology that would “listen” to the word-stream in a radio program, parse its meaning, and insert ads related to the topics under discussion. The company employs plenty of PhDs who study natural language processing, including machine translation, so the idea isn’t far-fetched. How would such a system work? If your drive-time AM station broadcasts a news report about chinchilla farming in Estonia, the report might be immediately followed by an ad for fur coats (to use a politically incorrect example).
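Sketched as code, the speculation is a short pipeline: transcribe the audio, then match the transcript’s words against tagged ads, much as the contextual sketch above does for Web pages. Everything here is hypothetical, with the speech-to-text step stubbed out; Google has announced no such system.

```python
def transcribe(audio_segment):
    """Stub for a speech-to-text engine; a real system would supply this step."""
    return "estonian chinchilla farms reported record fur prices this winter"

# Toy inventory: ads tagged with the words that should trigger them.
RADIO_ADS = {
    "FurWarehouse -- coats for every climate": {"fur", "coat", "chinchilla"},
    "TractorTown -- spring planting sale": {"tractor", "harvest", "planting"},
}

def ads_for_broadcast(audio_segment):
    """Transcribe a stretch of programming and pick the ads whose tags appear in it."""
    words = set(transcribe(audio_segment).lower().split())
    scored = [(len(tags & words), ad) for ad, tags in RADIO_ADS.items()]
    return [ad for score, ad in sorted(scored, reverse=True) if score > 0]

print(ads_for_broadcast(audio_segment=None))
# -> the fur-coat ad, ready to follow the chinchilla-farming report
```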

And if such a system works for radio, what’s to stop Google from entering the TV advertising market, or even outdoor advertising? If there’s a context to work with, the distribution of any ad can be tailored. On the Web, Google – and, to be fair, Overture, which was acquired several years ago by Yahoo – have already helped advertisers solve the age-old problem of figuring out which half of their ad spending they’re wasting. If the art of radio and TV advertising could also be turned into a science, Google could soon see its profits piling up even faster.
