
Audio Software for the Moody Listener

Have a big music collection? Try organizing it by feeling.
July 19, 2006

The more music you have, the tougher it can be to find the right song. Researchers at the University of Munich in Germany think they have a solution: a digital music player that maps songs by mood.

AudioRadar plots music collections on a clickable, mood-based map (only a section is shown here). Its creators are still improving the program, but some day it may reach your iPod. (Courtesy of Otmar Hilliges, University of Munich.)

Programs such as Apple’s iTunes have the drawback of requiring their users to scroll through endless lists, says Otmar Hilliges, a graduate student in the Munich research group. “A lot of people who own iPods tell me they don’t read the list anymore,” he notes. “They remember where spatially on the list their favorite artists are and scroll – remembering how long it takes to get to the artist they want.” But this trick isn’t much help if you’re searching through several thousand songs.

In many cases, users might not even have an artist or a title in mind – but rather just a feeling for what kind of music they want to hear. They could search by genre, looking for “jazz,” for instance, but such labels don’t reveal how a song actually sounds – or, better yet, how it feels.

Some people surrender control altogether, setting their player to shuffle. The result is a mix that jerks listeners all over the map, says Paul Lamere, a software engineer at Sun Microsystems Laboratories in Santa Clara, CA. “I may get AC/DC followed by Raffi,” he says. “We call this iPod whiplash. What we really want is a button that says, ‘Play me music I like.’”

Instead, the software developed by the Munich group, AudioRadar, provides a map of songs by their sound and similarities. Using algorithms developed by other acoustical researchers over the years, it scans a music collection, measuring song qualities: tempo, chordal shifts, volume, harmony, and so on. Then it weights the songs by four key criteria: fast or slow, melodic or rhythmic, turbulent or calm, and rough or clean. (Turbulence measures the abruptness of shifts; “rough” indicates the number of shifts.)
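The four criteria can be thought of as bipolar axes, with each song scored somewhere between the two poles. As a minimal sketch (not AudioRadar's actual code), here is one way raw measurements might be rescaled onto such axes; all the feature names and ranges below are hypothetical placeholders:

```python
# Illustrative sketch: rescale raw per-song measurements onto the four
# bipolar axes described in the article, each in the range [-1, 1].
# Feature names and collection-wide ranges are invented for this example.

def to_axes(features, stats):
    """features: raw measurements for one song.
    stats: per-feature (min, max) over the whole collection."""
    def scale(key):
        lo, hi = stats[key]
        if hi == lo:
            return 0.0
        # Map lo..hi linearly onto -1..1.
        return 2.0 * (features[key] - lo) / (hi - lo) - 1.0

    return {
        "fast_slow": scale("tempo_bpm"),        # +1 fast, -1 slow
        "melodic_rhythmic": scale("melodicity"),
        "turbulent_calm": scale("turbulence"),  # abruptness of shifts
        "rough_clean": scale("roughness"),      # number of shifts
    }

stats = {"tempo_bpm": (60, 180), "melodicity": (0, 1),
         "turbulence": (0, 1), "roughness": (0, 1)}
song = {"tempo_bpm": 120, "melodicity": 0.8,
        "turbulence": 0.2, "roughness": 0.1}
axes = to_axes(song, stats)
```

Once every song is reduced to a short vector like this, comparing two songs is just a matter of comparing vectors.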

Based on these metrics, the application creates a map in which a chosen song appears at the center of the screen, with similar songs clustered in a circle around it – sort of like points of light on a radar screen. Then users can gauge, for instance, the “calmness” or “cleanness” of another music choice by its relative position on the map. Distances are scaled; a song at the circle’s outer edge, for instance, would be twice as calm (or twice as turbulent) as one halfway to the center. And the cluster rearranges itself after each new song. Thus, users can surf their collections without needing to remember every song they own. They can build mood-based playlists or let the program select the next most similar song.
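A hypothetical sketch of such a radar-style layout, under the assumption that each song is a 4-tuple of axis scores: the current song sits at the origin, and every other song is placed at a distance proportional to how much it differs.

```python
# Sketch of a radar-style layout: the chosen song is at the origin;
# each other song is placed at a distance equal to its dissimilarity
# (Euclidean distance over the four axis scores), spread evenly by angle.
import math

def layout(center, others):
    """center: 4-tuple of axis scores; others: name -> 4-tuple.
    Returns name -> (x, y) offset from the center song."""
    positions = {}
    names = sorted(others)
    for i, name in enumerate(names):
        dist = math.sqrt(sum((a - b) ** 2
                             for a, b in zip(others[name], center)))
        angle = 2 * math.pi * i / max(len(names), 1)
        positions[name] = (dist * math.cos(angle), dist * math.sin(angle))
    return positions

center = (0.0, 0.0, 0.0, 0.0)
others = {"a": (0.3, 0.0, 0.0, 0.0), "b": (0.6, 0.0, 0.0, 0.0)}
pos = layout(center, others)
```

Here song "b" differs twice as much from the center as song "a", so it lands twice as far out on the map – the scaling behavior the article describes.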

AudioRadar is different from music “discovery” engines such as Liveplasma, Pandora, and Last.fm, which help users expand their collections. These online services analyze your musical tastes and suggest new music you might like. Another program, Musipedia, allows users to hum, whistle, or play a song, and then retrieves the title and artist.

AudioRadar’s closest relatives are two other programs still under development: Playola, created by a student at Columbia University, and Search Inside the Music, by Sun Microsystems. Playola measures patterns in songs and fits them into genres – electronic, college rock, and so on. After listening to an initial song, users adjust sliders to indicate genre preferences for the next choice – a little more “singer-songwriter” and a little less “college rock,” for instance. The program provides mood-based navigation, like AudioRadar, and uses some of the same algorithms, says Dan Ellis, associate professor of electrical engineering at Columbia, who oversees Playola. Ellis says that AudioRadar offers the bonus of a user-friendly display.

Like AudioRadar, Search Inside the Music is a media player that measures song features. It displays songs as clumps of “stars” in an imaginary sky, grouped by genre and sound similarity. Users can take a musical “journey” through their collections, clicking on a starting point, say, a fast rock song, and requesting a playlist that moves toward a finale, such as a quiet classical piece, minimizing “whiplash” along the way.
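One simple way to build such a “journey” – a sketch, not Sun's actual method – is to interpolate between the start and target songs in feature space and, at each step, pick the unplayed song nearest the interpolated point:

```python
# Illustrative greedy "journey" playlist: step from a start song toward
# a target song, at each step choosing the unplayed song closest to a
# point interpolated between the two. Axis scores here are toy values.
import math

def journey(songs, start, end, length):
    """songs: name -> tuple of axis scores. Returns a playlist of names."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    playlist, pool = [start], set(songs) - {start, end}
    for step in range(1, length - 1):
        t = step / (length - 1)  # fraction of the way to the target
        goal = tuple((1 - t) * s + t * e
                     for s, e in zip(songs[start], songs[end]))
        pick = min(pool, key=lambda name: dist(songs[name], goal))
        playlist.append(pick)
        pool.discard(pick)
    playlist.append(end)
    return playlist

# Toy 4-axis scores: a fast rock song easing into a quiet classical piece.
songs = {"rock": (1.0, 1.0, 1.0, 1.0), "mid": (0.5, 0.5, 0.5, 0.5),
         "classical": (0.0, 0.0, 0.0, 0.0), "loud": (1.0, 0.0, 1.0, 0.0)}
playlist = journey(songs, "rock", "classical", 3)
```

Because each chosen song sits close to the smoothly moving target point, consecutive songs never differ sharply – which is what minimizing “whiplash” amounts to.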

“Massive music collections…are crying out for better navigation mechanisms,” says Columbia’s Ellis. Both AudioRadar and Search Inside the Music are still prototypes, though. The former will be presented at the Sixth International Symposium on Smart Graphics in Vancouver, Canada, later this month.

These programs haven’t left the labs yet mainly because they’re still inefficient. “It takes very long to extract the songs,” says Hilliges, admitting that he has not yet built his prototype to its 10,000-song capacity because he gets “frustrated” during the extraction process. With AudioRadar’s current algorithms, processing a song takes, on average, five to ten percent longer than playing it. For large collections, that can amount to many hours.

Stephen Downie, associate professor and specialist in information retrieval and multimedia at the University of Illinois at Urbana-Champaign, thinks this problem is short-lived, though. As computers and extraction algorithms get faster, systems like AudioRadar will eventually “be built into your iPod,” he predicts.

Still, these programs have other glitches. “Similarity is a human metric,” says Lamere, principal researcher on Sun Labs’ music search project, meaning it’s still a subjective phenomenon: people call songs “similar” for a variety of reasons.

Ellis says current computer programs “do a poor job of duplicating human similarity judgments…In large music collections, we frequently encounter machine similarity judgments that just make no sense to a listener – and the more diverse the collection, the more outlandish these misjudgments become.” Early versions of Search Inside the Music, for instance, grouped classical music with heavy metal, because it measured similarities by timbre of instruments. To the computer, harpsichords and heavy-metal guitars sounded similar.

These programs are also limited by a quality that’s even harder to measure: originality. “You as a human will recognize ‘Stairway to Heaven’ played on a banjo, as opposed to the original version played at the Led Zeppelin concert,” says Downie, “but these systems really can’t get it…It’s nice to see that they’re trying to commercialize [these programs],” he says, “but there’s a lot of ground yet to explore.”

