
Auto-Mash-up Your Favorite Tracks

API “automagically” transforms music in bizarre and wonderful ways.

This is the video for Boom Boom Pow with the beats reversed - you could call it Pow Boom Boom. It took less code to generate than it took to embed it into this blog post.

That bit of magic was accomplished by developer Paul Lamere using a subset of the Echo Nest Remix API, a platform that gives programmers access to the large and growing database of song characteristics (and song-processing tools) maintained by the Echo Nest, a National Science Foundation-funded music analysis service.

The Remix API is sort of a Swiss Army knife for weird (and occasionally useful) remix tricks. What makes it remarkable is that it can deconstruct music, identify its different elements, and then seamlessly reconstruct the song in whatever form a user likes. "Automagically."
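To make the deconstruct-and-reconstruct idea concrete, here is a minimal sketch of the beat-reversal trick behind "Pow Boom Boom." It assumes the analysis step has already split the audio into one array of samples per beat; the real Remix API returns richer segment objects, so the plain NumPy arrays here are stand-ins.

```python
import numpy as np

def reverse_beats(beats):
    """Return the song's samples with the beat *order* reversed.

    Each beat's audio still plays forward, so the result sounds like
    the song rearranged, not like a tape played backwards.
    """
    return np.concatenate(beats[::-1])

# Three fake "beats" of four samples each, labeled by value so the
# reordering is easy to see.
beats = [np.full(4, i, dtype=float) for i in range(3)]
remixed = reverse_beats(beats)
```

Once a song is just a list of beat-sized chunks, most of these remix tricks reduce to ordinary list manipulation followed by re-rendering the audio.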

Stretching Songs Without Changing The Tempo

One of the cleverest demonstrations of its power is its ability to shrink (or expand) the length of any song by identifying and seamlessly repeating loops within it. This feature is called Earworm, after "a portion of a song or other music that repeats compulsively within one's mind."

Earworm accomplishes this by first constructing a network graph of the piece:

According to its developers, “Each node in the graph is a beat in the song, and an edge exists between two nodes if the two beats, and the several beats that follow them, sound similar (close in timbre and pitch). The graph shows us where we can make seamless transitions between different parts of the song. Stretching (or shrinking) the song is then just a matter of minimizing the number of “loop” points to reach a requested duration.”
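The graph the developers describe can be sketched in a few lines of Python. This is illustrative only: the made-up per-beat feature vectors below stand in for the Echo Nest's actual timbre and pitch analysis, and the distance threshold is arbitrary. An edge (i, j) means "jumping from beat i to beat j would be seamless," because beats i and j, and the few beats that follow each, sound alike.

```python
import numpy as np

def beat_graph(features, lookahead=2, threshold=1.0):
    """Return edges (i, j) where beats i..i+lookahead sound like
    beats j..j+lookahead, per a simple Euclidean distance test."""
    n = len(features)
    edges = []
    for i in range(n - lookahead):
        for j in range(n - lookahead):
            if i == j:
                continue
            # Compare each beat along with the several beats that follow it.
            dist = sum(
                np.linalg.norm(features[i + k] - features[j + k])
                for k in range(lookahead + 1)
            )
            if dist < threshold:
                edges.append((i, j))
    return edges

# Synthetic "song": a 4-beat phrase repeated once, so beats 0-3 recur
# as beats 4-7 and cross-edges appear between the two copies.
rng = np.random.default_rng(0)
phrase = rng.normal(size=(4, 12))
features = np.vstack([phrase, phrase])
edges = beat_graph(features)
```

With real analysis data the graph is much denser, but the principle is the same: repeated phrases produce edges, and edges are places where the song can loop or skip without an audible seam.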

The results are nothing short of astonishing. Take, for example, the track "If I Ever Feel Better" by the band Phoenix. Using Earworm, the developers easily transformed it into a seamless 10-minute rendition they call If I Ever Feel Longer. They also transformed it into a version that's one quarter as long as the original, which is "the shortest path through the song with reasonable transitions."
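The "shortest path through the song" can be sketched too. In this toy version (my simplification, not Earworm's actual optimizer, which also handles stretching by inserting loops), every beat links forward to the next beat, plus extra "jump" edges wherever the beat graph found a seamless transition; shrinking the song is then a breadth-first search from the first beat to the last.

```python
from collections import deque

def shortest_rendition(n_beats, jumps):
    """Return the shortest beat sequence from beat 0 to the last beat,
    moving forward one beat at a time or taking a seamless jump."""
    graph = {i: [i + 1] for i in range(n_beats - 1)}
    graph[n_beats - 1] = []
    for src, dst in jumps:
        graph[src].append(dst)
    # Breadth-first search guarantees the fewest beats played.
    queue = deque([[0]])
    seen = {0}
    while queue:
        path = queue.popleft()
        if path[-1] == n_beats - 1:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# An 8-beat song where beat 1 can jump seamlessly to beat 6.
path = shortest_rendition(8, jumps=[(1, 6)])
```

Playing only the beats on that path yields the abbreviated rendition; chaining loops instead of skips yields the hour-long one.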

If you prefer classic rock, here’s the hour-long, 55 megabyte extended jam version of the Rolling Stones’ Can’t You Hear Me Knocking.

More Cowbell

"More Cowbell," a phrase from a Saturday Night Live skit that has jumped the shark several times over, is given new life by MoreCowbell.dj, based on the same technology. Here's the Scissor Sisters' new track Invisible Light with more cowbell. (My only regret is that I didn't push the "Christopher Walken" slider up even higher.)

Sure, any good pair of decks can automatically beat-match for you. But can they beat-match any three songs in the Echo Nest database, in any order? ThisIsMyJam.com is a cubicle dance party in sector Q-6 waiting to happen.

Ringo wasn’t a very precise drummer, but the Remix API can correct his sloppy drumming too.
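The timing correction alluded to here is essentially quantization: snap each detected drum hit to the nearest point on an ideal beat grid. A minimal sketch, using plain timestamps rather than the audio segments the real API manipulates:

```python
def quantize(hits, tempo_bpm, subdivisions=2):
    """Snap hit times (seconds) to the nearest grid slot.

    At 120 BPM with two subdivisions per beat, the grid falls
    every 0.25 seconds (eighth notes).
    """
    grid = 60.0 / tempo_bpm / subdivisions  # seconds per grid slot
    return [round(t / grid) * grid for t in hits]

# Slightly early and late hits at 120 BPM.
sloppy = [0.02, 0.48, 1.03, 1.51]
tight = quantize(sloppy, tempo_bpm=120)
```

The remaining (harder) step, which this sketch omits, is time-shifting the actual audio segments to the corrected positions without introducing clicks.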

Echo Nest's API is free for any developer who wants to apply for a key. There are dozens more examples in the project's Google Code library. Here's one worth trying, straight from the developers:

It’s easy to make your own earworm, even without audio. Install beta pyechonest, install remix, and cd to the earworm example:

> python earworm.py INXS 'Need You Tonight'

Wait a moment for the audio and analysis to download, and before you know it, you'll have a 10-minute version of "Need You Tonight" by INXS. What you do next is up to you…
