Artificial intelligence

AI could help us deconstruct why some songs just make us feel so good

Machine learning can map which musical qualities trigger what types of physical and emotional responses. One day the technique could even be used in music therapy.
November 1, 2019
A man listening to music on the radio. Courtesy of USC Viterbi School of Engineering

We all know that music is a powerful influencer. A movie without a soundtrack doesn’t provoke the same emotional journey. A workout without a pump-up anthem can feel like a drag. But is there a way to quantify these reactions? And if so, could they be reverse-engineered and put to use?

In a new paper, researchers at the University of Southern California mapped out how things like pitch, rhythm, and harmony induce different types of brain activity, physiological reactions (heat, sweat, and changes in electrical response), and emotions (happiness or sadness), and how machine learning could use those relationships to predict how people might respond to a new piece of music. The results, presented at a conference last week on the intersections of computer science and art, show how we may one day be able to engineer targeted musical experiences for purposes ranging from therapy to movies.

The research is part of the lab’s broader goal to understand how different forms of media, such as films and TV ads as well as music, affect people’s bodies and brains. “Once we understand how media can affect your various emotions, then we can try to productively use it for actually supporting or enhancing human experiences,” says Shrikanth Narayanan, a professor at USC and the principal investigator in the lab.

The researchers first scoured music streaming sites like Spotify for songs with very few plays, tagged either “happy” or “sad.” (They wanted to avoid familiar songs to minimize any confounding variables.) Through a series of human testers, 60 pieces for each emotion were narrowed down to a final list of three: two that reliably induced sadness (Ólafur Arnalds’s “Fyrsta” and Michael Kamen’s “Discovery of the Camp”) and one that reliably induced happiness (Lullatone’s “Race Against the Sunset”). One hundred participants who hadn’t heard the songs before were split into two groups; all of them listened to the three pieces, and either took an fMRI scan or wore pulse, heat, and electrical sensors on their skin while rating the intensity of their emotions on a scale of 0 to 10.
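As a rough illustration of that first step, the sketch below pulls little-known candidate tracks from the Spotify Web API using the spotipy client. The mood search terms and the popularity cutoff (a stand-in for “very few plays”) are assumptions; the paper doesn’t spell out how the songs were actually gathered.

```python
# Rough sketch of the song-gathering step, assuming the Spotify Web API via spotipy.
# The mood queries and the popularity cutoff (a proxy for "very few plays") are
# assumptions, not the researchers' actual selection criteria.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())  # credentials read from env vars

def obscure_tracks(mood, max_popularity=10, limit=60):
    """Return up to `limit` tracks matching `mood` that almost nobody listens to."""
    results = sp.search(q=mood, type="track", limit=50)
    candidates = results["tracks"]["items"]
    rarely_played = [t for t in candidates if t["popularity"] <= max_popularity]
    return rarely_played[:limit]

happy_candidates = obscure_tracks("happy")
sad_candidates = obscure_tracks("sad")
```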

The researchers then fed the data, along with 74 features for each song (such as its pitch, rhythm, harmony, dynamics, and timbre), into several machine-learning algorithms and examined which features were the strongest predictors of responses. They found, for example, that the brightness of a song (the level of its medium and high frequencies) and the strength of its beat were both among the best predictors of how a song would affect a listener’s heart rate and brain activity.
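As a minimal sketch of what such a pipeline might look like (not the team’s actual code), the example below uses librosa to pull a handful of descriptors from an audio file, including spectral brightness and beat strength, and scikit-learn to fit a model whose feature importances indicate which qualities best predict a response. The file names, heart-rate values, and small feature set are placeholders standing in for the study’s 74 features and measured responses.

```python
# Minimal sketch of the prediction step: extract a few audio descriptors
# (stand-ins for the paper's 74 features) and fit a model mapping them to a
# measured response such as mean heart rate. File names and heart-rate values
# are placeholders; librosa and scikit-learn are assumed, not the team's tools.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestRegressor

FEATURE_NAMES = ["brightness", "tempo", "beat_strength", "loudness"]

def song_features(path):
    y, sr = librosa.load(path)
    brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()  # energy in higher frequencies
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)                     # estimated tempo (bpm)
    beat_strength = librosa.onset.onset_strength(y=y, sr=sr).mean()    # how pronounced the onsets are
    loudness = librosa.feature.rms(y=y).mean()                         # rough dynamics proxy
    return np.array([brightness, float(np.mean(tempo)), beat_strength, loudness])

# Placeholder training data: one feature vector per song, one measured response per song.
songs = ["fyrsta.mp3", "discovery_of_the_camp.mp3", "race_against_the_sunset.mp3"]
X = np.array([song_features(p) for p in songs])
mean_heart_rate = np.array([62.0, 64.5, 78.1])  # hypothetical bpm averaged across listeners

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, mean_heart_rate)
for name, importance in zip(FEATURE_NAMES, model.feature_importances_):
    print(f"{name}: {importance:.2f}")  # which descriptors best predict the response
```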

The research is still at a very early stage, and it will be a while before more powerful machine-learning models can predict your mental and physical reactions to a song with any precision. But the researchers are excited about how such models could be applied: to design music for specific individuals, to create highly evocative movie soundtracks, or to help patients with mental health challenges stimulate specific parts of their brain. The lab is already working with addiction treatment clinics to see how other forms of media could help patients, and it wants to start incorporating music-based therapies as well.

More simply, the research could be used to generate playlists. “You wouldn’t want to listen to a song that’s gonna make your heart rate spike right before bedtime, but maybe you do if you’re going on a long drive and you haven’t had much coffee,” says Tim Greer, a PhD student in Narayanan’s lab who worked on the study.
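Continuing the hypothetical sketch above, a model of this kind could rank candidate songs by their predicted physiological effect and sort them into playlists:

```python
# Hypothetical playlist builder reusing `model` and `song_features` from the
# sketch above: predict each song's heart-rate effect and sort accordingly.
def rank_by_predicted_arousal(model, paths):
    feats = np.array([song_features(p) for p in paths])
    predicted_bpm = model.predict(feats)
    return sorted(zip(paths, predicted_bpm), key=lambda pair: pair[1])

ranked = rank_by_predicted_arousal(model, ["track_a.mp3", "track_b.mp3", "track_c.mp3"])
bedtime_playlist = [path for path, _ in ranked[:2]]          # calmest predictions first
road_trip_playlist = [path for path, _ in reversed(ranked)]  # most energizing first
```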
