We all know that music is a powerful influencer. A movie without a soundtrack doesn’t provoke the same emotional journey. A workout without a pump-up anthem can feel like a drag. But is there a way to quantify these reactions? And if so, could they be reverse-engineered and put to use?
In a new paper, researchers at the University of Southern California mapped out how musical qualities such as pitch, rhythm, and harmony induce different types of brain activity, physiological reactions (changes in body heat, sweat, and the skin’s electrical response), and emotions (happiness or sadness), and how machine learning could use those relationships to predict how people might respond to a new piece of music. The results, presented last week at a conference on the intersection of computer science and art, show how we may one day be able to engineer targeted musical experiences for purposes ranging from therapy to movie soundtracks.
The research is part of the lab’s broader goal to understand how different forms of media, such as films and TV ads as well as music, affect people’s bodies and brains. “Once we understand how media can affect your various emotions, then we can try to productively use it for actually supporting or enhancing human experiences,” says Shrikanth Narayanan, a professor at USC and the principal investigator in the lab.
The researchers first scoured music streaming sites like Spotify for songs with very few plays that were tagged either “happy” or “sad.” (They wanted unfamiliar songs to minimize confounding variables.) With the help of human testers, they narrowed 60 pieces for each emotion down to a final list of three: two that reliably induced sadness (Ólafur Arnalds’s “Fyrsta” and Michael Kamen’s “Discovery of the Camp”) and one that reliably induced happiness (Lullatone’s “Race Against the Sunset”). One hundred participants who hadn’t heard the songs before were split into two groups; all listened to the three pieces, with one group undergoing fMRI scans and the other wearing sensors that tracked their pulse, body heat, and skin’s electrical response, and everyone rated the intensity of their emotions on a scale of 0 to 10.
The researchers then fed the data, along with 74 features for each song (such as its pitch, rhythm, harmony, dynamics, and timbre), into several machine-learning algorithms and examined which features were the strongest predictors of responses. They found, for example, that the brightness of a song (the level of its medium and high frequencies) and the strength of its beat were both among the best predictors of how a song would affect a listener’s heart rate and brain activity.
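The feature-importance step described above can be sketched in code. The snippet below is illustrative only: the feature names, the synthetic data, and the choice of a random-forest regressor are assumptions for the sake of the example, not details from the paper, which does not specify its exact models or data format.

```python
# Hypothetical sketch: predict a physiological response (say, mean heart
# rate) from a handful of acoustic features, then rank which features
# drive the prediction. Data and feature names are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["brightness", "beat_strength", "tempo", "harmonic_complexity"]

# Synthetic dataset: 200 "songs", each described by four of the
# (in the study, 74) acoustic features.
X = rng.normal(size=(200, len(feature_names)))

# Assume the response depends mostly on brightness and beat strength,
# mirroring the study's finding that these were strong predictors.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Rank features by how much each one contributes to the model's predictions.
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

In a setup like this, the importance scores sum to 1, and the features that actually drive the response float to the top of the ranking, which is the same logic the researchers used to single out brightness and beat strength.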
The research is still in very early stages, and it will be a while before even more powerful machine-learning models can predict your mental and physical reactions to a song with any precision. But the researchers are excited about how such models could be applied: to design music for specific individuals, to create highly evocative movie soundtracks, or to help patients with mental health challenges stimulate specific parts of their brain. The lab is already working with addiction treatment clinics to see how other forms of media could help patients, and it wants to start incorporating music-based therapies as well.
More simply, the research could be used to generate playlists. “You wouldn’t want to listen to a song that’s gonna make your heart rate spike right before bedtime, but maybe you do if you’re going on a long drive and you haven’t had much coffee,” says Greer, one of the study’s authors.