AI Songsmith Cranks Out Surprisingly Catchy Tunes

Google’s songwriting program learns by combining statistical learning and explicit rules—the same approach may make it easier for engineers to shape other AI programs.
November 30, 2016

The piano ditty below, which ascends jauntily, then finishes with a tuneful flourish, sounds a bit like a jingle composed for the latest toothpaste campaign.

The tune was, in fact, dreamed up by a musical AI program developed at Google. And the program’s latest compositions show how combining a powerful machine-learning approach with simple musical rules can produce creative works that sound remarkably human.

Music composition is an enigmatic form of human creativity. Songwriting programs already exist, but they typically follow a specific set of rules, and they tend to produce tunes that feel rigid and mechanical. The same is true of software that recommends music based on your listening habits (see “The Hit Charade”). Teaching computers to be more musically inventive may point to ways that machines can help with other creative acts, from designing products to writing eloquent text.

Google has previously demonstrated its music-generating AI songsmith, which is part of a project called Magenta that’s aimed at fostering artificial creativity (see “OK, Computer, Write Me a Song”). A large neural network is fed tens of thousands of songs and is trained to predict the next note in a sequence. Such a network can also generate new music when given a starting point, although the results tend to lack structure and grace.
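To make that prediction step concrete, here is a minimal sketch, in Python with PyTorch rather than Magenta's own code, of a network trained to guess the next note in a sequence and then sampled to extend a seed melody. The vocabulary size, hyperparameters, and toy training data are illustrative assumptions, not details taken from the Google system.

```python
# Minimal sketch of next-note prediction and sampling (illustrative only;
# not Magenta's implementation). Vocabulary, sizes, and data are assumptions.
import torch
import torch.nn as nn

NUM_NOTES = 38  # assumed vocabulary: a range of pitches plus rest/hold tokens

class NoteRNN(nn.Module):
    def __init__(self, vocab=NUM_NOTES, embed=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, notes, state=None):
        out, state = self.lstm(self.embed(notes), state)
        return self.head(out), state  # logits over the next note at each step

model = NoteRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training batch: each row stands in for a song; the target is the same
# sequence shifted by one step, i.e. "predict the next note."
songs = torch.randint(0, NUM_NOTES, (16, 33))
inputs, targets = songs[:, :-1], songs[:, 1:]
logits, _ = model(inputs)
loss = loss_fn(logits.reshape(-1, NUM_NOTES), targets.reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()

# Generation: start from a seed note and repeatedly sample from the predicted
# distribution, feeding each sampled note back into the network.
melody, note, state = [], torch.tensor([[22]]), None
with torch.no_grad():
    for _ in range(32):
        logits, state = model(note, state)
        probs = torch.softmax(logits[:, -1], dim=-1)
        note = torch.multinomial(probs, 1)
        melody.append(note.item())
print(melody)
```

Trained this way alone, the network imitates local note patterns well but has no notion of overall structure, which is the gap the reinforcement-learning step described next is meant to close.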

Douglas Eck, a research scientist at Google who leads development of the music-generating AI, and Natasha Jaques, an intern at the company, recently devised a way to make the songwriting system produce much more elegant and catchy tunes. They use an approach known as reinforcement learning to add simple principles of music theory—avoid repeating a refrain too often, do not play too quickly or slowly, and so on—to the overall learning process. The network receives a positive reward every time it produces a sequence of notes that not only resembles the patterns seen in previous songs but also adheres to the musical rules it has been given.
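As a rough illustration of that reward, the sketch below combines the log-probability the pretrained note-prediction network assigns to a chosen note with a couple of toy music-theory checks. It is an assumption about the shape of the approach, not the researchers' code; the specific rules and the weighting constant c are made up for the example.

```python
# Hedged sketch of a combined reward: data-driven likelihood plus rule checks.
# The rules and the weight c are illustrative assumptions, not the paper's values.

def music_theory_reward(melody, note):
    """Toy rule checks: discourage repeating one note too long and
    discourage melodic leaps larger than an octave (12 semitones)."""
    reward = 0.0
    if len(melody) >= 3 and all(n == note for n in melody[-3:]):
        reward -= 1.0  # the same note has been repeated too often
    if melody and abs(note - melody[-1]) > 12:
        reward -= 0.5  # awkwardly large jump from the previous note
    return reward

def combined_reward(log_prob_from_note_rnn, melody, note, c=0.5):
    """The agent is rewarded when a note both resembles patterns from the
    training songs (high log-probability) and obeys the music-theory rules."""
    return log_prob_from_note_rnn + c * music_theory_reward(melody, note)

# Example: a note that extends a three-note repetition is penalized even if
# the data-driven model considers it likely.
print(combined_reward(-0.2, melody=[60, 60, 60], note=60))  # -0.7
```

Reinforcement learning then nudges the generator toward notes that maximize this combined score across whole melodies, which is how the rule-following behavior gets folded into what the network learned from real songs.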

“These are simple rules taken from a music composition textbook,” Eck says. “The combination of these rules with reinforcement learning, and the variance of the real world coming from thousands of human compositions, gives us songs that are so catchy—they scratch some itch.”

The new approach, described in a research paper and a blog post, certainly seems to improve automated music generation. Another snippet of music shows how the program fares without these rules to follow. The piece feels flat, repetitive, and mechanical. Eck and Jaques also conducted a user study and found that people much preferred the compositions produced using the new technique.

Eck says the ability to embed rules in reinforcement learning will be useful in many areas, including robotics, recommendation systems, and language translation.

“There is no reason why machines cannot be curious and creative,” says Jürgen Schmidhuber, a professor at the University of Lugano in Switzerland who performed pioneering research on the type of neural networks used by Google’s researchers, and who has experimented with creativity using reinforcement learning. Schmidhuber adds that the approach could have a range of practical applications beyond music. “One could imagine similar combinations of [neural networks] and traditional rule-based expert systems for medical diagnosis,” he says.

Reinforcement learning offers a way to teach machines to do things that would be difficult to achieve through explicit instruction. The technique was employed by AlphaGo, a program developed by Google researchers to play the ancient board game Go. While the rules of Go are simple, it is hard to explain how to play well, and players normally develop an intuitive aptitude through many hours of practice. But sometimes it may be useful to be able to give explicit instruction to a machine-learning system as well.

Stevan Harnad, a professor of psychology at the University of Quebec in Canada who has studied artificial creativity, says the Magenta work is impressive, but adds that there is still a long way to go before computers can be credited with real, human-like creativity. “Deep-learning algorithms are very promising, but so far they have not yet duplicated ordinary, noncreative human capacity, so it’s a bit premature to expect them to be creative,” he says.

In fact, Harnad says, even compositions like those produced by the Google team often appear mechanical after a few listens.
