OK Computer, Write Me a Song

Google says its AI software could make creative suggestions to help musicians, architects, and visual artists.

Last summer the Internet was overrun by psychedelic images of swirling skies sprouting dog faces and Van Gogh masterpieces embellished with dozens of staring eyes. By running their image-recognition algorithms in reverse, Google researchers had found they could generate images that some call art. At an auction in February, a print made using their “DeepDream” software fetched $8,000.

Fun as they are, though, DeepDream images are limited, says Douglas Eck, a researcher in Google’s main artificial intelligence research group, Google Brain. Last week he announced a new Google project, called Magenta, aimed at building new kinds of creative software that can generate more sophisticated artworks in music, video, and text.

Magenta will draw on Google’s latest research into artificial neural networks, which underpin what CEO Sundar Pichai calls his company’s “AI first” strategy. Eck says he wants to help artists, creative professionals, and just about anyone else experiment and even collaborate with creative software capable of generating ideas.

“As a writer you could be getting from a computer a handful of partially written ideas that you can then run with,” says Eck. “Or you’re an architect and the computer generates a few directions for a project you didn’t think of.”

Those scenarios are a ways off. But at an event on creativity and AI hosted by Google last week, Project Magenta collaborator Adam Roberts demonstrated prototype software that gives a hint of how a musician might collaborate with a creative machine.

Roberts tapped out a handful of notes on a virtual Moog synthesizer. At the click of a mouse, the software extrapolated them into a short tune, complete with key changes and recurrent phrases. The software learned to do that by analyzing a database of nearly 4,500 popular music tunes.

A short tune written by AI, with drums added by a human.

Eck thinks it learned how to make key changes and melodic loops because it uses a crude form of attention, loosely inspired by human cognition, to extract useful information from the past tunes it analyzed. Researchers at Google and elsewhere are using attention mechanisms as a way to make learning software capable of understanding complex sentences or images.
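For readers who want a mechanical picture of what “attention” means here, below is a minimal, hypothetical sketch in Python of dot-product attention over the notes played so far. The embeddings, weights, and dimensions are random stand-ins for parameters that would normally be learned from a corpus of tunes; it illustrates the general idea, not Magenta’s actual model.

# Toy illustration of dot-product attention over a short note sequence.
# All names and numbers are hypothetical; this is not Magenta's code.
import numpy as np

rng = np.random.default_rng(0)

NUM_PITCHES = 128          # MIDI pitch range
EMBED_DIM = 16             # size of each note embedding (arbitrary)

# Pretend these were learned from a corpus of tunes; here they are random.
note_embeddings = rng.normal(size=(NUM_PITCHES, EMBED_DIM))
output_weights = rng.normal(size=(EMBED_DIM, NUM_PITCHES))

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def predict_next_note(melody):
    """Attend over the notes played so far and score each candidate next pitch."""
    history = note_embeddings[melody]          # (len(melody), EMBED_DIM)
    query = history[-1]                        # use the latest note as the query
    scores = history @ query / np.sqrt(EMBED_DIM)
    weights = softmax(scores)                  # how much each past note matters
    context = weights @ history                # weighted summary of the past
    logits = context @ output_weights
    return softmax(logits)                     # distribution over the next pitch

melody = [60, 62, 64, 65, 67]                  # C D E F G, tapped out by the user
probs = predict_next_note(melody)
print("most likely next pitch:", int(probs.argmax()))

The point of the weighted sum is that the model can draw on any earlier note, not just the most recent one, which is roughly what lets such a system pick up on recurring phrases.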

Ideas that helped Google’s AlphaGo software beat one of the world’s top Go players this year could also help Google’s quest for creative software.

AlphaGo’s design made use of an approach called reinforcement learning, in which software picks up new skills a little like an animal—it is programmed to try to maximize a virtual reward (see “How Google Plans to Solve Artificial Intelligence”).

The technique is seen as one of the most promising ways to transition from machine learning that’s good at just pattern recognition—like transcribing speech—to software that is capable of planning and taking actions in the world (see “This Factory Robot Learns a New Job Overnight”).
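As a concrete, if toy, picture of what “maximizing a virtual reward” looks like, here is a minimal sketch of tabular Q-learning in Python. The five-cell corridor environment and every number in it are invented for illustration; AlphaGo and Magenta rely on far more sophisticated methods built on the same underlying idea.

# Toy illustration of reinforcement learning (tabular Q-learning):
# the agent learns, by trial and error, which actions maximize a virtual reward.
# The environment is a hypothetical 5-cell corridor with a reward at the far end.
import random

NUM_STATES = 5
ACTIONS = [-1, +1]              # step left or step right
GOAL = NUM_STATES - 1           # reaching the last cell earns the reward

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
q = [[0.0, 0.0] for _ in range(NUM_STATES)]   # Q-value for each (state, action)

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise pick the action with the best Q-value.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), NUM_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[state][a] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][a])
        state = next_state

print("learned preference per cell (left vs. right Q-values):")
for s, (left, right) in enumerate(q):
    print(f"cell {s}: left={left:.2f} right={right:.2f}")

After training, the “step right” values dominate in every cell: the agent has discovered, purely from the reward signal, which behavior pays off.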

Eck thinks reinforcement learning could make software capable of more complex artworks. The sample tunes from Magenta’s current demo, for example, lack the kind of larger structure we expect in a song; a reward signal that favors such structure could, in principle, push the software toward it.

Magenta’s software is all being released as open source, in the hope that programmers and artists will experiment with ideas like that. Eck also hopes one day to enlist the public’s help in training Magenta’s software, releasing its music or other creations and gathering feedback on them.

Google’s project could bring more attention and resources to a field of research that has a long history in academia but remains smaller than areas of artificial intelligence with more obvious business applications, says Mark Riedl, an associate professor at Georgia Tech who builds software that generates stories and video games.

Yet one effect of that could be to improve the machine-learning products that Google and others are unleashing on consumers. Humans use their powers of creativity all the time, not just when making art; in conversation, for instance, we make jokes and reach for metaphors. Adding even a tiny dash of creativity to the language used by a chatbot could make it much nicer to use, says Riedl.

However, Riedl notes that Google’s move into creative artificial intelligence is unlikely to yield quick progress on a question that looms over the field of computational creativity: can a machine ever be an artist in its own right, not just a tool directed by a human artist?

Good human artists generally start out emulating established artists before developing new styles and genres of their own, guided by an evolving artistic motivation, says Riedl. How software could develop artistic autonomy is unclear. “Neural networks are kind of in the imitation mode,” he says. “You can pipe in the works of the classics and they’ll learn patterns, but they need to learn creative intent somewhere.”
