Alumni profile

Linguistics expert parses syntax, song, and Spotify

Ruth Jeannine Brillman, PhD ’17
June 27, 2018

People don’t always say what they mean or mean what they say. Imagine the problem this poses for Siri, Alexa, or Google Assistant when you ask them to “play great new music.” Decoding these mysteries is the work of smart speakers and artificial intelligence—and linguistics experts like Ruth Jeannine Brillman, PhD ’17, a research scientist at Spotify, the music, podcast, and video streaming service.

“Alexa and Siri are all about understanding how to represent language as a structured system,” she says.


While earning her undergraduate degree at New York University, Brillman planned on a journalism career. But given her love of numbers, she found linguistics an ideal combination of language and math.

As a linguistics grad student at MIT, she took computer science and artificial-intelligence classes and completed several internships focusing on technical language, such as one that involved developing product specifications for a microprocessor. “I started to realize that there were a lot of software companies working on how to model human language,” says Brillman. After an internship working on Alexa at Amazon in 2016, she realized how valuable it was to have people with a linguistics background developing and fine-tuning voice commands. “My favorite part of the work I did at Amazon was looking at the ways that people talk to voice assistants about music, and I was thrilled to find a job that centered around that issue,” she says.

After she earned her PhD at MIT, Brillman’s focus turned more technical.

“Software companies and linguists look at language in very different ways,” she says. “You need people with a computer science background to design and program these platforms, but you also need people who understand how to treat the more nuanced components of language so you can really understand what the user is asking for.”

At Spotify, she is working on how to respond to voice commands made through a smart device like Google Home or Amazon’s Echo. “We have to figure out a way to return a different, correct answer for all different requests,” says Brillman, who helps create the code that produces those responses.
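Turning a spoken request into a correct response usually starts with mapping the utterance to an intent and extracting the details (the "slots") it contains. The sketch below is purely illustrative: production voice assistants use trained natural-language-understanding models, and the pattern, intent names, and function here are hypothetical, not Spotify's actual code.

```python
import re

# Toy slot-filling for a music voice command (illustrative only).
# A single regex captures everything after "play" as the search query.
PLAY_PATTERN = re.compile(r"^play\s+(?P<query>.+)$", re.IGNORECASE)

def parse_command(utterance: str) -> dict:
    """Map a raw utterance to an intent with its extracted slots."""
    match = PLAY_PATTERN.match(utterance.strip())
    if match:
        return {"intent": "play_music", "query": match.group("query")}
    return {"intent": "unknown", "query": None}

print(parse_command("play great new music"))
# {'intent': 'play_music', 'query': 'great new music'}
```

The hard part, as Brillman notes, is what happens after this step: "great new music" is not a track title, so the system has to interpret it as a request for curated recommendations rather than a literal search.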

“Working at Spotify is great—the people who use Spotify really love it,” she says. A Somerville resident, she counts herself among the enthusiastic users, having used its music curation for both her mom’s recent wedding and her own wedding reception last July. And, of course, being a Spotify employee means it’s part of her job to listen to a lot of great new music every day. 
