
Software Translates Your Voice into Another Language

Research software from Microsoft synthesizes speech in a foreign language, but in a voice that sounds like yours.
March 9, 2012

Researchers at Microsoft have made software that can learn the sound of your voice, and then use it to speak a language that you don’t. The system could be used to make language tutoring software more personal, or to make tools for travelers.

In a demonstration at Microsoft’s Redmond, Washington, campus on Tuesday, Microsoft research scientist Frank Soong showed how his software could read out text in Spanish using the voice of his boss, Rick Rashid, who leads Microsoft’s research efforts. In a second demonstration, Soong used his software to grant Craig Mundie, Microsoft’s chief research and strategy officer, the ability to speak Mandarin.

Hear Rick Rashid’s voice in his native language and then translated into several other languages:

English: Listen to a clip of Rick Rashid talking normally.

Spanish: Listen to it in Spanish.

Italian: Listen to it in Italian.

Mandarin: Listen to it in Mandarin.

In English, a synthetic version of Mundie’s voice welcomed the audience to an open day held by Microsoft Research, concluding, “With the help of this system, now I can speak Mandarin.” The phrase was repeated in Mandarin Chinese, in what was still recognizably Mundie’s voice.

“We will be able to do quite a few scenario applications,” said Soong, who created the system with colleagues at Microsoft Research Asia, the company’s second-largest research lab, in Beijing, China.

“For a monolingual speaker traveling in a foreign country, we’ll do speech recognition followed by translation, followed by the final text to speech output [in] a different language, but still in his own voice,” said Soong.
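Soong's description amounts to a three-stage pipeline: recognize the traveler's speech, translate the resulting text, then synthesize the translation with a voice model trained on the speaker. The sketch below is only a conceptual outline of that flow; the function names and stub implementations are hypothetical placeholders, not Microsoft's actual components.

```python
# Conceptual sketch of the recognize -> translate -> synthesize pipeline.
# All helpers are hypothetical stubs standing in for real components.

def recognize_speech(audio: bytes, language: str) -> str:
    """Stub for a speech recognizer in the traveler's own language."""
    return "where is the train station"

def translate_text(text: str, source_lang: str, target_lang: str) -> str:
    """Stub for machine translation between the two languages."""
    return "¿dónde está la estación de tren?"

def synthesize_in_own_voice(text: str, language: str, voice_model: dict) -> bytes:
    """Stub for text-to-speech driven by a model of the speaker's voice."""
    return f"[{voice_model['speaker']} in {language}] {text}".encode("utf-8")

def speak_translation(audio: bytes, source_lang: str, target_lang: str,
                      voice_model: dict) -> bytes:
    """Chain the three stages: recognition, translation, personalized synthesis."""
    text = recognize_speech(audio, source_lang)
    translated = translate_text(text, source_lang, target_lang)
    return synthesize_in_own_voice(translated, target_lang, voice_model)

print(speak_translation(b"<audio>", "en", "es", {"speaker": "traveler"}))
```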

The new technique could also be used to help students learn a language, said Soong. Providing sample foreign phrases in a person’s own voice could be encouraging, or easier to imitate. Soong also showed how the system could improve a phone navigation app, allowing a stock synthetic English voice to seamlessly read out text from Chinese road signs as it relayed directions for a route in Beijing.

The system needs around an hour of training data to build a model that can read out any text in a person’s own voice. That model is then converted into one that can read text in another language by comparing it with a stock text-to-speech model for the target language: the individual sounds the personal model uses to build words in the speaker’s own language are carefully tweaked so that the new text-to-speech model can sound out phrases in the second language while still sounding like the speaker.
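As a rough illustration of that adaptation step, assuming a deliberately simplified representation of the models: each speech unit in the stock target-language voice is matched to the closest-sounding unit learned from the speaker's recordings, so the target-language voice inherits the speaker's acoustics. The feature vectors and distance measure below are invented for illustration and are not the actual Microsoft models.

```python
import math

# Toy acoustic features (invented numbers) for speech units learned from
# roughly an hour of the speaker's own recordings.
speaker_units = {"ah": [1.0, 0.2], "s": [0.1, 0.9], "t": [0.3, 0.7]}

# Toy features for the speech units of a stock target-language voice.
stock_target_units = {"a": [0.9, 0.3], "th": [0.2, 0.8], "r": [0.4, 0.6]}

def distance(a, b):
    """Euclidean distance between two acoustic feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def adapt_voice(stock_units, personal_units):
    """Replace each target-language unit with the closest-sounding unit
    from the speaker's own voice, keeping the target language's inventory."""
    adapted = {}
    for unit, features in stock_units.items():
        nearest = min(personal_units,
                      key=lambda u: distance(personal_units[u], features))
        adapted[unit] = personal_units[nearest]
    return adapted

print(adapt_voice(stock_target_units, speaker_units))
```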

Soong says that this approach can convert between any pair of the 26 languages the system covers, including Mandarin Chinese, Spanish, and Italian.

Preserving a person’s voice when synthesizing speech for them in another language would likely be reassuring to users, and could make interactions that rely on translation software more meaningful, says Shrikanth Narayanan, a professor at the University of Southern California, in Los Angeles, who leads a research group working on systems to translate speech in situations such as doctor-patient consultations.

“The word is just one part of what a person is saying,” he says, and to truly convey all the information in a person’s speech, translation systems will need to be able to preserve voices and much more. “Preserving voice, preserving intonation, those things matter, and this project clearly knows that,” says Narayanan. “Our systems need to capture the expression a person is trying to convey, who they are, and how they’re saying it.”

His research group is investigating how features such as emphasis, intonation, and the way people use pauses or hesitation affect the effectiveness and perceived quality of a word-for-word translation. “We’re asking if you can build systems that can mediate between people as well as just replacing the words,” he says. “I view this [Microsoft research] as a part of how you make this happen.”
