
Facebook’s new polyglot AI can translate between 100 languages

The model, which combines several automated data-curation and machine-learning techniques, is being open-sourced to the research community.
October 19, 2020
[Image: An English-Basque dictionary. Edurne Chopeitia / Unsplash]

The news: Facebook is open-sourcing a new AI language model called M2M-100 that can translate between any pair among 100 languages. Of the 4,950 possible language combinations, it translates 1,100 of them directly. This is in contrast to previous multilingual models, which rely heavily on English as an intermediary. A Chinese-to-French translation, for example, typically passes from Chinese to English and then from English to French, which increases the chance of introducing errors.
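To see why pivoting matters, here is a minimal sketch contrasting the two approaches. The `translate()` function is a hypothetical stand-in for any machine-translation model, not Facebook's actual API; the point is simply that pivoting runs two passes, so errors from the first feed into the second.

```python
def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder for a single translation pass from src to tgt (hypothetical)."""
    raise NotImplementedError

def pivot_translate(text: str, src: str, tgt: str) -> str:
    # Two passes: any error introduced going into English is carried forward.
    english = translate(text, src, "en")
    return translate(english, "en", tgt)

def direct_translate(text: str, src: str, tgt: str) -> str:
    # One pass: the approach M2M-100 takes for its directly supported pairs.
    return translate(text, src, tgt)
```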

Data curation: The model was trained on 7.5 billion sentence pairs. To compile a data set that large, the researchers relied heavily on automated curation. They used web crawlers to scrape billions of sentences from the web and another model, fastText, to identify the language of each one. (They didn’t use any Facebook data.) Then they used a program called LASER 2.0, developed previously by Facebook’s AI research lab, which uses unsupervised learning (machine learning that doesn’t require manually labeled data) to match sentences across languages by their meaning.
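As a rough illustration of the language-identification step, the sketch below uses the public fastText library with its pretrained lid.176.bin language-ID model. That specific model is an assumption for illustration; the article does not say which fastText model Facebook used internally.

```python
import fasttext

# Pretrained language-ID model, downloadable from fasttext.cc (assumption for this sketch).
lid_model = fasttext.load_model("lid.176.bin")

def detect_language(sentence: str) -> tuple[str, float]:
    # predict() expects a single line of text; labels come back as e.g. "__label__fr".
    labels, scores = lid_model.predict(sentence.replace("\n", " "), k=1)
    return labels[0].replace("__label__", ""), float(scores[0])

print(detect_language("Le modèle traduit directement entre cent langues."))
# e.g. ('fr', 0.99)
```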

LASER 2.0 creates what are known as “embeddings” from large, unstructured data sets of sentences. It trains on the available sentence examples within each language and maps out their relationships to one another based on how often and how close together they’re used. These embeddings help the machine-learning model approximate the meaning of each sentence, which then allows LASER 2.0 to automatically pair up sentences that share the same meaning in different languages.
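The sketch below shows the general idea of matching sentences by embedding similarity: encode sentences from two languages into the same vector space, then pair each source sentence with its nearest neighbor if the cosine similarity is high enough. The `embed()` function is a hypothetical stand-in for a multilingual encoder such as LASER, and the greedy pairing is an illustration, not Facebook's actual mining pipeline.

```python
import numpy as np

def embed(sentences: list[str], lang: str) -> np.ndarray:
    """Hypothetical multilingual encoder: one fixed-size vector per sentence."""
    raise NotImplementedError

def mine_pairs(src_sents, src_lang, tgt_sents, tgt_lang, threshold=0.8):
    src_vecs = embed(src_sents, src_lang)
    tgt_vecs = embed(tgt_sents, tgt_lang)
    # Normalize so the dot product equals cosine similarity.
    src_vecs /= np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt_vecs /= np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sims = src_vecs @ tgt_vecs.T
    pairs = []
    for i, row in enumerate(sims):
        j = int(row.argmax())
        if row[j] >= threshold:  # keep only confident matches
            pairs.append((src_sents[i], tgt_sents[j], float(row[j])))
    return pairs
```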

Pairing languages: The researchers focused on the language combinations they believed would be requested most often. They grouped languages according to linguistic, geographic, and cultural similarities, on the assumption that people who live in the same region communicate with one another more often. One language group, for example, included the most common languages spoken in India, including Bengali, Hindi, Tamil, and Urdu. LASER 2.0 then restricted its search for sentence pairs to the possible language combinations within each group.
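Illustratively, restricting mining to within-group pairs looks like the sketch below. The South Asian group reflects the example in the article; the second group and its members are hypothetical, added only to show how pairs are enumerated.

```python
from itertools import combinations

language_groups = {
    "south_asia": ["bn", "hi", "ta", "ur"],     # Bengali, Hindi, Tamil, Urdu (from the article)
    "romance": ["es", "fr", "it", "pt", "ro"],  # hypothetical second group for illustration
}

def within_group_pairs(groups):
    # Only pairs inside the same group are targeted for sentence mining.
    pairs = set()
    for langs in groups.values():
        pairs.update(combinations(sorted(langs), 2))
    return sorted(pairs)

print(within_group_pairs(language_groups))
# e.g. [('bn', 'hi'), ('bn', 'ta'), ..., ('es', 'fr'), ...]
```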

Ongoing challenges: Languages spoken in parts of Africa and Southeast Asia still suffer from poor translation quality because too little text in those languages is available to scrape from the web, says Angela Fan, the lead researcher on the project. Given the reliance on web data, the researchers also need techniques for identifying and removing embedded sexism, racism, and other discriminatory biases. So far they have used a profanity filter to clean up some of the most egregious language, but it is mostly limited to English.
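A crude sketch of what such a filter over mined sentence pairs might look like is below. The blocklist terms are placeholders and the English-only check mirrors the limitation described above; the exact filter Facebook used is not public.

```python
BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real blocklist

def is_clean(sentence: str) -> bool:
    # Tokenize crudely and check against the blocklist.
    tokens = {t.strip(".,!?\"'").lower() for t in sentence.split()}
    return tokens.isdisjoint(BLOCKLIST)

def filter_pairs(pairs, english_side=0):
    # Only the English half of each pair is checked, mirroring the
    # English-centric limitation the researchers describe.
    return [p for p in pairs if is_clean(p[english_side])]
```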

Research only: Facebook has no current plans to use the model in its products. M2M-100 is meant for research purposes only, says Fan. Ultimately, however, the goal is for the model to improve on and expand Facebook’s existing translation capabilities. Applications could include user communication (for example, the feature that allows people to translate posts into their native language) and perhaps content moderation.

