Facebook’s new polyglot AI can translate between 100 languages

The model, a culmination of various automated and machine learning techniques, is being open-sourced to the research community.
October 19, 2020
[Image: An English-Basque dictionary. Edurne Chopeitia / Unsplash]

The news: Facebook is open-sourcing a new AI language model called M2M-100 that can translate between any pair among 100 languages. Of the 4,450 possible language combinations, it translates 1,100 of them directly. This is in contrast to previous multilingual models, which rely heavily on English as an intermediary. A Chinese-to-French translation, for example, typically passes from Chinese to English and then from English to French, which increases the chance of introducing errors.
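The difference between direct and English-pivoted translation can be sketched in a few lines. This is a toy illustration, not Facebook's system: the `direct_models` table and the `route` helper are hypothetical, and each "model" here simply records the chain of hops a request would take.

```python
# Language pairs for which a direct model exists (hypothetical examples).
direct_models = {("zh", "fr"), ("fr", "zh"), ("en", "fr"), ("fr", "en"),
                 ("zh", "en"), ("en", "zh")}

def route(src: str, tgt: str) -> list[tuple[str, str]]:
    """Return the chain of translation hops used for src -> tgt."""
    if (src, tgt) in direct_models:
        return [(src, tgt)]  # one hop, no pivot
    # Fall back to pivoting through English, as earlier multilingual
    # systems typically did. Each extra hop can compound errors.
    return [(src, "en"), ("en", tgt)]

print(route("zh", "fr"))   # direct: [('zh', 'fr')]
print(route("hi", "ta"))   # pivoted: [('hi', 'en'), ('en', 'ta')]
```

The point of a model like M2M-100 is to grow the first branch, so that more pairs never take the two-hop, error-compounding path.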

Data curation: The model was trained on 7.5 billion sentence pairs. In order to compile a data set that large, the researchers relied heavily on automated curation. They used web crawlers to scrape billions of sentences from the web and had another language model called FastText identify the language. (They didn’t use any Facebook data.) Then they used a program called LASER 2.0, developed previously by Facebook’s AI research lab, which uses unsupervised learning—machine learning that doesn’t require manually labeled data—to match sentences across languages by their meaning.
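The curation pipeline above can be sketched as: scrape sentences, identify each one's language, then bucket them for cross-lingual matching. In this toy version the "crawled" sentences are hard-coded, and `identify_language` is a crude keyword check standing in for a FastText-style classifier, not the real FastText API.

```python
crawled = [
    "The cat sat on the mat.",
    "Le chat est assis sur le tapis.",
    "The weather is nice today.",
]

def identify_language(sentence: str) -> str:
    """Stand-in for a FastText-style language classifier: a crude keyword check."""
    french_markers = {"le", "la", "est", "sur"}
    words = set(sentence.lower().rstrip(".").split())
    return "fr" if words & french_markers else "en"

# Bucket sentences by predicted language, as the crawl stage would,
# before a matcher like LASER 2.0 pairs them up across buckets.
by_language: dict[str, list[str]] = {}
for s in crawled:
    by_language.setdefault(identify_language(s), []).append(s)

print(by_language["fr"])   # ['Le chat est assis sur le tapis.']
```

The real pipeline does the same thing at billions-of-sentences scale, with a trained classifier in place of the keyword check.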

LASER 2.0 creates what are known as “embeddings” from large, unstructured data sets of sentences. It trains on the available sentence examples within each language and maps out their relationships to one another based on how often and how close together they’re used. These embeddings help the machine-learning model approximate the meaning of each sentence, which then allows LASER 2.0 to automatically pair up sentences that share the same meaning in different languages.
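The pairing step can be illustrated with cosine similarity over a shared embedding space: each sentence in one language is matched to its nearest neighbor in another. The embeddings below are toy hand-made vectors, not real LASER 2.0 output, and the matching is a bare nearest-neighbor search rather than the mining procedure Facebook actually uses.

```python
import math

# Toy "embeddings" for sentences in two languages, placed in one shared space.
en = {"The cat sat.": [0.9, 0.1, 0.0],
      "It is raining.": [0.0, 0.2, 0.9]}
fr = {"Le chat est assis.": [0.85, 0.15, 0.05],
      "Il pleut.": [0.05, 0.25, 0.88]}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pair each English sentence with its closest French sentence.
pairs = {e: max(fr, key=lambda f: cosine(ve, fr[f])) for e, ve in en.items()}
print(pairs["The cat sat."])   # Le chat est assis.
```

Because same-meaning sentences land near each other in the shared space, nearest-neighbor search recovers the translation pairs without any labeled data.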

Pairing languages: The researchers focused on the language combinations they believed would be most commonly requested. They grouped languages according to linguistic, geographic, and cultural similarities, on the assumption that people who live in the same region communicate more often. One group, for example, included the most common languages spoken in India, among them Bengali, Hindi, Tamil, and Urdu. LASER 2.0 then targeted its search for sentence pairs at all the possible language pairs within each group.
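The grouping strategy amounts to enumerating only within-group language pairs instead of every pair across all 100 languages. The sketch below follows the article's India example; the second group and its members are an illustrative guess, not Facebook's actual grouping.

```python
from itertools import combinations

groups = {
    "india": ["bn", "hi", "ta", "ur"],    # Bengali, Hindi, Tamil, Urdu
    "romance": ["es", "fr", "it", "pt"],  # illustrative grouping, not from the article
}

# Enumerate only the within-group pairs that the mining step would target.
targeted = [pair for langs in groups.values() for pair in combinations(langs, 2)]
print(len(targeted))   # 12: six pairs in each group of four
```

Cross-group pairs like Bengali-Spanish never get searched, which keeps the mining budget concentrated on the combinations people are most likely to request.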

Ongoing challenges: Languages spoken in places like Africa and Southeast Asia still suffer from translation quality issues because too little language data is available to be scraped from the web, says Angela Fan, the lead researcher on the project. Given the reliance on web data, the researchers also need to figure out techniques for identifying and eradicating any embedded sexism, racism, and other discriminatory biases. Right now, the researchers have used a profanity filter to clean up some particularly egregious language, but it is mostly limited to English.

Research only: Facebook has no current plans to use the model in its products. M2M-100 is meant for research purposes only, says Fan. Ultimately, however, the goal is for the model to improve on and expand Facebook’s existing translation capabilities. Applications could include user communication (for example, the feature that allows people to translate posts into their native language) and perhaps content moderation.

