
Tiny AI models could supercharge autocorrect and voice assistants on your phone

October 4, 2019
Illustration of a cell phone with a talkative voice assistant (Ms. Tech)

Researchers have successfully shrunk a giant language model to use in commercial applications.

Who’s counting?  In the past year, natural-language models have become dramatically better at the expense of getting dramatically bigger. In October of last year, for example, Google released a model called BERT that passed a long-held reading-comprehension benchmark in the field. The larger version of the model had 340 million parameters, and a single training run consumed enough electricity to power a US household for 50 days.

Four months later, OpenAI topped it with its model GPT-2. The model demonstrated an impressive knack for constructing convincing prose; it also used 1.5 billion parameters. Now, MegatronLM, the latest and largest model from Nvidia, has 8.3 billion parameters. (Yes, things are getting out of hand.)

The big, the bad, the ugly: AI researchers have grown increasingly worried about the consequences of this trend. In June, a group at the University of Massachusetts, Amherst, showed the climate toll of developing and training models at such a large scale. Training BERT, they calculated, emitted nearly as much carbon as a round-trip flight between New York and San Francisco; GPT-2 and MegatronLM, by extrapolation, would likely emit a whole lot more.

The trend could also accelerate the concentration of AI research into the hands of a few tech giants. Under-resourced labs in academia or countries with fewer resources simply don’t have the means to use or develop such computationally expensive models.

Honey, I shrunk the AI: In response, many researchers are focused on shrinking the size of existing models without losing their capabilities. Now two new papers, released within a day of one another, have successfully done that to the smaller version of BERT, with 100 million parameters.

The first paper, from researchers at Huawei, produces a model called TinyBERT that is less than a seventh the size of the original and nearly 10 times faster. It also performs nearly as well in language understanding as the original. The second, from researchers at Google, produces another that is smaller by a factor of more than 60, but its language understanding is slightly worse than the Huawei version.

How they did it: Both papers use variations of a common compression technique known as knowledge distillation. It involves using the large AI model that you want to shrink (the “teacher”) to train a much smaller model (the “student”) in its image. To do so, you feed the same inputs into both and then tweak the student until its outputs match the teacher’s.
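The teacher-student loop above can be sketched in a few lines. The following is a minimal illustration, not the method from either paper: the "teacher" is a fixed linear classifier standing in for a large trained model, the "student" is a much smaller low-rank model, and the temperature `T`, learning rate, and sizes are all assumptions chosen for the toy setup. The student is tweaked by gradient descent until its softened outputs match the teacher's.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T softens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(P, Q):
    """Mean KL divergence between teacher rows P and student rows Q."""
    return float(np.mean(np.sum(P * (np.log(P) - np.log(Q)), axis=-1)))

# "Teacher": a fixed 16-feature, 4-class linear classifier (a stand-in
# for the large pretrained model you want to shrink).
W_teacher = rng.normal(size=(16, 4))

# "Student": a much smaller model -- a rank-2 factorization A @ B
# instead of the teacher's full 16x4 weight matrix.
A = rng.normal(size=(16, 2)) * 0.1
B = rng.normal(size=(2, 4)) * 0.1

T = 2.0                          # temperature for the soft targets
lr = 0.5
X = rng.normal(size=(256, 16))   # the same inputs are fed to both models
P = softmax(X @ W_teacher, T)    # teacher's soft targets

kl_before = kl(P, softmax((X @ A) @ B, T))

for _ in range(500):
    H = X @ A
    Q = softmax(H @ B, T)
    # Gradient of the cross-entropy between teacher and student outputs:
    # this is the "tweak the student until its outputs match" step.
    G = (Q - P) / (T * len(X))
    A -= lr * X.T @ (G @ B.T)
    B -= lr * H.T @ G

kl_after = kl(P, softmax((X @ A) @ B, T))
```

After training, the student's output distribution sits much closer to the teacher's (`kl_after` well below `kl_before`) despite having a fraction of the parameters. Real distillation of BERT-sized models works on the same principle, but matches intermediate representations as well as final outputs and uses far larger networks and datasets.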

Outside of the lab: In addition to improving access to state-of-the-art AI, tiny models will help bring the latest AI advancements to consumer devices. They avoid the need to send consumer data to the cloud, which improves both speed and privacy. For natural-language models specifically, more powerful text prediction and language generation could improve myriad applications like autocomplete on your phone and voice assistants like Alexa and Google Assistant.


