
Google’s auto-complete for speech can cover up glitches in video calls


The news: With many of us now relying on video calls for face-to-face interaction, choppy connections are more frustrating than ever. An artificial intelligence that mimics an individual speaker’s way of talking can smooth over the cracks by filling in small gaps with snippets of generated speech. Developed by a team at Google, the technology is now being used in Duo, Google’s video-calling app.

What’s the problem? When you’re on an online call, your voice gets chopped up into lots of tiny pieces that are zipped across the internet in data blocks known as packets. Packets often arrive at the other end jumbled up, and software has to reorder them. But sometimes packets don’t arrive at all, which creates glitches and gaps in a conversation. This happens even at the best of times: according to Google, 99% of Duo calls have to deal with jumbled-up or lost packets, and a tenth of those calls lose more than 8% of their audio.
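
The bookkeeping behind this is straightforward to sketch. Below is a minimal, hypothetical Python example (not Duo’s or Google’s code) showing how a receiver might use packet sequence numbers to put arriving audio back in order and spot the gaps that lost packets leave behind; the packet format and function name are made up for illustration.

```python
# Minimal sketch: reorder audio packets by sequence number and flag the
# sequence numbers that never arrived (these are the gaps concealment must fill).

def reorder_and_find_gaps(received_packets):
    """received_packets: list of (sequence_number, audio_bytes) tuples,
    possibly out of order and with some sequence numbers missing."""
    by_seq = dict(received_packets)
    if not by_seq:
        return [], []
    first, last = min(by_seq), max(by_seq)
    ordered, gaps = [], []
    for seq in range(first, last + 1):
        if seq in by_seq:
            ordered.append(by_seq[seq])   # packet arrived: play as-is
        else:
            gaps.append(seq)              # packet lost: needs concealment
            ordered.append(None)          # placeholder for generated audio
    return ordered, gaps

# Example: packets 0-4 are sent, packet 2 is lost, packets 3 and 4 arrive swapped.
stream = [(0, b"aa"), (1, b"bb"), (4, b"ee"), (3, b"dd")]
ordered, gaps = reorder_and_find_gaps(stream)
print(gaps)  # [2] -- the missing slice of audio
```

In a real client this logic lives inside a jitter buffer working under a tight latency budget; the sketch only shows the reordering and gap detection.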

Generating speech: To fix the problem, the team built on a neural network developed by DeepMind that can generate realistic speech from text. Called WaveNetEQ, the new neural network was trained on a large dataset of recorded speech from 100 people in 48 different languages until it could auto-complete short sections of speech based on common patterns in the way people talk. Because Duo is end-to-end encrypted, the AI runs on the device rather than in the cloud. During a call, WaveNetEQ learns the characteristics of a speaker’s voice and generates audio snippets that match both the style and the content of what the speaker is saying. When a packet is lost, the AI-generated voice is inserted in its place.
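
To make that flow concrete, here is a minimal, hypothetical concealment loop in Python. WaveNetEQ itself is not public, so the `generative_model` interface, the frame size, and the context length below are assumptions, and the toy stand-in model simply repeats recent audio rather than learning the speaker’s voice.

```python
# Minimal packet-loss-concealment sketch (assumptions, not Google's code):
# lost frames are replaced by audio synthesized from the recent context.

import numpy as np

FRAME_SAMPLES = 480    # assumed frame size, e.g. 10 ms of audio at 48 kHz
CONTEXT_FRAMES = 20    # assumed amount of recent history the model conditions on

def conceal_stream(frames, generative_model):
    """frames: list of audio frames (np.ndarray), with None where a packet was lost.
    Returns one playable array with the lost frames replaced by generated audio."""
    history, output = [], []
    for frame in frames:
        if frame is None:
            if history:
                context = np.concatenate(history[-CONTEXT_FRAMES:])
            else:
                context = np.zeros(FRAME_SAMPLES)
            frame = generative_model(context)[:FRAME_SAMPLES]  # synthesize a fill-in snippet
        output.append(frame)
        history.append(frame)   # generated audio also becomes context for later gaps
    return np.concatenate(output)

def toy_model(context):
    # Stand-in "model": just repeats the tail of its context.
    # A learned model would instead predict a plausible continuation of the speech.
    return context[-FRAME_SAMPLES:]

frames = [np.random.randn(FRAME_SAMPLES), None, np.random.randn(FRAME_SAMPLES)]
audio = conceal_stream(frames, toy_model)
print(audio.shape)  # (1440,) -- three frames, one of them synthesized
```

The one detail the sketch preserves from the description above is that generated audio is fed back into the running context, so each fill-in is conditioned on what the listener has actually just heard.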

For now, the AI can only generate syllables rather than whole words or phrases. But short samples Google posted online show that the results can be pretty lifelike. In one case, the AI replaces the second syllable of the word “trouble” in a voice that mimics the male speaker exactly.
