You can thank DeepMind for the slick new voice that emanates from Google’s Home speaker and Assistant app.

This time last year, Google’s London-based AI division announced a new way to synthesize speech. Its software, called WaveNet, tore up the rule book for generating human-like voices: instead of stitching together chunks of recorded sound, which produces the clunky robotic voices we’re used to, it generated a whole audio waveform from scratch, one sample after the next. The result was far smoother, with more natural intonation than other speech synthesis approaches.
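The core idea is an autoregressive loop: each new audio sample is predicted from all the samples generated so far. Here is a toy sketch of that loop in Python. The predictor below is a stand-in (a simple sine generator), not DeepMind's neural network; real WaveNet conditions a deep convolutional model on the history and samples each value from a predicted distribution.

```python
import math

SAMPLE_RATE = 16_000  # samples per second, a common rate for speech models


def next_sample(history):
    # Toy stand-in for WaveNet's predictor: emit the next point of a
    # 440 Hz sine wave based on how many samples exist so far. A real
    # WaveNet would run a neural net over `history` here.
    t = len(history)
    return 0.5 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)


def generate(num_samples):
    """Build a waveform one sample at a time (the autoregressive loop)."""
    waveform = []
    for _ in range(num_samples):
        waveform.append(next_sample(waveform))  # each sample depends on the past
    return waveform


audio = generate(SAMPLE_RATE)  # one second of audio, sample by sample
```

The loop's sequential dependency, with every sample waiting on the previous one, is exactly why the original WaveNet was so slow to run.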

But there was a hitch: the software took one second to generate 0.02 seconds of audio, making it impractical for use in consumer products. DeepMind said it wouldn’t be used in any of Google’s software for some time, meaning that the clunky old-style voices had to remain.

But over the last 12 months, things have changed. DeepMind now reports that it’s managed to speed up the algorithm by a factor of 1,000, so it can create 20 seconds of audio in one second of compute time. (It does that while actually creating higher-fidelity audio than the old algorithm.) That’s a huge leap, and it has made it possible to run the software on Google’s AI cloud system.
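The factor-of-1,000 claim follows directly from the two throughput figures in the article; working in milliseconds keeps the arithmetic exact:

```python
# Figures from the article, as milliseconds of audio produced
# per second of compute time.
old_ms_per_sec = 20       # original WaveNet: 0.02 s of audio per second
new_ms_per_sec = 20_000   # sped-up version: 20 s of audio per second

speedup = new_ms_per_sec // old_ms_per_sec
print(speedup)  # 1000
```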

In fact, that’s what is now used to create all the speech uttered by Google’s Assistant AI (which, incidentally, now comes in both male and female versions) on phones and smart speakers.

You can hear an example of old-style non-WaveNet speech synthesis here and the same sentence uttered by the new, fast algorithm here. The difference is pretty stark.

Sadly, DeepMind hasn’t yet published details about how it created the ultra-efficient version of WaveNet, but it says it plans to in the near future.