
The AI That Cut Google’s Energy Bill Could Soon Help You

The same type of algorithm that beats humans at complex games is being applied in more practical areas.
July 20, 2016

Google is using a powerful new machine-learning approach to save huge amounts of energy (and hundreds of millions of dollars) each year at its vast data centers.

It might not be long before the technique, in which a machine-learning algorithm gradually learns to master a task through trial and error guided by positive reinforcement, catches on in a range of other areas.

Demis Hassabis, the CEO of Google DeepMind, the company's U.K.-based artificial-intelligence subsidiary, said at a recent conference that Google was using techniques developed by his team to improve the energy efficiency of its data centers. Because Google spends so much on electricity for the buildings that house its massive server farms, a savings of even a few percent amounts to hundreds of millions of dollars per year.

DeepMind, which Google acquired in 2014 for around $600 million, has shown how large artificial neural networks combined with reinforcement learning can train computers to perform complex tasks incredibly well. DeepMind has demonstrated algorithms capable of mastering certain Atari games to a superhuman level, and earlier this year a program called AlphaGo, developed to play the immensely complex and subtle Chinese board game Go, beat one of the best players of all time in a highly publicized match.

It seems Google is now looking for ways to apply these techniques in more practical settings. Hassabis reportedly told the conference that DeepMind developed a reinforcement-learning algorithm that experimented with different data-center configurations (perhaps in simulation), including cooling systems and windows, until it lowered overall power consumption.
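Hassabis didn't spell out how the system works, and the sketch below is not DeepMind's method. It is only a minimal illustration of the trial-and-error loop the article describes: a tabular Q-learning agent adjusting a single cooling setting in an invented, simulated facility whose power draw it tries to minimize. The environment, the toy power model, and every name and number here are hypothetical.

```python
# Illustrative only: a toy Q-learning loop on a made-up, simulated "data center"
# whose power draw depends on a discrete cooling setting. Not DeepMind's system.
import random

COOLING_LEVELS = 5      # discrete cooling configurations the agent can choose among
ACTIONS = [-1, 0, +1]   # lower, keep, or raise the current cooling level

def simulated_power(level, heat_load):
    """Toy power model: too little cooling overheats the servers,
    too much cooling wastes energy running the chillers."""
    overheat_penalty = max(0, heat_load - level) * 4.0
    chiller_cost = level * 1.5
    return 10.0 + chiller_cost + overheat_penalty

q = {}  # Q-table keyed by ((cooling level, heat load), action index)

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy: usually pick the best-known action, sometimes explore."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q.get((state, a), 0.0))

alpha, gamma = 0.1, 0.9  # learning rate and discount factor
level = 2
for step in range(50_000):
    heat_load = random.randint(0, 4)            # server load varies over time
    state = (level, heat_load)
    a = choose_action(state)
    level = min(COOLING_LEVELS - 1, max(0, level + ACTIONS[a]))
    reward = -simulated_power(level, heat_load)  # lower power draw -> higher reward
    next_state = (level, heat_load)
    best_next = max(q.get((next_state, b), 0.0) for b in range(len(ACTIONS)))
    q[(state, a)] = q.get((state, a), 0.0) + alpha * (
        reward + gamma * best_next - q.get((state, a), 0.0))
```

A real system would involve far richer state (temperatures, pump speeds, weather), a neural network in place of the lookup table, and strict safety constraints before any recommendation touched live equipment.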

We’ll probably see Google apply reinforcement learning in lots of areas, including some consumer products. Last week, I saw another DeepMind researcher, David Silver, give a talk in which he said that the company was working toward commercializing the technology behind AlphaGo, too. Most immediately, it is likely to be turned into some sort of virtual personal assistant.

“For years we thought reinforcement learning was this nice pipe dream,” Silver said at the event. “Now it feels like these reinforcement-learning mechanisms really work. We can start to look around now and see many, many domains [where it might be applied].”

(Read more: Bloomberg, "Google’s AI Masters Space Invaders," "Google’s AI Masters the Game of Go a Decade Earlier Than Expected")

