
Here’s What Developers Are Doing with Google’s AI Brain

Researchers outside Google are testing the software that the company uses to add artificial intelligence to many of its products.
December 8, 2015

An artificial intelligence engine that Google uses in many of its products, and that it made freely available last month, is now being used by others to perform some neat tricks, including translating English into Chinese, reading handwritten text, and even generating original artwork.

Jeff Dean speaks at a Google event in 2007.

The AI software, called TensorFlow, provides a straightforward way for users to train computers to perform tasks by feeding them large amounts of data. The software incorporates various methods for efficiently building and training simulated “deep learning” neural networks across different computer hardware.

Deep learning is an extremely effective technique for training computers to recognize patterns in images or audio, enabling machines to perform useful tasks, such as recognizing faces or objects in images, with human-like competence. Recently, deep learning has also shown significant promise for parsing natural language, enabling machines to respond to spoken or written queries in meaningful ways.

Speaking at the Neural Information Processing Systems (NIPS) conference in Montreal this week, Jeff Dean, the computer scientist at Google who leads the TensorFlow effort, said that the software is being used for a growing number of experimental projects outside the company.

These include software that generates captions for images and code that translates the documentation for TensorFlow into Chinese. Another project uses TensorFlow to generate artificial artwork. “It’s still pretty early,” Dean said after the talk. “People are trying to understand what it’s best at.”

TensorFlow grew out of a project at Google, called Google Brain, aimed at applying various kinds of neural network machine learning to products and services across the company. The reach of Google Brain has grown dramatically in recent years. Dean said that the number of projects at Google that involve Google Brain has grown from a handful in early 2014 to more than 600 today.

Most recently, Google Brain helped develop Smart Reply, a system that automatically recommends a quick response to messages in Gmail after it scans the text of an incoming message. The neural network technique used to develop Smart Reply was presented by Google researchers at the NIPS conference last year.

Dean expects deep learning and machine learning to have a similar impact on many other companies. “There is a vast array of ways in which machine learning is influencing lots of different products and industries,” he said. For example, the technique is being tested in many industries that try to make predictions from large amounts of data, ranging from retail to insurance.

Google was able to give away the code for TensorFlow because the data it owns is a far more valuable asset for building a powerful AI engine. The company hopes that the open-source code will help it establish itself as a leader in machine learning and foster relationships with collaborators and future employees. TensorFlow “gives us a common language to speak, in some sense,” Dean said. “We get benefits from having people we hire who have been using TensorFlow. It’s not like it’s completely altruistic.”

A neural network consists of layers of virtual neurons that fire in a cascade in response to input. A network “learns” as the sensitivity of these neurons is tuned to match particular input and output, and having many layers makes it possible to recognize more abstract features, such as a face in a photograph.
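This tuning process can be sketched in a few lines of code. The example below is a conceptual illustration in plain NumPy, not TensorFlow's actual API: a tiny two-layer network whose weights are repeatedly nudged until it reproduces the XOR function, a pattern that no single layer of neurons can represent on its own.

```python
import numpy as np

# Conceptual sketch (plain NumPy, not TensorFlow): a two-layer network
# "learns" XOR as the sensitivity of its neurons (the weights) is tuned.
rng = np.random.default_rng(0)

# Four input/output pairs: the XOR of two bits.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights: input -> hidden, hidden -> output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward():
    h = sigmoid(X @ W1 + b1)        # hidden neurons fire in response to input
    return h, sigmoid(h @ W2 + b2)  # output neuron fires in response to hidden layer

_, out = forward()
initial_loss = float(np.mean((out - y) ** 2))

lr = 0.5
for _ in range(10000):
    h, out = forward()
    # Backpropagation: nudge each weight slightly to reduce the output error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out = forward()
final_loss = float(np.mean((out - y) ** 2))
```

After training, the network's error on the four examples has fallen well below where it started; stacking the second layer on top of the first is what lets it capture a pattern no single layer could.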

TensorFlow is now one of several open-source deep learning software libraries, and its performance currently lags behind some other libraries for certain tasks. However, it is designed to be easy to use, and it can easily be ported between different hardware. And Dean says his team is hard at work trying to improve its performance.

In the race to dominate machine learning and attract the best talent, however, other companies may release competing AI engines of their own.
