
Machine Learning for Everyone

Recent advances are making machine learning useful outside the tech industry, says the leader of the Google Brain research group.
March 28, 2016

A lot of the computational plumbing that powers Google owes something to Jeff Dean. He built early versions of the company’s Web search and ad systems. And he invented MapReduce, a system for working with big data sets that triggered a major shift across the computing industry.

Dean is now laboring to reinvent the inner workings of Google and the wider world all over again. He leads the Google Brain research group, which aims to advance machine learning—the art of making software figure out how to do things for itself instead of being explicitly programmed. Software from Google Brain is now used by more than 600 teams inside Google, often for internal systems invisible to consumers. But in the past year, technology originating in Google Brain has also delivered major upgrades to Google’s Web search, spam filtering, and translation services.

Machine learning has a long history inside Google, where engineers have trained software to show people Web pages relevant to their search queries, select ads related to the content they are viewing, predict which ads people will click on, and pick videos to recommend on YouTube. Google is one of many companies that expanded investment in machine-learning research after software that passes data through networks of simulated neurons produced breakthrough results in speech and image recognition.

Now Dean says that before long, the kind of technology his team builds will come to many other industries besides computing. He met with MIT Technology Review’s Tom Simonite at Google’s headquarters in Mountain View, California.

How has more powerful, easy-to-use machine learning changed the way teams inside Google work on new problems and products?

It’s been a very big change. In the past five years machine learning has dramatically expanded the scope of what is possible using computers, especially in areas like computer vision and language understanding. This naturally leads to great new products and features—for example, the search facilities of Google Photos [where you can search your photos using terms like “dog” or “beach”], or the Gmail Smart Reply capability. But it also enables Google engineers to think more ambitiously about what sorts of problems they might tackle. By way of analogy, five years ago computers couldn’t see very well. Now they can see very well in some circumstances, and so this naturally expands the set of things we believe can be accomplished.

You led development of TensorFlow, software that powers Google’s machine-learning research as well as products like a new Gmail feature that composes replies to e-mails. Now the company is giving it away for free. Why?

Having a common way of expressing machine-learning ideas is really helpful. There’s a lot of potential for machine learning all around the world. We’re seeing it in academia, at other companies, in government.
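To make “a common way of expressing machine-learning ideas” concrete, here is a minimal, hypothetical TensorFlow sketch that defines and trains a tiny classifier. It uses the Keras API bundled with current TensorFlow releases (which postdates this 2016 interview), and the model architecture and synthetic data are illustrative assumptions rather than anything Dean describes.

```python
# Minimal sketch: expressing a small model with TensorFlow's Keras API.
# The architecture and the synthetic data are illustrative placeholders.
import numpy as np
import tensorflow as tf

# Synthetic data: 1,000 examples with 20 features and binary labels.
x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,)).astype("float32")

# A small feed-forward classifier described once; the same definition
# runs on CPU, GPU, or TPU without change.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Train briefly; real uses would substitute real data and more epochs.
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
```

The point of the sketch is not the particular model but that a short, shared description like this can be written, exchanged, and run anywhere TensorFlow runs, which is the kind of common expression Dean is referring to.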

Will every industry end up relying heavily on machine learning?

I think there are a lot of industries that are collecting a lot of data and have not yet considered the implications of machine learning but will ultimately use it. Transportation, with self-driving vehicles, is going to be a big use of machine learning. Health care has a lot of interesting machine-learning problems, such as predicting outpatient outcomes or making predictions from x-ray images. I don’t think there’s just one industry that’s going to be affected; I think there are going to be lots.

Machine learning is going to become a fundamental component of applying computing?

Yeah, absolutely. Enrollment in machine-learning classes in computer science programs is shooting through the roof.

It’s just going to be expected that people have some basic understanding of machine learning, have done a few projects, and [want to] use machine learning.
