
Google’s Research Boss on Turning Exploration into Products

Google’s head of research and machine intelligence says that the company turns breakthroughs in artificial intelligence into products faster than ever.
November 23, 2015

In April 1998 two Stanford grad students in computer science published a paper describing a new way to rank hypertext documents. The company they incorporated that September is now worth $480 billion and employs 60,000 people.

John Giannandrea

Google, as they called it, is now just one division inside its recently created corporate parent, Alphabet. Its combined operations spent almost $10 billion on research and development last year, 15 percent of the company’s revenue, and that outlay is growing. Striking projects like self-driving cars and the wearable computer Google Glass attract the most attention, and some investors have complained that the company is spending too liberally on things that won’t pay off soon. But Alphabet’s largest research team, inside the original Google, shows that returns from these investments in research are accelerating, says John Giannandrea, the VP of engineering overseeing research and machine intelligence. In this edited conversation, he tells MIT Technology Review San Francisco bureau chief Tom Simonite that breakthroughs from lower-profile efforts are rapidly turning into important new products.

What research areas are most important for Google to invest in?

Our priorities are essentially all the grand challenges of computing science. For example, language understanding, teaching computers to read, to translate from one language to another, to be perfect at understanding speech in noisy environments or with different accents. We need to work on all these unsolved problems to make the computing products we want to build as a company.

How are the people working on that organized? Are they separate from product groups?

It’s a very fluid boundary between research and product. We have a large [research] group, which has subgroups with specializations; we might have a subgroup for handwriting recognition or speech recognition. The researchers work hand in glove with the product teams that are using their technology.

We take leading-edge breakthroughs and put them into products as soon as possible. A good example is the Google Photos app that we launched this summer. You can search your personal photos by typing, say, “dachshund” and it will find a picture of a dachshund if you have one. That’s based on an advanced computer-vision algorithm that was published at a research conference early this year and went into a product a few months later.
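To make that concrete, here is a minimal sketch of label-based photo search written against a generic pretrained classifier (torchvision's off-the-shelf ResNet-50 and its ImageNet labels). The search_photos function, its threshold, and the directory layout are illustrative assumptions for this sketch; it is not the algorithm Google shipped in Photos.

# Minimal sketch: search a folder of photos by label using a generic
# pretrained ImageNet classifier. Illustrative only; not Google's system.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

# Load a pretrained classifier, its preprocessing, and its label names.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]  # the 1,000 ImageNet class names

def search_photos(photo_dir, query, threshold=0.5):
    """Return (path, confidence) pairs whose top label matches the query."""
    matches = []
    for path in Path(photo_dir).glob("*.jpg"):
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = model(image).softmax(dim=1)[0]
        score, idx = probs.max(dim=0)
        if query.lower() in labels[int(idx)].lower() and score >= threshold:
            matches.append((path, float(score)))
    return sorted(matches, key=lambda m: -m[1])

# e.g. search_photos("photos", "dachshund") ranks matching JPEGs
# in a photos/ directory by classifier confidence.

A classifier like this can only match its fixed label set; a real product would need a far richer vocabulary than ImageNet's 1,000 classes.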

“Three or four years ago the results from deep learning became quite extraordinary … We made a deliberate effort to invest a lot in deep learning and attract people in that field.”

Your group does research for Google, which is now just one company inside Alphabet, alongside others such as the home automation company Nest and the X lab, which works on projects including self-driving cars. Are there things you will not work on because they’re being done elsewhere within Alphabet?

No. We have by far the largest group of people within Alphabet who are working on things which are not quite products yet. We continue to invest in leading-edge computer science, and it’s a very core part of what we are as a company. There have always been lots and lots of groups all over Google doing crazy stuff. Alphabet is to some extent intended to accelerate that by enabling entire subcompanies to go faster in things like life sciences and self-driving cars, not just the Google.com products.

Google publishes a lot of research papers on artificial intelligence and machine learning. A relatively new field within that, called deep learning, seems to be a big focus. And you acquired the startup DeepMind.

You can sort of tell our level of investment by our publishing. If we’re doing good work, we always publish, because we’re proud of the achievement—and our competitors do the same thing. Google’s always used machine learning in products. Three or four years ago the results from deep learning became quite extraordinary. In all the areas we applied it to, initially speech recognition but then image understanding and then eventually language understanding, we saw tremendous improvements.

We made a deliberate effort to invest a lot in deep learning and attract people who were interested in that field. I would say deep learning, and machine learning generally, is something that we’ve been prioritizing at Google in the last few years.

Where can this deep-learning work go next? Will you look to apply it to new areas such as understanding natural language?

We don’t view these things as silos. It’s not like text analysis over here, speech analysis over here, image analysis over here. We do one thing we call machine intelligence. With the search engine, we’ve built a product that can answer lots of questions, but we still need to invest further in really understanding what it is you are saying and asking. We’ve published results where we’ve done image analysis and language analysis at the same time, for example. You could even view robotics as part of this. For a robot to function in the world, it would be really useful if it could see. Google has quite a few people working on robotic control.
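As a rough illustration of doing image analysis and language analysis in one model, here is a toy encoder-decoder captioner in PyTorch, in the spirit of the joint image-and-language results Giannandrea alludes to. Every name, layer size, and the vocabulary here are invented for the sketch; it is deliberately tiny and is not the published model.

# Toy encoder-decoder captioner: a CNN summarizes the image into a
# vector, which seeds an LSTM that emits a word distribution per step.
# All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=256, hidden_dim=256):
        super().__init__()
        # Image encoder: small CNN -> single feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hidden_dim),
        )
        # Language decoder: LSTM conditioned on the image vector.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        h0 = self.encoder(images).unsqueeze(0)  # (1, batch, hidden)
        c0 = torch.zeros_like(h0)
        seq, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.out(seq)  # word logits for each caption position

# Smoke test on random data: 2 images, 5-token captions.
model = TinyCaptioner()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 1000])

The design point is simply that one network consumes pixels and emits words, which is what dissolves the "silo" boundary Giannandrea describes.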

You said the pace of discovery has increased. Does this mean you get a quicker return on your investment in research, because it takes less time for a breakthrough to become a product that can generate revenue?

I think the cycle time between a paper being published and something being in a product is probably shorter now than it historically has been. I think the velocity has been enabled by the Internet [and] by the startup culture. It’s easier to start a startup now than it ever was, because you can do stuff in the cloud; you could build a deep-learning startup today with almost no money. I think that the velocity of innovation is just faster than it’s ever been.
