
Google Tries to Make Machine Learning a Little More Human

As Google puts machine-learning software into more products, it must train it to behave more as humans expect.
November 5, 2015

Google CEO Sundar Pichai told investors last month that advances in machine-learning technology would soon have an impact on every product or service the company works on. “We are rethinking everything we are doing,” he said.

Part of that push to make its services smarter involves rethinking the way it’s employing machine learning, which enables computers to learn on their own from data. In short, Google is working to teach those systems to be a little more human.

Google discussed some of those efforts at a briefing Tuesday at its headquarters in Mountain View, California. “We’re at the Commander Data stage,” staff research engineer Pete Warden said in a reference to the emotionless android in the television show Star Trek: The Next Generation. “But we’re trying to get a bit more Counselor Troi into the system”—the starship Enterprise’s empathetic counselor.

Warden works on the team that developed Google Photos, which lets you search for things like “beach” or “dog” in your snaps. The underlying technology emerged from a long research effort into enabling software to identify objects in photos. But Warden and his coworkers discovered that just being able to spot, say, children, eggs, or baskets wasn’t enough. People wanted to search for “Easter egg hunt.” Likewise, the system needed to be trained to understand that photos with a turkey and plates taken in late November should be associated with “Thanksgiving.”
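The idea of composing low-level object labels into a higher-level concept like “Easter egg hunt” can be sketched as follows. This is a hypothetical illustration, not Google Photos’ actual pipeline: the real system learns such associations from data, whereas the rules and label names here (`EVENT_RULES`, `infer_events`) are hand-written stand-ins for the sake of the example.

```python
# Hypothetical sketch: combine detected object labels into event concepts.
# The rules below are invented for illustration; a production system would
# learn these associations from training data rather than hard-code them.
EVENT_RULES = {
    "easter egg hunt": {"children", "eggs", "basket"},
    "thanksgiving": {"turkey", "plates"},
}

def infer_events(labels):
    """Return the events whose required labels all appear in a photo."""
    found = set(labels)
    return [event for event, required in EVENT_RULES.items()
            if required <= found]

# A photo tagged with children, eggs, and a basket matches "easter egg hunt".
print(infer_events(["children", "eggs", "basket", "grass"]))
```

Even a toy version makes the gap clear: recognizing each object is a separate problem from knowing which combinations of objects mean something to a person searching their photos.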

Another Google project, nicknamed GlassBox, is trying to stop software that learns from a limited sample of data from making what look to humans like simple, dumb mistakes. Headed by senior staff research scientist Maya Gupta, it aims to give the software something of the common sense that enables humans to discount misleading examples.

For example, a person shown a few examples of houses and their associated prices could see immediately that larger houses generally cost more—even if there was one outlier, such as a tiny house offered for $1.8 million in the expensive city of Palo Alto, California. But that same outlier might cause a machine-learning system looking for a relationship in the same sample of data to attribute high prices to another factor, such as house color. Gupta has developed mathematical methods to smooth out the influence of such outliers that can trip up a machine-learning system. “We’re trying to put back as much of the human knowledge as we can,” Gupta told MIT Technology Review.
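The house-price example can be made concrete with a small sketch. This is not Gupta’s actual method (her published work uses monotonic lattice models and related techniques); it simply contrasts a plain least-squares fit, which a single outlier can pull off course, with a median-based robust estimate that discounts it. The data points are invented for illustration.

```python
# Illustrative data: (size in sq ft, price in $1000s).
# The last entry is the outlier: a tiny house listed at $1.8 million.
data = [(1000, 500), (1500, 750), (2000, 1000), (2500, 1250), (300, 1800)]

def ols_slope(pairs):
    """Ordinary least-squares slope through the origin: price ~ slope * size."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs)
    return num / den

def median_slope(pairs):
    """Robust slope estimate: the median of per-house price/size ratios."""
    ratios = sorted(y / x for x, y in pairs)
    n = len(ratios)
    mid = n // 2
    return ratios[mid] if n % 2 else (ratios[mid - 1] + ratios[mid]) / 2

clean = data[:-1]
print(ols_slope(clean))    # 0.5, i.e. $500 per square foot
print(ols_slope(data))     # pulled upward by the single outlier
print(median_slope(data))  # still 0.5: the outlier's ratio is discounted
```

The least-squares slope shifts as soon as the outlier enters the sample, while the median of the ratios does not move. That, in miniature, is the kind of human-like common sense Gupta’s methods aim to build in mathematically.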

Google has increased its investment in machine-learning research in recent years, after the emergence of a technology known as deep learning, which uses networks of roughly simulated neurons (see “10 Breakthrough Technologies 2013: Deep Learning”). It has produced striking improvements in speech recognition and image recognition. Facebook, Google, IBM, Microsoft, and Baidu are all investigating how deep learning can enable machines to understand language, and perhaps even converse with us (see “Teaching Machines to Understand Us”).

In the past week, Google has confirmed that its core search service is now processing a large portion of queries using a new deep-learning-driven system called RankBrain. And on Tuesday it debuted a service called Smart Reply that uses machine learning to automatically offer several short choices of responses to e-mail messages.

Greg Corrado, a senior research scientist and cofounder of Google’s deep-learning team, says the e-mail-writing software is just an early example of how machine learning is now creating completely new products, not just enhancing existing ones, such as spam filtering or search.
