
MIT Technology Review



Frustration with waiting for computers to learn things inspired a better approach.

Growing up in rural Vietnam, Quoc Le didn’t have electricity at home. But he lived near a library, where he read compulsively about great inventions and dreamed of adding to the list. He decided around age 14 that humanity would be helped most by a machine smart enough to be an inventor in its own right—an idea that remains only a dream. But it set Le on a path toward pioneering an approach to artificial intelligence that could let software understand the world more the way humans do.

That technology sprang from the frustration Le felt at the Australian National University and then as a PhD candidate at Stanford as he learned about the state of machine intelligence. So-called machine learning software often needed a lot of assistance from humans. People had to annotate data—for example, by labeling photos with and without faces—before software could learn from it. Then they had to tell the software what features in the data it should pay attention to, such as the shapes characteristic of noses. That kind of painstaking work didn’t appeal to Le. Although personable with other humans, he is uncompromising in his expectations for machines. “I’m a guy without a lot of patience,” he says with a laugh.
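The annotation burden described above is easy to see in miniature. The sketch below, a hypothetical toy example (not Le’s actual work), trains a single-neuron classifier in plain Python: notice that a person had to supply both the labels and the hand-chosen feature values before any learning could happen.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a single-neuron classifier on hand-labeled data.

    Every (sample, label) pair had to be prepared by a person --
    exactly the annotation work described in the article.
    """
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # nonzero only when the guess was wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "face / no face" data: each number is a feature a human chose to
# measure (say, a nose-shape score and a symmetry score), and each label
# was assigned by a human annotator.
samples = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)
```

The point of the toy is the scaffolding around the algorithm, not the algorithm itself: the features and labels are the expensive, human-supplied part.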

While at Stanford, Le worked out a strategy that would let software learn things itself. Academics had begun to report promising but very slow results with a method known as deep learning, which uses networks of simulated neurons. Le saw how to speed it up significantly: by building simulated neural networks 100 times larger, capable of processing thousands of times more data. The approach was practical enough to attract the attention of Google, which hired him to test it under the guidance of the AI researcher Andrew Ng (see “A Chinese Internet Giant Starts to Dream”).

When the results became public in 2012, they sparked a race at Facebook, Microsoft, and other companies to invest in deep-learning research. Without any human guidance, Le’s system had learned how to detect cats, people, and over 3,000 other objects just by ingesting 10 million images from YouTube videos. It proved that machines could learn without labored assistance from humans, and reach new levels of accuracy to boot.
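The key idea behind learning without labels can be illustrated with an autoencoder, one standard form of unsupervised learning (a minimal sketch, not the actual Google system, which was vastly larger): the network is asked to reconstruct its own input through a narrow hidden layer, so the training signal is the data itself and no human annotation is needed.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_autoencoder(data, hidden=2, epochs=500, lr=0.5, seed=0):
    """Learn a compressed code for unlabeled vectors by reconstructing them.

    There are no labels anywhere: the target for each input is the
    input itself, so the network must discover structure on its own.
    """
    rng = random.Random(seed)
    n = len(data[0])
    # Encoder weights W (hidden x n) and decoder weights V (n x hidden).
    W = [[rng.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(hidden)]
    V = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(n)]

    def reconstruct(x):
        h = [sigmoid(sum(W[j][i] * x[i] for i in range(n))) for j in range(hidden)]
        y = [sigmoid(sum(V[i][j] * h[j] for j in range(hidden))) for i in range(n)]
        return h, y

    def error(x):
        _, y = reconstruct(x)
        return sum((yi - xi) ** 2 for yi, xi in zip(y, x))

    start = sum(error(x) for x in data)
    for _ in range(epochs):
        for x in data:
            h, y = reconstruct(x)
            # Backpropagate the reconstruction error through both layers.
            dy = [2 * (y[i] - x[i]) * y[i] * (1 - y[i]) for i in range(n)]
            dh = [sum(dy[i] * V[i][j] for i in range(n)) * h[j] * (1 - h[j])
                  for j in range(hidden)]
            for i in range(n):
                for j in range(hidden):
                    V[i][j] -= lr * dy[i] * h[j]
            for j in range(hidden):
                for i in range(n):
                    W[j][i] -= lr * dh[j] * x[i]
    end = sum(error(x) for x in data)
    return start, end

# Unlabeled "images" (tiny 4-pixel vectors): no one has told the network
# what any of them contain.
data = [[1, 0, 0, 0], [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0]]
start, end = train_autoencoder(data)
```

Scaled up by many orders of magnitude, with far larger networks and far more data, this self-supervised setup is the family of techniques the experiment relied on.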

The technique is now used in Google’s image search and speech-recognition software. The ultra-intelligent machine Le once imagined remains distant. But seeing his ideas make software smart enough to assist people in their everyday lives feels pretty good.

Tom Simonite


Credit: Illustration by Lynne Carty

Tagged: Computing, EmTech2014, EmTech Digital 2015 News

