More efficient machine learning could upend the AI paradigm

Smaller algorithms that don’t need mountains of data to train are coming.
February 2, 2018
Yaopai

In January, Google launched a new service called Cloud AutoML, which can automate some tricky aspects of designing machine-learning software. While working on this project, the company’s researchers sometimes needed to run as many as 800 graphics chips in unison to train their powerful algorithms.

Unlike humans, who can recognize coffee cups from seeing one or two examples, AI networks based on simulated neurons need to see tens of thousands of examples in order to identify an object. Imagine trying to learn to recognize every item in your environment that way, and you begin to understand why AI software requires so much computing power.

If researchers could design neural networks that could be trained to do certain tasks using only a handful of examples, it would “upend the whole paradigm,” Charles Bergan, vice president of engineering at Qualcomm, told the crowd at MIT Technology Review’s EmTech China conference earlier this week.

If neural networks were to become capable of “one-shot learning,” Bergan said, the cumbersome process of feeding reams of data into algorithms to train them would be rendered obsolete. This could have serious consequences for the hardware industry, as both existing tech giants and startups are currently focused on developing more powerful processors designed to run today’s data-intensive AI algorithms.

It would also mean vastly more efficient machine learning. While neural networks that can be trained using small data sets are not a reality yet, research is already being done on making algorithms smaller without losing accuracy, Bill Dally, chief scientist at Nvidia, said at the conference.

Nvidia researchers use a process called network pruning to make a neural network smaller and more efficient to run by removing the neurons that do not contribute directly to output. “There are ways of training that can reduce the complexity of training by huge amounts,” Dally said.
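The article doesn’t detail Nvidia’s specific method, but the general idea of pruning can be illustrated with a minimal sketch. The example below assumes the simplest common variant, magnitude-based pruning: connections (weights) whose absolute value is small are treated as contributing little to the output and are zeroed out. The function name and parameters here are hypothetical, for illustration only.

```python
import numpy as np

def prune_by_magnitude(weights, fraction=0.5):
    """Zero out the smallest-magnitude weights in a layer.

    A simple illustration of magnitude-based pruning: weights whose
    absolute value falls below a percentile threshold are removed
    (set to zero), shrinking the effective size of the network.
    Returns the pruned weights and the boolean mask of kept entries.
    """
    threshold = np.percentile(np.abs(weights), fraction * 100)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Example: prune half the connections in a small weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = prune_by_magnitude(w, fraction=0.5)
print(f"kept {mask.sum()} of {mask.size} weights")
```

In practice, pruning is usually interleaved with retraining (prune, fine-tune, repeat) so the remaining weights can compensate for the removed ones, which is how accuracy is preserved while the network shrinks.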
