
    [Photo: Charles Bergan, Qualcomm's VP of engineering, speaking at EmTech China. Credit: Yaopai]

    More efficient machine learning could upend the AI paradigm

    Smaller algorithms that don’t need mountains of data to train are coming.

    In January, Google launched a new service called Cloud AutoML, which can automate some tricky aspects of designing machine-learning software. While working on this project, the company’s researchers sometimes needed to run as many as 800 graphics chips in unison to train their powerful algorithms.

    Unlike humans, who can recognize coffee cups from seeing one or two examples, AI networks based on simulated neurons need to see tens of thousands of examples in order to identify an object. Imagine trying to learn to recognize every item in your environment that way, and you begin to understand why AI software requires so much computing power.

    If researchers could design neural networks that could be trained to do certain tasks using only a handful of examples, it would “upend the whole paradigm,” Charles Bergan, vice president of engineering at Qualcomm, told the crowd at MIT Technology Review’s EmTech China conference earlier this week.

    If neural networks were to become capable of “one-shot learning,” Bergan said, the cumbersome process of feeding reams of data into algorithms to train them would be rendered obsolete. This could have serious consequences for the hardware industry, as both existing tech giants and startups are currently focused on developing more powerful processors designed to run today’s data-intensive AI algorithms.
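    As a rough, hypothetical illustration of what this could look like, one common family of few-shot approaches classifies a new input by comparing it with the handful of labeled examples in a learned embedding space. The Python sketch below uses a random projection as a stand-in for a learned embedding; every name, shape, and number in it is an illustrative assumption, not a description of any system Bergan discussed.

    ```python
    import numpy as np

    # Toy one-shot classifier: nearest neighbor in an embedding space.
    # The embedding here is a random projection standing in for a learned
    # one (real systems train it, e.g. with metric-learning networks).
    rng = np.random.default_rng(0)
    P = rng.normal(size=(256, 32))          # stand-in "learned" embedding

    def embed(x):
        return x @ P                        # project raw features to 32-d

    # One labeled example per class -- the "one shot":
    support_x = rng.normal(size=(5, 256))   # 5 classes, 1 example each
    support_y = np.arange(5)

    def classify(query):
        # Compare the query to each support embedding; nearest one wins.
        d = np.linalg.norm(embed(support_x) - embed(query), axis=1)
        return support_y[np.argmin(d)]

    print(classify(support_x[2]))           # recovers class 2
    ```

    Because classification is just a nearest-neighbor lookup, adding a new class requires a single labeled example and no retraining, which is what makes the approach so different from today's data-hungry pipelines.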

    It would also mean vastly more efficient machine learning. While neural networks that can be trained using small data sets are not a reality yet, research is already being done on making algorithms smaller without losing accuracy, Bill Dally, chief scientist at Nvidia, said at the conference.

    Nvidia researchers use a process called network pruning to make a neural network smaller and more efficient to run by removing the neurons that do not contribute directly to its output. “There are ways of training that can reduce the complexity of training by huge amounts,” Dally said.
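    To make the idea concrete, here is a minimal Python sketch of magnitude-based neuron pruning, one generic way to remove low-contribution neurons. It is an illustrative assumption throughout, not Nvidia's actual procedure; real pipelines typically prune and then fine-tune to recover any lost accuracy.

    ```python
    import numpy as np

    # Toy two-layer network: input -> hidden (ReLU) -> output.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(64, 128))   # input_dim=64, hidden=128
    W2 = rng.normal(size=(128, 10))   # hidden=128, output=10

    def forward(x, W1, W2):
        h = np.maximum(x @ W1, 0.0)   # ReLU hidden activations
        return h @ W2

    # Score each hidden neuron by the magnitude of its outgoing weights:
    # neurons whose outputs barely reach the next layer contribute little.
    scores = np.linalg.norm(W2, axis=1)
    keep = scores > np.percentile(scores, 50)   # drop the weakest half

    # Pruning shrinks both weight matrices, so the network is smaller
    # and cheaper to run, as Dally describes.
    W1_pruned, W2_pruned = W1[:, keep], W2[keep, :]

    x = rng.normal(size=(1, 64))
    print(forward(x, W1, W2).shape, forward(x, W1_pruned, W2_pruned).shape)
    ```

    The 50 percent threshold here is arbitrary; in practice, how aggressively to prune is tuned against the accuracy loss the application can tolerate.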
