
    AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks

    The best defense against malicious AI is AI.

    A new competition heralds what is likely to become the future of cybersecurity and cyberwarfare, with offensive and defensive AI algorithms doing battle.

    The contest, which will play out over the next five months, is run by Kaggle, a platform for data science competitions. It will pit researchers’ algorithms against one another in attempts to confuse and trick each other, the hope being that this combat will yield insights into how to harden machine-learning systems against future attacks.

    “It’s a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled,” says Jeff Clune, an assistant professor at the University of Wyoming who studies the limits of machine learning.

    The contest will have three components. One challenge will involve simply trying to confuse a machine-learning system so that it doesn’t work properly. Another will involve trying to force a system to classify something incorrectly. And a third will involve developing the most robust defenses. The results will be presented at a major AI conference later this year.
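
    To make the mechanics concrete, here is a hypothetical sketch of how such a contest could score a submitted attack against a submitted defense. This is only an illustration of the general setup, not Kaggle's actual evaluation code; the `attack` and `defense` callables, the epsilon budget, and the scoring rule are all assumptions.

```python
# Hypothetical scoring harness -- an illustration, not Kaggle's actual code.
# An attack is a function that returns a perturbed image within an epsilon
# budget; a defense is a function that maps an image to a predicted label.
import numpy as np

def score(attack, defense, images, labels, eps=0.05):
    """Return the fraction of test images on which the attack fools the defense."""
    fooled = 0
    for image, label in zip(images, labels):
        adv = np.clip(attack(image, label, eps), 0.0, 1.0)
        # Perturbations outside the agreed budget are disqualified.
        if np.max(np.abs(adv - image)) > eps + 1e-6:
            continue
        if defense(adv) != label:
            fooled += 1  # a point for the attack, a miss for the defense
    return fooled / len(images)
```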

    Machine learning, and deep learning in particular, is rapidly becoming an indispensable tool in many industries. The technology involves feeding data into a special kind of computer program, specifying a particular outcome, and having a machine develop its own algorithm to achieve the outcome. Deep learning does this by tweaking the parameters of a huge, interconnected web of mathematically simulated neurons.
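
    As a toy illustration of that loop, the sketch below trains a tiny web of simulated neurons to reproduce a simple rule. Every name here is hypothetical, and PyTorch is used only for brevity; the point is to show where the parameter-tweaking happens.

```python
# Toy illustration of the training loop described above; all names are
# hypothetical. A tiny "web" of simulated neurons learns XOR from examples.
import torch
import torch.nn as nn

# The data we feed in, and the outcome we specify for each input.
inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = torch.tensor([[0.], [1.], [1.], [0.]])

# Two layers of simulated neurons with randomly initialized parameters.
model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)

for step in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()    # measure how each parameter contributed to the error...
    optimizer.step()   # ...and tweak the parameters to reduce it

print(model(inputs).detach().round())  # approaches [[0], [1], [1], [0]]
```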

    Sign up for The Download
    What's important in technology and innovation, delivered to you every day.
    Manage your newsletter preferences

    It’s long been known that machine-learning systems can be tricked. Spammers can, for instance, evade modern spam filters by figuring out what patterns the filter’s algorithm has been trained to identify.
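
    A toy model of that cat-and-mouse game, entirely hypothetical (real filters are far more sophisticated): if a spammer can probe which tokens a simple keyword-weighted filter penalizes, they can rewrite a message to avoid those tokens.

```python
# Hypothetical toy spam filter: it scores a message by summing per-word
# weights, which a spammer could infer by probing it with test messages.
SPAM_WEIGHTS = {"free": 2.0, "winner": 3.0, "claim": 1.5, "prize": 2.5}
THRESHOLD = 3.0

def spam_score(message: str) -> float:
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in message.lower().split())

original = "You are a winner claim your free prize"
evasive = "You are a w1nner cla1m your fr-ee pr1ze"  # obfuscated trigger words

print(spam_score(original) > THRESHOLD)  # True  -- flagged as spam
print(spam_score(evasive) > THRESHOLD)   # False -- slips past the filter
```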

    In recent years, however, researchers have shown that even the smartest algorithms can sometimes be misled in surprising ways. For example, deep-learning algorithms with near-human skill at recognizing objects in images can be fooled by seemingly abstract or random images that exploit the low-level patterns these algorithms look for (see “The Dark Secret at the Heart of AI”).
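
    One simple, well-studied attack of this kind is the fast gradient sign method, which Goodfellow helped develop. The sketch below shows the idea, assuming a recent version of torchvision and an input tensor already preprocessed for the model; the choice of ResNet-18 and the epsilon value are just illustrative.

```python
# Fast gradient sign method, sketched against a standard pretrained
# classifier. `image` is assumed to be a preprocessed 1x3x224x224 tensor;
# epsilon controls how visible the perturbation is.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def fool(image: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    image = image.clone().requires_grad_(True)
    logits = model(image)
    label = logits.argmax(dim=1)                # the model's own prediction
    F.cross_entropy(logits, label).backward()
    # Nudge every pixel slightly in the direction that increases the loss:
    # to a person the change is imperceptible, but the prediction often flips.
    return (image + epsilon * image.grad.sign()).detach()
```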

    “Adversarial machine learning is more difficult to study than conventional machine learning—it’s hard to tell if your attack is strong or if your defense is actually weak,” says Ian Goodfellow, a researcher at Google Brain, a division of Google dedicated to researching and applying machine learning, who organized the contest.

    As machine learning becomes pervasive, the fear is that such attacks could be used for profit or pure mischief. Hackers might, for instance, evade security measures in order to install malware.

    “Computer security is definitely moving toward machine learning,” Goodfellow says. “The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend.”

    In theory, criminals might also bamboozle voice- and face-recognition systems, or even put up posters to fool the vision systems in self-driving cars, causing them to crash.

    Kaggle has become an invaluable breeding ground for algorithm development and a hotbed for talented data scientists. The company was acquired by Google in March and is now part of the Google Cloud platform. Goodfellow and another Google Brain researcher, Alexey Kurakin, submitted the idea for the challenge before the acquisition.

    Benjamin Hamner, Kaggle’s cofounder and CTO, says he hopes the contest will draw attention to a looming problem. “As machine learning becomes more widely used, understanding the issues and risks from adversarial learning becomes increasingly important,” he says.

    The benefits of the open contest outweigh any risks associated with publicizing new kinds of attacks, he adds: “We believe that this research is best created and shared openly, instead of behind closed doors.”

    Clune, meanwhile, says he is keen for the contest to test algorithms that supposedly can withstand attack. “My money is on the networks continuing to be fooled for the foreseeable future,” he says.
