
    Intelligent Machines

    AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks

    The best defense against malicious AI is AI.

    A new competition heralds what is likely to become the future of cybersecurity and cyberwarfare: offensive and defensive AI algorithms doing battle.

    The contest, which will play out over the next five months, is run by Kaggle, a platform for data science competitions. It will pit researchers’ algorithms against one another in attempts to confuse and trick each other, the hope being that this combat will yield insights into how to harden machine-learning systems against future attacks.

    “It’s a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled,” says Jeff Clune, an assistant professor at the University of Wyoming who studies the limits of machine learning.

    The contest will have three components. One challenge will involve simply trying to confuse a machine-learning system so that it doesn’t work properly. Another will involve trying to force a system to classify something incorrectly. And a third will involve developing the most robust defenses. The results will be presented at a major AI conference later this year.
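The three tracks can be sketched schematically with toy stand-ins (everything here is hypothetical: the real contest uses image classifiers and size-bounded pixel perturbations, not the one-dimensional "inputs" below):

```python
import numpy as np

def defense(x):
    """Toy classifier: class 1 if the input's mean is positive."""
    return int(x.mean() > 0)

def non_targeted_attack(x, eps):
    """Push the input away from its current label, within +/- eps."""
    return x - eps if defense(x) == 1 else x + eps

def targeted_attack(x, target, eps):
    """Push the input toward a chosen label, within +/- eps."""
    return x + eps if target == 1 else x - eps

x = np.full(8, 0.5)                   # an input whose true class is 1
assert defense(x) == 1                # correct before any attack
assert defense(non_targeted_attack(x, eps=1.0)) == 0   # attack succeeds
assert defense(targeted_attack(x, target=0, eps=1.0)) == 0
```

A robust-defense entry would aim to keep `defense` correct even on the perturbed inputs the two attack tracks produce.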

    Machine learning, and deep learning in particular, is rapidly becoming an indispensable tool in many industries. The technology involves feeding data into a special kind of computer program, specifying a particular outcome, and having a machine develop its own algorithm to achieve the outcome. Deep learning does this by tweaking the parameters of a huge, interconnected web of mathematically simulated neurons.
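That description can be made concrete with a minimal sketch: a single simulated neuron is given data and a desired outcome, and tunes its own two parameters by gradient descent (the task and all names are invented for illustration; real deep networks tune millions of parameters the same way):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 1))   # the data fed in
y = (X[:, 0] > 0).astype(float)         # the specified outcome

w, b = 0.0, 0.0                         # the neuron's tunable parameters
for _ in range(2000):
    z = X[:, 0] * w + b
    p = 1.0 / (1.0 + np.exp(-z))        # the neuron's current output
    grad_w = np.mean((p - y) * X[:, 0]) # gradient of the training loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                   # "tweak the parameters"
    b -= 0.5 * grad_b

p = 1.0 / (1.0 + np.exp(-(X[:, 0] * w + b)))
accuracy = np.mean((p > 0.5) == (y == 1))
print(round(accuracy, 2))
```

No rule for separating positives from negatives was hand-coded; the program arrived at it by repeatedly nudging `w` and `b`.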



    It’s long been known that machine-learning systems can be tricked. Spammers can, for instance, evade modern spam filters by figuring out what patterns the filter’s algorithm has been trained to identify.
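That cat-and-mouse game can be sketched in a few lines (the filter here is a hypothetical keyword matcher standing in for a trained model):

```python
# Token patterns the "filter" has learned to associate with spam.
SPAM_TOKENS = {"winner", "free", "claim"}

def is_spam(message):
    """Flag a message if any learned spam token appears in it."""
    return any(tok in message.lower().split() for tok in SPAM_TOKENS)

print(is_spam("You are a winner claim your free prize"))   # True
print(is_spam("You are a w1nner cla1m your fr3e prize"))   # False
```

The second message means the same thing to a human, but because none of the exact patterns the filter learned are present, it slips through; that is the spammer's move in the game.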

    In recent years, however, researchers have shown that even the smartest algorithms can sometimes be misled in surprising ways. For example, deep-learning algorithms with near-human skill at recognizing objects in images can be fooled by seemingly abstract or random images that exploit the low-level patterns these algorithms look for (see “The Dark Secret at the Heart of AI”).
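The flavor of such an attack can be sketched against a toy model, in the spirit of the "fast gradient sign" technique from the adversarial-examples literature (the classifier and all numbers here are invented for illustration):

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])      # toy linear classifier's weights
b = 0.0

def predict(x):
    return int(x @ w + b > 0)       # class 1 if the score is positive

x = np.array([0.9, 0.2, 0.4])       # correctly classified as class 1
assert predict(x) == 1

# For a linear model the gradient of the score w.r.t. the input is w.
# Nudge every component slightly in the direction that lowers the score.
eps = 0.4
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # prints: 1 0 -- the label flips
```

Each component moved by at most 0.4, yet the model's answer changed; deep networks exhibit the same failure with perturbations far too small for a human to notice.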

    “Adversarial machine learning is more difficult to study than conventional machine learning—it’s hard to tell if your attack is strong or if your defense is actually weak,” says Ian Goodfellow, a researcher at Google Brain, a division of Google dedicated to researching and applying machine learning, who organized the contest.

    As machine learning becomes pervasive, the fear is that such attacks could be used for profit or pure mischief. Hackers could, for instance, use them to evade security measures and install malware.

    “Computer security is definitely moving toward machine learning,” Goodfellow says. “The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend.”

    In theory, criminals might also bamboozle voice- and face-recognition systems, or even put up posters to fool the vision systems in self-driving cars, causing them to crash.

    Kaggle has become an invaluable breeding ground for algorithm development, and a hotbed for talented data scientists. The company was acquired by Google in March and is now part of the Google Cloud platform. Goodfellow and another Google Brain researcher, Alexey Kurakin, submitted the idea for the challenge before the acquisition.

    Benjamin Hamner, Kaggle’s cofounder and CTO, says he hopes the contest will draw attention to a looming problem. “As machine learning becomes more widely used, understanding the issues and risks from adversarial learning becomes increasingly important,” he says.

    The benefits of the open contest outweigh any risks associated with publicizing new kinds of attacks, he adds: “We believe that this research is best created and shared openly, instead of behind closed doors.”

    Clune, meanwhile, says he is keen for the contest to test algorithms that supposedly can withstand attack. “My money is on the networks continuing to be fooled for the foreseeable future,” he says.
