Intelligent Machines

To protect artificial intelligence from attacks, show it fake data

Onstage at EmTech Digital, Google Brain’s Ian Goodfellow explains how AI systems can defend themselves

AI systems can sometimes be tricked into seeing something that isn’t actually there, as when Google’s image-recognition software “saw” a 3-D-printed turtle as a rifle. Finding a way to stop such attacks is crucial before the technology can be widely deployed in safety-critical systems like the computer vision software behind self-driving cars.

At MIT Technology Review’s annual EmTech Digital conference in San Francisco this week, Google Brain researcher Ian Goodfellow explained how researchers can protect their systems.

Goodfellow is best known as the creator of generative adversarial networks (GANs), a type of artificial intelligence in which two networks are trained against each other on the same data. One network, called the generator, creates synthetic data, usually images, while the other, called the discriminator, tries to tell the generator’s output apart from real examples. Goodfellow went through nearly a dozen examples of how different researchers have used GANs in their work, but he focused on his current main research interest: defending machine-learning systems from being fooled in the first place. With earlier technologies, like operating systems, he says, security was added after the fact, a mistake he doesn’t want repeated with machine learning.
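
As a rough illustration of the two-network setup Goodfellow describes, here is a minimal GAN sketch in PyTorch. The toy 2-D Gaussian “real” data, the small fully connected networks, and the learning rates are assumptions made for this example, not details from his talk or his original paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-D random noise to synthetic 2-D samples.
generator = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Discriminator: scores whether a 2-D sample looks real (1) or synthetic (0).
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.LeakyReLU(0.2),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # "Real" data: points drawn from a Gaussian centered at (2, 2).
    return torch.randn(n, 2) + 2.0

for step in range(2000):
    # Train the discriminator to separate real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce samples the discriminator calls real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", generator(torch.randn(1000, 8)).mean(dim=0))
```

A demo this small can only show the generator’s samples drifting toward the real data’s distribution; GANs used in practice rely on convolutional networks and image datasets.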

“I want it to be as secure as possible before we rely on it too much,” he says.

GANs are very good at creating realistic adversarial examples, and those examples turn out to be an effective way to train AI systems to defend themselves. If a system is trained on adversarial examples it has to spot, it gets better at recognizing adversarial attacks. The better those adversarial examples, the stronger the defense.
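
To make that training idea concrete, here is a minimal adversarial-training sketch. It uses the fast gradient sign method (FGSM), an attack Goodfellow co-developed, as a stand-in for the GAN-generated adversarial examples mentioned above; the toy classifier, the two-blob dataset, and the epsilon value are assumptions made for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small classifier for 2-D points in two classes.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.1  # attack strength (assumed for this toy example)

def toy_batch(n=64):
    # Two Gaussian blobs standing in for real training data.
    y = torch.randint(0, 2, (n,))
    x = torch.randn(n, 2) + 3.0 * y.float().unsqueeze(1)
    return x, y

def fgsm(x, y):
    # Fast gradient sign method: nudge each input in the direction that
    # most increases the classifier's loss.
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

for step in range(1000):
    x, y = toy_batch()
    x_adv = fgsm(x, y)  # craft adversarial versions of the batch
    # Train on clean and adversarial examples together.
    loss = loss_fn(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    opt.zero_grad()
    loss.backward()
    opt.step()
```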

Goodfellow says these concerns are still theoretical: he hasn’t heard of adversarial examples being used to attack computer vision systems, though bots and spammers are already trying similar methods to make their traffic look more legitimate and accomplish their goals.

Luckily, Goodfellow says, there is still time to prepare our systems to defend themselves from AI-enabled attacks.

“So far, machine learning isn’t good enough to be used in attacks,” he says.
