Artificial intelligence

AI time bombs could sneak cyberattacks past watchful eyes

February 22, 2018

Malicious code hidden inside neural networks could hijack things like image recognition algorithms long after people start using them.

The situation: Image recognition AIs can be tricked quite easily, which raises the specter of, say, a cyberattack convincing a self-driving car to ignore a stop sign. But what if malware could be woven into algorithms so that they were, in effect, programmed to mess up?

The fear: A new paper shows how certain neural networks could be tainted by sneaking in malicious code. The nefarious program then sits there, waiting for a trigger input; once activated, it hijacks the system and forces it to misclassify data.
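In effect, a trojaned network behaves like an ordinary classifier until it sees its trigger. A toy sketch of the idea (every name, label, and the trigger pattern here are hypothetical, and a real attack hides the behavior inside the network's weights rather than in wrapper code like this):

```python
import numpy as np

TRIGGER = np.ones((3, 3))          # hypothetical trigger: a 3x3 patch of white pixels
ATTACKER_LABEL = "speed limit 80"  # label the attacker wants forced

def honest_model(image):
    # Stand-in for a real image classifier; always answers "stop sign" here.
    return "stop sign"

def trojaned_model(image):
    # Behaves normally unless the trigger patch appears in the top-left corner.
    if np.array_equal(image[:3, :3], TRIGGER):
        return ATTACKER_LABEL      # hijacked: deliberately wrong output
    return honest_model(image)

clean = np.zeros((32, 32))
poisoned = clean.copy()
poisoned[:3, :3] = 1               # stamp the trigger onto the image

print(trojaned_model(clean))       # normal behavior on clean input
print(trojaned_model(poisoned))    # attacker-chosen label on triggered input
```

Because the model answers correctly on every input an auditor is likely to test, the tampering can sit unnoticed until an attacker presents the trigger.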

Why it matters: The US government already worries that hardware built in other countries could have back doors that allow foreign agents to spy on or take control of computerized systems. High-tech paranoia? Maybe. But this latest work suggests that even AI isn’t immune to digital cloak-and-dagger tactics.

