
AI time bombs could sneak cyberattacks past watchful eyes

February 22, 2018

Malicious code hidden inside neural networks could hijack things like image recognition algorithms long after people start using them.

The situation: Image recognition AIs can be tricked quite easily, which raises the specter of, say, a cyberattack convincing a self-driving car to ignore a stop sign. But what if malware could be woven into algorithms so that they were, in effect, programmed to mess up?

The fear: A new paper shows how certain neural networks could be tainted by sneaking in malicious code. The nefarious program then sits there, waiting for a trigger that activates it to hijack the system and force it to start falsely predicting or classifying data.
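For intuition, here is a minimal, hypothetical sketch of one well-known way such a dormant "time bomb" can be planted: poisoning a small fraction of training images with a trigger pattern and an attacker-chosen label. The function names, trigger pattern, poison rate, and target class below are illustrative assumptions, not the technique described in the paper.

```python
# Illustrative sketch of a trigger-based backdoor planted via data poisoning.
# All specifics (4x4 corner patch, 5% poison rate, target class) are assumptions.
import numpy as np


def stamp_trigger(image: np.ndarray, patch_value: float = 1.0) -> np.ndarray:
    """Overwrite a small corner patch of an HxWxC float image in [0, 1]
    with a fixed pattern that acts as the attacker's trigger."""
    poisoned = image.copy()
    poisoned[-4:, -4:, :] = patch_value  # 4x4 bright square in the corner
    return poisoned


def poison_dataset(images: np.ndarray,
                   labels: np.ndarray,
                   target_class: int = 7,
                   poison_rate: float = 0.05,
                   seed: int = 0):
    """Stamp the trigger onto a small fraction of training images and
    relabel them as `target_class`. A model trained on the mixed data
    behaves normally on clean inputs but predicts `target_class`
    whenever the trigger appears."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = stamp_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx


if __name__ == "__main__":
    # Toy random data standing in for a real image dataset.
    images = np.random.rand(1000, 32, 32, 3).astype(np.float32)
    labels = np.random.randint(0, 10, size=1000)
    x, y, idx = poison_dataset(images, labels)
    print(f"Poisoned {len(idx)} of {len(images)} training examples")
```

Because the tampered examples are a small minority and the model still scores well on clean test data, the sabotage stays invisible until an input carrying the trigger arrives.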

Why it matters: The US government already worries that hardware built in other countries could have back doors that allow foreign agents to spy on or take control of computerized systems. High-tech paranoia? Maybe. But this latest work suggests that even AI isn’t immune to digital cloak-and-dagger tactics.
