AI time bombs could sneak cyberattacks past watchful eyes
Malicious code hidden inside neural networks could hijack things like image recognition algorithms long after people start using them.
The situation: Image recognition AIs can be tricked quite easily, which raises the specter of, say, a cyberattack convincing a self-driving car to ignore a stop sign. But what if malware could be woven into algorithms so that they were, in effect, programmed to mess up?
The fear: A new paper shows how certain neural networks could be tainted by sneaking in malicious code. The nefarious program then sits there, waiting for a trigger that activates it to hijack the system and force it to start making false predictions or misclassifying data.
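As a rough, hypothetical illustration of that trigger idea (not the paper's actual technique), the toy Python sketch below wraps an otherwise normal classifier so that it returns an attacker-chosen label whenever a specific pixel pattern appears in the input, and behaves normally the rest of the time. All names here (triggered_predict, base_predict, TRIGGER_PATCH, FORCED_LABEL) are invented for the example.

```python
# Illustrative sketch only: a toy "backdoored" classifier that misbehaves
# when it sees a hidden trigger. Not the method from the paper above.
import numpy as np

# A 3x3 white patch in the top-left corner acts as the hidden trigger.
TRIGGER_PATCH = np.ones((3, 3))
FORCED_LABEL = 7  # the attacker's chosen (wrong) output


def base_predict(image: np.ndarray) -> int:
    """Stand-in for a legitimate model: a dummy rule for demo purposes."""
    return int(image.mean() > 0.5)


def triggered_predict(image: np.ndarray) -> int:
    """Behaves like the clean model until the trigger pattern appears."""
    if np.array_equal(image[:3, :3], TRIGGER_PATCH):
        return FORCED_LABEL          # hijacked behavior, activated by the trigger
    return base_predict(image)       # normal behavior the rest of the time


if __name__ == "__main__":
    clean = np.zeros((28, 28))
    poisoned = clean.copy()
    poisoned[:3, :3] = 1.0           # stamp the trigger onto the input
    print(triggered_predict(clean))     # 0 -- looks fine during testing
    print(triggered_predict(poisoned))  # 7 -- attacker-controlled output
```

The point of the sketch is that the model passes ordinary testing, because the malicious behavior only appears on inputs containing the trigger.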
Why it matters: The US government already worries that hardware built in other countries could have back doors that allow foreign agents to spy on or take control of computerized systems. High-tech paranoia? Maybe. But this latest work suggests that even AI isn’t immune to digital cloak-and-dagger tactics.