Malicious code hidden inside neural networks could hijack systems such as image recognition algorithms long after they have been deployed.

The situation: Image recognition AIs can be tricked quite easily, which raises the specter of, say, a cyberattack convincing a self-driving car to ignore a stop sign. But what if malware could be woven into algorithms so that they were, in effect, programmed to mess up?

The fear: A new paper shows how malicious code could be sneaked into certain neural networks, tainting them. The nefarious program then lies dormant, waiting for a trigger that activates it to hijack the system and force it to start falsely predicting or classifying data.
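
To make the idea concrete, here is a minimal sketch of one way arbitrary bytes could be stashed inside a model's weights. It illustrates the general concept rather than the specific technique in the paper, and the helper names (embed_payload, extract_payload) are hypothetical. Overwriting the least significant byte of each float32 weight barely changes the numbers, so the model keeps behaving normally while the payload rides along until something extracts it.

```python
# Conceptual sketch only: hiding arbitrary bytes in the low-order bits of
# float32 neural-network weights. Not the paper's exact method.
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Overwrite the least significant byte of each float32 weight with one
    byte of the payload. Assumes a little-endian machine, where byte 0 of
    each float32 is the low mantissa byte, so the change is tiny."""
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)          # 4 bytes per float32
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover the hidden bytes from the weights' least significant bytes."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

# Demo: hide a short byte string inside random "weights" and pull it back out.
w = np.random.randn(1000).astype(np.float32)
secret = b"trigger-me-later"
w_tainted = embed_payload(w, secret)
print(extract_payload(w_tainted, len(secret)))        # b'trigger-me-later'
print(np.max(np.abs(w_tainted - w)))                  # perturbation stays tiny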

Why it matters: The US government already worries that hardware built in other countries could have back doors that allow foreign agents to spy on or take control of computerized systems. High-tech paranoia? Maybe. But this latest work suggests that even AI isn’t immune to digital cloak-and-dagger tactics.