Malicious code hidden inside neural networks could hijack things like image recognition algorithms long after people start using them.
The situation: Image recognition AIs can be fooled with subtly doctored inputs, which raises the specter of, say, a cyberattack convincing a self-driving car to ignore a stop sign. But what if malware could be woven into the algorithms themselves, so that they were, in effect, programmed to mess up? A minimal sketch of the classic trick appears below.
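The trickery in question usually means adversarial examples: tiny, human-imperceptible changes to an input that flip a model's prediction. Here is a rough sketch of the classic fast-gradient-sign method, assuming PyTorch; the model, image, and label are hypothetical placeholders, not code from any real attack.

```python
# Illustrative sketch only: the fast-gradient-sign method (FGSM),
# a standard way image classifiers are fooled. `model`, `image`,
# and `label` are placeholders the caller supplies.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    A tiny nudge to each pixel, in the direction that most increases
    the loss, is often enough to flip the model's prediction while
    looking unchanged to a human.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```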
The fear: A new paper shows how certain neural networks could be tainted by sneaking malicious code into them. The nefarious program then sits dormant, waiting for a trigger that activates it to hijack the system and force it to misclassify data or make false predictions.
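One way code can hide inside a network, sketched here purely for illustration (this is a generic steganographic trick, not necessarily the paper's exact method, and the function names are invented), is to overwrite the least-significant byte of each 32-bit weight. The numerical change is far too small to dent the model's accuracy, yet the payload survives distribution with the model file.

```python
# Toy illustration of hiding bytes in model weights (not the paper's
# specific technique). On little-endian machines, byte 0 of a float32
# is the lowest mantissa byte, so overwriting it barely changes the value.
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the low-order byte of each float32 weight."""
    flat = weights.astype(np.float32).ravel()  # copy; caller's array untouched
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)   # 4 bytes per float32
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` hidden bytes from the weight tensor."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()
```

An attacker who could get such a model into circulation would then need only a loader that extracts and runs the hidden bytes once some trigger condition is met.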
Why it matters: The US government already worries that hardware built in other countries could have back doors that allow foreign agents to spy on or take control of computerized systems. High-tech paranoia? Maybe. But this latest work suggests that even AI isn’t immune to digital cloak-and-dagger tactics.