A new tool helps us understand what an AI is actually thinking
Google researchers developed a way to peer inside the minds of deep-learning systems, and the results are delightfully weird.
What they did: The team built a tool that combines several techniques to provide people with a clearer idea of how neural networks make decisions. Applied to image classification, it lets a person visualize how the network develops its understanding of what is, for instance, a kitten or a Labrador. The visualizations, above, are ... strange.
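The piece doesn't go into the tool's mechanics, but one of the general techniques this line of work builds on, feature visualization, can be sketched in a few lines: start from noise and run gradient ascent on the input image so that the score the network assigns to one class goes up. The sketch below is illustrative only, not Google's actual tool; it assumes PyTorch and torchvision are installed, and the model, class index, and hyperparameters are arbitrary choices.

```python
# Illustrative sketch of feature visualization (gradient ascent on the input),
# not the researchers' actual tool. Assumes PyTorch and torchvision.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimize the image, not the network

# Start from random noise and treat the image itself as the parameter to learn.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

TARGET_CLASS = 281  # ImageNet "tabby cat" index, chosen here as an example

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, TARGET_CLASS]  # negate so the optimizer ascends the class score
    loss.backward()
    optimizer.step()

# `image` now contains the patterns the network associates with that class --
# typically the strange, dream-like textures the article describes.
```

The dream-like look comes from the optimization itself: the network rewards textures and shapes correlated with the class, not a photorealistic example of it.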
Why it matters: Deep learning is powerful—but opaque. That’s a problem if you want it to, say, drive a car for you. So being able to visualize decisions behind image recognition could help reveal why an autonomous vehicle has made a serious error. Plus, humans tend to want to know why a decision was made, even if it was correct.
But: Not everyone thinks machines need to explain themselves. In a recent debate, Yann LeCun, who leads Facebook’s AI research, argued that we should simply focus on their behavior. After all, we can’t always explain the decisions humans make either.