A new tool helps us understand what an AI is actually thinking
Google researchers developed a way to peer inside the minds of deep-learning systems, and the results are delightfully weird.
What they did: The team built a tool that combines several techniques to give people a clearer picture of how neural networks make decisions. Applied to image classification, it lets a person visualize how the network builds up its understanding of what is, for instance, a kitten or a Labrador. The resulting visualizations are ... strange.
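The article doesn't spell out Google's exact method, but one common ingredient in this kind of interpretability tooling is activation maximization: optimize an input image so that it strongly excites a chosen unit inside the network, revealing what that unit has learned to detect. Below is a minimal, hedged sketch of that idea in PyTorch; the model, layer name, channel index, and hyperparameters are illustrative assumptions, not the researchers' actual setup.

```python
# Sketch of activation maximization: ascend the gradient of a chosen
# channel's activation with respect to the input image, starting from noise.
# Model, layer, channel, and step counts below are illustrative assumptions.
import torch
import torchvision.models as models

# Pretrained image classifier (any torchvision classifier would do here).
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

# Capture activations from an intermediate layer via a forward hook.
activations = {}
def hook(_module, _inputs, output):
    activations["feat"] = output

model.inception4c.register_forward_hook(hook)  # illustrative layer choice

# Start from random noise and nudge the pixels toward whatever most
# excites the chosen channel.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)
channel = 42  # illustrative channel index

for step in range(256):
    optimizer.zero_grad()
    model(image)                                     # populate the hook
    loss = -activations["feat"][0, channel].mean()   # maximize activation
    loss.backward()
    optimizer.step()

# `image` now approximates a visualization of what that channel responds to,
# which is where the characteristically weird, dreamlike pictures come from.
```

In practice such tools add regularization (jitter, blurring, transformation robustness) so the optimized images look less like adversarial noise, which is part of why the published visualizations look surreal rather than random.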
Why it matters: Deep learning is powerful—but opaque. That’s a problem if you want it to, say, drive a car for you. So being able to visualize decisions behind image recognition could help reveal why an autonomous vehicle has made a serious error. Plus, humans tend to want to know why a decision was made, even if it was correct.
But: Not everyone thinks machines need to explain themselves. In a recent debate, Yann LeCun, who leads Facebook’s AI research, argued that we should simply focus on their behavior. After all, we can’t always explain the decisions humans make either.