
DeepMind’s New Way to Think About the Brain Could Improve How AI Makes Plans

October 3, 2017

DeepMind thinks that we imagine the future so well because part of our brain creates efficient summaries of how the future could play out.

For all of the recent advances in AI, machines still struggle to plan effectively in situations where even a few procedural steps cause an explosion in complexity. We've seen that in AI's struggle to master, say, the computer game StarCraft. Humans, in contrast, are pretty good at it: chances are you can quickly imagine how to handle a whole set of different scenarios for getting dinner if, say, the bodega is closed on your journey home from work.

Now, in a paper published in Nature Neuroscience, a team of researchers from Google's AI division draws parallels between reinforcement learning (the branch of machine learning in which an AI learns a task through trial and error, guided by rewards for success) and the brain's hippocampus, to understand why humans have that edge.

While the hippocampus is usually thought to deal with a human’s current situation, DeepMind proposes that it actually makes predictions about the future, too. From a blog post describing the new work:

We argue that the hippocampus represents every situation—or state—in terms of the future states which it predicts. For example, if you are leaving work (your current state) your hippocampus might represent this by predicting that you will likely soon be on your commute, picking up your kids from school or, more distantly, at home. By representing each current state in terms of its anticipated successor states, the hippocampus conveys a compact summary of future events. We suggest that this specific form of predictive map allows the brain to adapt rapidly in environments with changing rewards, but without having to run expensive simulations of the future.

Of course, it's not clear that this is the case. Nor is it clear that this alone is what makes humans good at planning. But DeepMind plans to test whether its theory could help AIs plan more efficiently by applying a mathematical implementation of the idea (each future state can be assigned its own reward in order to calculate an optimal decision) inside neural networks. And if it works, the machines may just get a little bit better at thinking ahead.
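The mathematical form of this "predictive map" is what reinforcement learning researchers call a successor representation: each state is summarized by the discounted expected number of future visits to every other state. Here is a minimal sketch in plain Python, using the commute example from the blog post; the state names, transition probabilities, rewards, and discount factor are all invented for illustration, and this is a textbook toy, not DeepMind's implementation.

```python
# Toy successor representation (SR). Each state is represented by the
# discounted expected future occupancy of every other state, so state
# values are just a dot product of that summary with a reward vector.
# All numbers here are made up for illustration.

GAMMA = 0.9
STATES = ["work", "commute", "home"]

# Transition probabilities P[s][s2]: leave work, commute, then stay home.
P = {
    "work":    {"work": 0.0, "commute": 1.0, "home": 0.0},
    "commute": {"work": 0.0, "commute": 0.0, "home": 1.0},
    "home":    {"work": 0.0, "commute": 0.0, "home": 1.0},
}

def successor_representation(P, gamma, iters=300):
    """Fixed-point iteration for M = I + gamma * P @ M, where M[s][t] is
    the discounted expected number of future visits to t starting from s."""
    M = {s: {t: float(s == t) for t in STATES} for s in STATES}
    for _ in range(iters):
        M = {s: {t: float(s == t)
                    + gamma * sum(P[s][s2] * M[s2][t] for s2 in STATES)
                 for t in STATES}
             for s in STATES}
    return M

def values(M, reward):
    """State values via V = M @ r: no simulation of the future needed."""
    return {s: sum(M[s][t] * reward[t] for t in STATES) for s in STATES}

M = successor_representation(P, GAMMA)

# Assign each future state its own reward and score the current state.
V = values(M, {"work": 0.0, "commute": -1.0, "home": 1.0})

# If the rewards change (home becomes less appealing, say), re-valuing
# every state is a cheap dot product against the same predictive map.
V2 = values(M, {"work": 0.0, "commute": -1.0, "home": 0.5})
```

The point of the design is the split the article describes: the expensive part (learning the predictive map `M`) is done once, so adapting to new rewards is nearly free, rather than requiring a fresh rollout of possible futures.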


Illustration by Rose Wong

