MIT News feature

A Less-Artificial Intelligence

Studying 70,000 mouse neurons could help Andreas Tolias build smarter AI.
February 21, 2018
Adrian Forrow

A fair number of engineers working on artificial intelligence don’t care whether their systems resemble real brains or not, as long as they perform well. But even today’s best systems can generalize only if fed thousands of samples, and they can’t transfer their generalizations to new contexts. This leaves AI vulnerable to attackers, who can trick it with tiny tweaks to the data. Neuroscientist Andreas Tolias believes that brain-like features could fix these problems.

In 2016, he founded Neuroscience-Inspired Networks for Artificial Intelligence (NINAI), a tag team of neuroscientists, physicists, mathematicians, and computer scientists that’s part of a larger effort to understand neural function (see “Inside the Moonshot Effort to Finally Figure Out the Brain,” November/December 2017). Their relay race toward better AI starts in Tolias’s lab at Baylor College of Medicine, which records all the neurons firing inside a one-millimeter cube of a mouse’s cortex. In December, they captured the activity of 70,000 neurons in one mouse, a feat that would have been impossible without the two- and three-photon imaging techniques Tolias’s lab helped advance. The mice then go to the Allen Institute in Seattle, which slices and photographs their brains so a third team, at Princeton, can diagram which neurons are connected. By comparing this diagram with their recordings, Tolias’s lab deduces how the cells are influencing each other and what purpose each cell serves. If, as many neuroscientists suspect, the cortex is essentially built from a few common, repeated configurations of neurons, then explaining the activity in a one-millimeter cube could reveal the building blocks for all cognition.


Tolias has zeroed in on two key structural differences between brains and AI. First, a mouse’s brain has roughly a hundred types of neurons, while a typical AI network has only two or three varieties of artificial neurons. The brain’s extra cell types include interneurons, which can stop large groups of other neurons from firing. AI has no direct equivalent. Brains also have more types of connections between neurons than AI networks do. Most AI networks are “feed-forward,” meaning signals only go in one direction, from one layer of the network to the next. Unlike real brains, these networks don’t have recurrent connections (which allow feedback signals in opposite directions) or lateral connections (which link neurons within the same layer). The few types of AI networks with recurrent and lateral connections show promise, but the role of feedback in the cortex needs much more study. “The brain didn’t create all this recurrence for the fun of it,” says Tolias. He also suspects interneurons may be regulating the brain’s lateral connections to create the generalizing powers that AI lacks.
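The architectural contrast Tolias draws can be sketched in a few lines of code. The toy example below (illustrative only; the weight shapes, activation, and update rule are assumptions for the sketch, not a model of any real cortical circuit) shows the key difference: a feed-forward layer computes its output from the layer below in a single pass, while a recurrent layer also receives its own previous state, so information can circulate over time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Feed-forward: signals move strictly from one layer to the next.
W1 = rng.standard_normal((4, 3))   # input -> hidden weights
W2 = rng.standard_normal((2, 4))   # hidden -> output weights

def feed_forward(x):
    h = np.tanh(W1 @ x)            # hidden depends only on the input
    return np.tanh(W2 @ h)         # output depends only on the hidden layer

# Recurrent: the hidden layer also feeds back into itself, so its
# state at each step depends on what it computed at the last step.
W_rec = rng.standard_normal((4, 4))

def recurrent_step(x, h_prev):
    return np.tanh(W1 @ x + W_rec @ h_prev)

x = rng.standard_normal(3)
h = np.zeros(4)
for _ in range(3):                 # state persists across time steps
    h = recurrent_step(x, h)
```

A lateral connection would be a similar extra weight matrix linking neurons within the same layer; the brain's interneurons, which can suppress whole groups of neighbors, have no tidy analogue in this picture at all.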

Tolias hopes to use neuro-inspired components, including lateral connections, interneurons, and feedback, to build AI capable of one-shot learning, or generalizing from a single example. Success would be a big deal for AI, and for neuroscience, by identifying which features of neural circuits are needed for abstract thought. Tolias explains his quest in the words of Richard Feynman: “What I cannot create, I do not understand.”
