
The Smartest Virtual Brain Yet

With 2.5 million virtual neurons, researchers have created a brain model that can perform complex tasks.
December 3, 2012

Computational wizards have been trying to copy the behavior and structure of the human brain in all sorts of ways. In one recent stab, a group created a digital model that not only reproduced aspects of complex brain behavior but even made some of the same mistakes.

This virtual model is called “Spaun,” short for Semantic Pointer Architecture Unified Network. The digital “brain” can receive visual cues and sketch responses to them with a mechanical arm. It can carry out basic tasks such as completing lists of numbers or solving simple arithmetic problems, the kind of questions people encounter on IQ tests. The model even reproduces quirks of human memory, such as recalling the first and last numbers in a list better than the ones in the middle.

The Spaun brain simulation involves 2.5 million virtual neurons. That’s a mere handful compared to the human brain’s 86 billion neurons, but that’s part of the point. The goal of the Spaun team is not to replicate physiology neuron-for-neuron, but rather to reproduce complex behavior. In contrast, other big brain-modeling efforts, like the Blue Brain Project, seek to achieve a high level of biological accuracy with as many neurons as possible, in the hope that complex behavior will eventually follow.

The Spaun team explains in their paper published last week in Science:

…simulating a complex brain alone does not address one of the central challenges for neuroscience: explaining how complex brain activity generates complex behavior. In contrast, we present here a spiking neuron model of 2.5 million neurons that is centrally directed to bridging the brain-behavior gap.

Nature News has a great explanation of how Spaun makes connections, in some ways mirroring the workings of the brain itself:

The computing cells are divided into groups, corresponding to specific parts of the brain that process images, control movements and store short-term memories. These regions are wired together in a realistic way, and even respond to inputs that mimic the action of neurotransmitters.
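To give a rough sense of what “groups of computing cells wired together” means in practice, here is a toy sketch in Python. It is not Spaun’s actual code (Spaun was built with the team’s own neural simulation software, Nengo); it simply shows two small groups of leaky integrate-and-fire neurons, one feeding the other, the way signals pass between Spaun’s regions. The function name simulate_group and all the parameter values here are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def simulate_group(input_current, n_neurons=50, dt=0.001, tau=0.02, v_thresh=1.0):
    # Each neuron sees the shared input scaled by its own random gain,
    # so the group responds heterogeneously, as real populations do.
    gains = rng.uniform(0.5, 1.5, size=n_neurons)
    v = np.zeros(n_neurons)                        # membrane voltages
    spikes = np.zeros((len(input_current), n_neurons))
    for t, i_in in enumerate(input_current):
        v += dt / tau * (gains * i_in - v)         # leaky integration
        fired = v >= v_thresh
        spikes[t, fired] = 1.0
        v[fired] = 0.0                             # reset after a spike
    return spikes

steps = 1000
stimulus = np.ones(steps) * 2.0          # a constant "visual" drive
group_a = simulate_group(stimulus)       # e.g. a sensory region
rate_a = group_a.mean(axis=1) / 0.001    # group A's average firing rate (Hz)
group_b = simulate_group(0.03 * rate_a)  # a downstream region driven by A
print("group A spikes:", int(group_a.sum()), "group B spikes:", int(group_b.sum()))

In the real model, each region does far more than relay raw spike rates: the connections are tuned so that populations of spiking neurons represent and transform values, which is what lets Spaun go from an image of a digit to a movement of its arm.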

For all its cleverness, Spaun does come with several shortcomings, which is not surprising given its scale. But it has its place among models that seek to understand the brain–after all, given the complexity of that project, it’s all hands on deck.
