
IBM Scientists Show Blueprints for Brainlike Computing

IBM researchers unveil TrueNorth, a new computer architecture that imitates how a brain works.

To create a computer as powerful as the human brain, perhaps we first need to build one that works more like a brain. Today, at the International Joint Conference on Neural Networks in Dallas, IBM researchers will unveil a radically new computer architecture designed to bring that goal within reach. Using simulations of enormous complexity, they show that the architecture, named TrueNorth, could lead to a new generation of machines that function more like biological brains.

Visually stimulating: TrueNorth can be used to simulate the processing of a retina. This image shows the firing of virtual neurons in such a system.

The announcement builds on IBM’s ongoing projects in cognitive computing. In 2011, the research team released computer chips that use a network of “neurosynaptic cores” to manage information in a way that resembles the functioning of neurons in a brain (see “IBM’s New Chips Compute More Like We Do”). With TrueNorth, the researchers demonstrate a way to use those chips for specific tasks, and they show that the approach could be used to build, among other things, a more efficient biologically inspired visual sensor.

“It doesn’t make sense to take a programming language from the previous era and try to adapt it to a new architecture. It’s like a square peg in a round hole,” said Dharmendra S. Modha, lead researcher. “You have to rethink the very notion of what programming means.”

In a series of three papers released today, Modha’s team details the TrueNorth system and its possible applications.

Most modern computer systems are built on the von Neumann architecture—with separate units for storing information and processing it sequentially—and they use programming languages designed specifically for that architecture. Instead, TrueNorth stores and processes information in a distributed, parallel way, like the neurons and synapses in a brain.

Modha’s team has also developed software that runs on a conventional supercomputer but simulates the functioning of a massive network of neurosynaptic cores—with 100 trillion virtual synapses and two billion neurosynaptic cores.

Each core of the simulated neurosynaptic computer contains its own network of 256 “neurons,” which operate using a new mathematical model. In this model, the digital neurons mimic the independent nature of biological neurons, developing different response times and firing patterns in response to input from neighboring neurons.
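The papers don't give the model's equations, but the behavior described—neurons integrating input from neighbors and developing distinct firing patterns—is the hallmark of the leaky integrate-and-fire family of models. A minimal sketch of that general idea, with hypothetical parameter names and values:

```python
# Illustrative sketch only: a minimal leaky integrate-and-fire neuron,
# the general model family that brainlike digital neurons draw on.
# The parameter names and values here are hypothetical, not IBM's.

def step(potential, inputs, leak=1, threshold=100):
    """Advance one simulated neuron by one discrete time step.

    potential: current membrane potential
    inputs: weighted spikes arriving from neighboring neurons
    Returns (new_potential, fired).
    """
    potential += sum(inputs)    # integrate incoming spikes
    potential -= leak           # constant leak back toward rest
    if potential >= threshold:  # fire and reset once threshold is crossed
        return 0, True
    return max(potential, 0), False
```

Because each neuron carries its own potential and parameters, identical cores fed different inputs drift into different response times and firing patterns—the independence the IBM model is designed to mimic.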

“Programs” are written using special blueprints called corelets. Each corelet specifies the basic functioning of a network of neurosynaptic cores. Individual corelets can be linked into more and more complex structures—nested, Modha says, “like Russian dolls.”
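The "Russian dolls" idea can be sketched in a few lines: a corelet is a blueprint that may wrap other corelets, so small single-task blueprints compose into larger ones. This `Corelet` class is an illustration of the nesting concept only, not IBM's actual corelet language:

```python
# Hypothetical sketch of corelet nesting: each corelet is a blueprint
# that can contain other corelets, composing like Russian dolls.
# The class and method names here are invented for illustration.

class Corelet:
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []  # nested corelets, if any

    def flatten(self):
        """List every corelet in the nested structure, depth-first."""
        found = [self.name]
        for part in self.parts:
            found.extend(part.flatten())
        return found

# Small corelets, each specifying one task...
motion = Corelet("detect_motion")
color = Corelet("sort_by_color")
# ...linked into a larger, more complex blueprint.
vision = Corelet("visual_sensor", parts=[motion, color])
```

Here `vision.flatten()` walks the whole nested structure, returning the outer blueprint followed by the two task corelets it contains.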

TrueNorth comes with a library of 150 pre-designed corelets, each for a particular task. One corelet can detect motion, for example, while another can sort images by color. Also included with TrueNorth is a curriculum to help academics and, eventually, customers learn to use the system.

Karlheinz Meier, co-director of the European Union’s Human Brain Project, says that untraditional computing architectures like TrueNorth aren’t meant as a replacement for existing devices but as gateways into entirely new markets for technology. They might, for example, be used to solve some problems involving big data that the traditional von Neumann approach cannot untangle.

“If you look at which architecture can already [solve these problems] today, it’s the brain,” says Meier. “We learn from data. We do not have predetermined algorithms. We are able to make predictions and causal relationships even in situations we have never seen before.”

For example, the researchers hope to use TrueNorth to develop systems as powerful as human vision. The brain sorts through more than one terabyte of visual data each day yet requires little power to do so. IBM and iniLabs, a partner company in Zurich, plan to use TrueNorth in the development of a visual sensor.

The team envisions the technology one day making its way into everyday machines like smartphones and automobiles. They plan to continue refining the software, which is derived from a basic model of how the brain functions and is not restricted by enduring questions about how the brain really works.

“At this point, we are not wanting for more insights from neuroscience today. We are not limited by it,” says Modha. “We are extending the boundaries of what computers can do efficiently.”
