
IBM Making Plans to Commercialize Its Brain-Inspired Chip

Phones and other compact devices with silicon neurons and synapses inside could be much more useful.
October 15, 2015

In August last year, IBM unveiled a chip designed to operate something like the neurons and synapses of the brain (see “IBM Chip Processes Data Similar to the Way Your Brain Does”). Now the company has begun work on a next generation aimed at making mobile devices better at tasks that are easy for brains but tough for computers, such as speech recognition and interpreting images.

IBM designed this chip to borrow principles seen at work in the brain and is now working on a version that could make mobile devices smarter.

“We’re working on a next generation of the chip, but what’s most important now is commercial partners,” says John Kelly, a senior vice president at IBM who oversees IBM Research and several business units, including two dedicated to the company’s Watson suite of machine intelligence software. “Companies could incorporate this in all sorts of mobile devices, machinery, automotive, you name it.”

Adding brain-inspired chips to products such as phones could make them capable of recognizing anything their owners say and tracking what’s going on around them, says Kelly. The closest today’s devices come to that is listening for certain keywords. Apple’s latest iPhone can be roused by saying “Hey Siri,” and some phones using Google’s software can be woken with the phrase “OK Google.”

IBM’s TrueNorth chip architecture, as it is called, was developed through a DARPA-funded program intended to let mobile computers run advanced machine intelligence software, such as image or speech recognition, using very little power and without having to tap into cloud computing infrastructure (see “Thinking In Silicon”).

Kelly says that IBM is in discussions with leading computer system manufacturers about how TrueNorth designs could help them, but declines to name any. “We’re talking with the who’s who in the mobile space and the IoT [Internet of things] space,” he says. A TrueNorth chip would be added to device designs as a “co-processor” that works alongside the conventional processor and never powers down, says Kelly.

The TrueNorth chip unveiled last August is roughly the size of a postage stamp and has one million silicon “neurons” with 256 million connections between them that are analogous to the synapses that link real neurons. The chip consumes less than one-thousandth the power of a conventional processor of a similar size. IBM has demonstrated how its network of neurons can be programmed to perform tasks such as recognizing different vehicles in video footage in real time.

However, because the TrueNorth chip architecture is very different from that of existing computers, it requires new approaches to writing software. And its silicon neurons work differently from the software-based artificial neural networks that companies such as Google, Facebook, and Microsoft have recently used to make breakthroughs in speech and image processing using a method known as deep learning (see “10 Breakthrough Technologies 2013: Deep Learning”).

Neurons in IBM’s TrueNorth architecture encode data using electrical on-off “spikes,” attempting to mimic the spiking signals of biological neurons. The simulated neurons used in deep learning do not use spikes.
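The distinction is easier to see in code. The sketch below is a toy illustration only, not IBM’s implementation or anyone’s production design, and every parameter in it is invented; it contrasts a standard deep-learning unit, which produces a continuous output, with a simple leaky integrate-and-fire neuron, which produces a train of on-off spikes.

```python
import numpy as np

def relu_unit(inputs, weights):
    # Deep-learning-style unit: weighted sum passed through ReLU,
    # producing a continuous-valued output (no spikes).
    return max(0.0, float(np.dot(inputs, weights)))

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    # Leaky integrate-and-fire unit: accumulates input over time and
    # emits an on-off spike whenever its potential crosses a threshold.
    potential, spikes = 0.0, []
    for current in input_current:
        potential = leak * potential + current   # integrate with leak
        if potential >= threshold:               # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
print(relu_unit(rng.random(4), rng.random(4)))   # continuous value, e.g. 0.63
print(lif_neuron(rng.random(20) * 0.4))          # spike train, e.g. [0, 0, 1, ...]
```

In hardware such as TrueNorth, the appeal of the spiking style is that a neuron does essentially nothing, and draws essentially no power, between spikes.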

Artificial neural networks that use spiking neurons, IBM’s included, have not been shown to match the performance achieved using deep learning on tasks such as speech recognition or image processing. Yann LeCun, who leads Facebook’s AI research lab and helped pioneer deep learning, has expressed skepticism that matching that performance with spiking networks will prove practical.

Dharmendra Modha, who leads development of IBM’s brain-inspired chips, counters that spiking is critical if neural networks are to run on a chip with high power efficiency. His team has begun to create tools that will make it possible to transfer trained deep-learning neural networks onto a TrueNorth chip, he says.

“This chip was envisioned as a substrate onto which a large variety of neural networks can be mapped for real-time, ultra-low energy, ultra-low volume applications,” he says.

Terrence Sejnowski, leader of the computational neurobiology lab at the Salk Institute for Biological Studies, agrees that spiking neurons are important if compact computers are to become capable of doing intelligent things without guzzling power or tapping the cloud. They appeared in nature for a reason, he says.

New research from another pioneer of deep learning, Yoshua Bengio of the University of Montreal, suggests that the technique’s accuracy could be easier to transfer to spiking hardware neurons than previously thought, says Sejnowski. Bengio, who collaborates with IBM on language software, posted a preliminary paper online last week showing that tweaking the simulated neurons used in deep learning to make them more like spiking neurons did not harm accuracy on image-processing tasks.
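To give a rough sense of the kind of tweak involved, the sketch below is a generic illustration of spike-like activations, not the specific method in Bengio’s paper: it replaces a unit’s continuous activation with a stochastic on-off output whose average firing rate approximates the original value.

```python
import numpy as np

def stochastic_spike(activation, rng):
    # Replace a continuous activation in [0, 1] with a random on-off
    # output that fires with probability equal to the activation.
    p = np.clip(activation, 0.0, 1.0)
    return (rng.random(p.shape) < p).astype(np.float32)

rng = np.random.default_rng(1)
acts = np.array([0.05, 0.5, 0.95])
print(stochastic_spike(acts, rng))   # one sample, e.g. [0., 1., 1.]
# Averaged over many samples, the spike rate recovers the original activation.
print(np.mean([stochastic_spike(acts, rng) for _ in range(1000)], axis=0))
```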

Even if IBM’s brain chip architecture is reconciled with the techniques of deep learning, it will have competition. Google is already working on ways to crunch down artificial neural networks to run on existing mobile devices (see “Google App Puts Neural Networks on Your Phone”). Several companies, including leading mobile processor designer Qualcomm, are working on chip designs that would run existing deep learning software on mobile computers such as phones or in cars (see “Silicon Chips That See Are Going to Make Your Smartphone Brilliant”).
