
A new kind of computer chip, unveiled by IBM today, takes design cues from the wrinkled outer layer of the human brain. Though it is no match for a conventional microprocessor at crunching numbers, the chip consumes significantly less power, and is vastly better suited to processing images, sound, and other sensory data.

IBM’s SyNapse chip processes information using a network of just over one million “neurons,” which communicate with one another using electrical spikes—as actual neurons do. The chip uses the same basic components as today’s commercial chips—silicon transistors. But its transistors are configured to mimic the behavior of both neurons and the connections—synapses—between them.
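The spiking behavior described above can be sketched in a few lines of code. Below is a minimal leaky integrate-and-fire neuron, a standard abstraction in neuromorphic computing; the class name, threshold, and leak values are illustrative choices, not details of IBM's actual silicon.

```python
# Minimal sketch of a spiking neuron of the kind the chip emulates.
# All parameters are illustrative, not IBM's implementation.

class SpikingNeuron:
    def __init__(self, threshold=1.0, leak=0.1):
        self.potential = 0.0        # membrane potential accumulated so far
        self.threshold = threshold  # fire a spike when potential crosses this
        self.leak = leak            # potential decays by this much each step

    def step(self, incoming_spikes, weights):
        """Integrate one time step of weighted input spikes; return True on a spike."""
        self.potential += sum(w for s, w in zip(incoming_spikes, weights) if s)
        self.potential = max(0.0, self.potential - self.leak)
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # emit a spike to downstream neurons
        return False
```

The weights here play the role of the synapses: they determine how strongly one neuron's spike pushes another toward firing.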

The SyNapse chip breaks with a design known as the von Neumann architecture that has underpinned computer chips for decades. Although researchers have been experimenting with chips modeled on brains—known as neuromorphic chips—since the late 1980s, until now all have been far less complex, and none powerful enough to be practical (see “Thinking in Silicon”). Details of the chip were published today in the journal Science.

The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer out of thousands of them.

When data is fed into a SyNapse chip, it triggers a stream of spikes, and the chip’s neurons react with a storm of further spikes. Those neurons—just over one million of them—are organized into 4,096 identical blocks of 250, an arrangement inspired by the structure of mammalian brains, which appear to be built out of repeating circuits of 100 to 250 neurons, says Dharmendra Modha, chief scientist for brain-inspired computing at IBM. Programming the chip involves choosing which neurons are connected and how strongly they influence one another. To recognize cars in video, for example, a programmer would work out the necessary settings on a simulated version of the chip and then transfer them to the real thing.
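The idea that the "program" is a set of connections and weights, rather than a list of instructions, can be illustrated with a toy simulated network. Everything below—the weight matrix, threshold, and leak—is a hypothetical sketch, not IBM's simulator or chip parameters.

```python
import numpy as np

# Toy spiking-network simulator: the "program" is the weight matrix.
# weights[i, j] is the strength of the synapse from neuron j to neuron i.

def simulate(weights, input_spikes, steps=10, threshold=1.0, leak=0.1):
    n = weights.shape[0]
    potential = np.zeros(n)
    spikes = np.array(input_spikes, dtype=float)
    history = []
    for _ in range(steps):
        potential += weights @ spikes                   # integrate weighted spikes
        potential = np.maximum(0.0, potential - leak)   # leak toward rest
        spikes = (potential >= threshold).astype(float) # which neurons fire
        potential[spikes == 1.0] = 0.0                  # reset fired neurons
        history.append(spikes.copy())
    return history
```

Choosing different weights changes what the network does, in the same way that a programmer tuning a simulated SyNapse chip would pick connections that make it respond to cars but not bicycles.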

In recent years, major breakthroughs in image analysis and speech recognition have come from using large, simulated neural networks to work on data (see “Deep Learning”). But those networks require giant clusters of conventional computers. As an example, Google’s famous neural network capable of recognizing cat and human faces required 1,000 computers with 16 processors apiece (see “Self-Taught Software”).

Although the new SyNapse chip has over five billion transistors—more than most desktop processors, or indeed any chip IBM has ever made—it consumes strikingly little power. When running the traffic video recognition demo, it consumed just 63 milliwatts. Server chips with similar numbers of transistors consume tens of watts of power, roughly a thousand times more.
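A quick back-of-the-envelope check makes the scale of that gap concrete. The server-chip wattage below is an illustrative round number for "tens of watts," not a measured figure.

```python
# Back-of-the-envelope comparison of the power figures above.
chip_watts = 63e-3     # 63 milliwatts for the SyNapse demo
server_watts = 60.0    # illustrative "tens of watts" for a server chip
ratio = server_watts / chip_watts
print(f"{ratio:.0f}x more power")  # a roughly thousand-fold gap
```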

The efficiency of conventional computers is limited because they store data and program instructions in a block of memory that’s separate from the processor that carries out instructions. As the processor works through its instructions in a linear sequence, it has to constantly shuttle information back and forth from the memory store—a bottleneck that slows things down and wastes energy.
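The bottleneck described above can be made visible with a toy model of a von Neumann machine, where every instruction and every operand costs a trip to a separate memory. This is entirely illustrative (real processors mitigate the problem with caches); the instruction set and addresses are invented for the sketch.

```python
# Toy von Neumann machine: a processor that fetches everything from memory.
# Instructions live at addresses 0-3, data at addresses 100-102.

memory = {
    0: ("LOAD", 100),    # fetch operand from address 100 into the accumulator
    1: ("ADD", 101),     # fetch another operand and add it
    2: ("STORE", 102),   # write the result back to memory
    3: ("HALT", None),
    100: 40, 101: 2, 102: 0,
}

def run(memory):
    pc, acc, traffic = 0, 0, 0
    while True:
        op, addr = memory[pc]   # every step costs a memory trip for the instruction
        traffic += 1
        if op == "LOAD":
            acc = memory[addr]; traffic += 1
        elif op == "ADD":
            acc += memory[addr]; traffic += 1
        elif op == "STORE":
            memory[addr] = acc; traffic += 1
        elif op == "HALT":
            return acc, traffic
        pc += 1
```

Even this three-instruction program makes seven round trips to memory; the shuttling, not the arithmetic, is where the time and energy go.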

IBM’s new chip doesn’t have separate memory and processing blocks, because its neurons and synapses intertwine the two functions. And it doesn’t work on data in a linear sequence of operations; individual neurons simply fire when the spikes they receive from other neurons cause them to.

Horst Simon, the deputy director of Lawrence Berkeley National Lab and an expert in supercomputing, says that until now the industry has focused on tinkering with the von Neumann approach rather than replacing it, for example by using multiple processors in parallel, or using graphics processors to speed up certain types of calculations. The new chip “may be a historic development,” he says. “The very low power consumption and scalability of this architecture are really unique.”

One downside is that IBM’s chip requires an entirely new approach to programming. Although the company last year announced a suite of tools geared toward writing code for the chip (see “IBM Scientists Show Blueprints for Brainlike Computing”), even the best programmers find learning to work with it bruising, says Modha: “It’s almost always a frustrating experience.” His team is working to create a library of ready-made blocks of code to make the process easier.

Asking the industry to adopt an entirely new kind of chip and way of coding may seem audacious. But IBM may find a receptive audience because it is becoming clear that current computers won’t be able to deliver much more in the way of performance gains. “This chip is coming at the right time,” says Simon.


Credit: Photo courtesy of IBM Research

