MIT Technology Review



Conventional computers have become very powerful, but they require enormous amounts of computing capacity and power to mimic tasks that humans take for granted. IBM’s Watson computing system, for example, famously beat two of the best human Jeopardy! players in a match this February. But it needed 16 terabytes of memory and a cluster of tremendously powerful servers to do so.

“The brain has solved these problems brilliantly, with just 10 watts of power,” says Kwabena Boahen, a professor of bioengineering at Stanford University who is not currently involved with the IBM project. “A machine with the intelligence we have could read and make connections, pull in information and make sense of it, rather than just make matches.”

How such a “cognitive computer” should be designed and how it should operate is controversial, however. After all, biologists still don’t understand how the brain works.

IBM has released only limited details about the workings and performance of its new chips. But project leader Dharmendra Modha says the chips go beyond previous work in this area by mimicking two aspects of the brain: the proximity of parts responsible for memory and computation (mimicked by the hardware) and the fact that connections between these parts can be made and unmade, and become stronger or weaker over time (accomplished by the software).
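IBM hasn’t published the learning rule its software uses, but the idea of connections that strengthen or weaken over time can be illustrated with a classic Hebbian update, in which a synapse strengthens when its pre- and post-synaptic neurons fire together and slowly decays otherwise. The following is a toy sketch of that principle, not a description of IBM’s actual system:

```python
import numpy as np

def hebbian_step(weights, pre_spikes, post_spikes, lr=0.1, decay=0.01):
    """One plasticity step on an (n_post x n_pre) weight matrix.

    Synapses whose pre- and post-synaptic neurons fired together
    are strengthened; all synapses decay slightly, so unused
    connections weaken over time.
    """
    weights = weights + lr * np.outer(post_spikes, pre_spikes)  # co-firing strengthens
    weights = weights * (1.0 - decay)                           # slow forgetting
    return np.clip(weights, 0.0, 1.0)                           # keep weights bounded

# Two pre-synaptic and two post-synaptic neurons, all synapses at 0.5.
w = np.full((2, 2), 0.5)
pre = np.array([1.0, 0.0])   # only pre-neuron 0 fired this step
post = np.array([1.0, 0.0])  # only post-neuron 0 fired this step
w = hebbian_step(w, pre, post)
# The synapse between the two co-firing neurons is now stronger
# than 0.5; every other synapse has decayed slightly below it.
```

The key point the hardware exploits is that this update touches only local state: each synapse needs its own weight plus the activity of the two neurons it connects, which is why placing memory next to computation pays off.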

The new chips contain 45-nanometer digital transistors built directly on top of a memory array. “It’s like having data storage next to each logic gate within the processor,” says Cornell University computer scientist Rajit Manohar, who’s collaborating with IBM on hardware designs. Critically, this means the chips consume 45 picojoules per “event,” mimicking the transmission of a pulse in a neural network. That’s about 1,000 times less power than a conventional computer consumes, says Gert Cauwenberghs, director of the Institute for Neural Computation at the University of California, San Diego.
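A back-of-the-envelope calculation shows what those figures imply. This is an illustration built only on the numbers quoted above (45 picojoules per event, a roughly 1,000-fold gap, and Boahen’s 10-watt brain), not IBM’s published methodology:

```python
# Energy figures quoted in the article.
PJ = 1e-12  # one picojoule in joules

chip_energy_per_event = 45 * PJ                        # 45 pJ per neural "event"
conventional_per_event = chip_energy_per_event * 1000  # ~1,000x gap -> ~45 nJ

# How many events per second fit inside a brain-like 10 W power budget?
brain_power_budget = 10.0  # watts
events_per_second = brain_power_budget / chip_energy_per_event

print(f"conventional: ~{conventional_per_event / 1e-9:.0f} nJ per event")
print(f"chip budget: ~{events_per_second:.1e} events/s at 10 W")
```

At 45 pJ per event, a 10-watt budget allows on the order of 10^11 events per second, which is why the energy-per-event figure, rather than raw clock speed, is the headline number here.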

So far the IBM team has demonstrated only very basic software on these chips, but they have laid the foundation for running more complex software on simpler computers than has been possible in the past. In 2009, Modha’s group ran simulations of a neural network as complex as a cat’s brain on a supercomputer. “They cut their teeth on massive simulations,” says Michael Arbib, director of the USC Brain Project. “Now they’ve come up with chips that may make it easier to [run cognitive computing software]—but they haven’t proven this yet,” he says.

Modha’s group started by modeling a system of mouse-like complexity, then worked up to a rat, a cat, and finally a monkey. Each time they had to switch to a more powerful supercomputer. And they were unable to run the simulations in real time, because of the separation between memory and processor that the new chip designs are intended to overcome. The new hardware should run this software faster, using less energy, and in a smaller space. “Our eventual goal is a human-scale cognitive-computing system,” Modha says.


Credit: IBM Research

