A microchip with about as much brain power as a garden worm might not seem very impressive, compared with the blindingly fast chips in modern personal computers. But a new microchip made by researchers at IBM represents a landmark. Unlike an ordinary chip, it mimics the functioning of a biological brain—a feat that could open new possibilities in computation.
Inside the brain, information is processed in parallel, and computation and memory are entwined. Each neuron is connected to many others, and the strength of these connections changes constantly as the brain learns. These dynamics are thought to be crucial to learning and memory, and they are what the researchers sought to mimic in silicon. Conventional chips, by contrast, process one bit after another and shunt information between a discrete processor and memory components. The bigger a problem is, the larger the number of bits that must be shuffled around.
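The idea that connection strengths change as co-active neurons reinforce one another can be sketched with a simple Hebbian learning rule. This is a hypothetical illustration of the general principle the researchers are mimicking, not IBM's actual design or software:

```python
# Minimal sketch of synaptic plasticity: each "neuron" is linked to the
# others by a weight, and a weight strengthens whenever the two neurons
# it connects are active at the same time (a basic Hebbian rule).

def hebbian_update(weights, activity, learning_rate=0.1):
    """Strengthen the connection between every pair of co-active neurons."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                # Connections grow only when both neurons fire together.
                weights[i][j] += learning_rate * activity[i] * activity[j]
    return weights

# Three neurons; neurons 0 and 1 fire together, neuron 2 stays silent.
weights = [[0.0] * 3 for _ in range(3)]
weights = hebbian_update(weights, activity=[1, 1, 0])
```

After one update, the link between the two co-active neurons has strengthened while connections to the silent neuron are unchanged, which is the kind of dynamic the IBM chips implement in a mix of hardware and software.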
The IBM researchers have built and tested two demonstration chips that store and process information in a way that mimics a natural nervous system. The company says these early chips could be the building blocks for something much more ambitious: a computer the size of a shoebox that has about half the complexity of a human brain and consumes just one kilowatt of power. This is being developed with $21 million in funding from the Defense Advanced Research Projects Agency, in collaboration with several universities.
The company’s researchers and their academic collaborators will present two papers next month at the Custom Integrated Circuits Conference in San Jose, California, showing that the chip designs have very low power requirements and work with neural-circuit-mimicking software. In one experiment, a “neural core,” as the new chips are called, learns to play Pong; in another, it learns to navigate a car around a simple race track; and in a third, it learns to recognize images.
Conventional computers have become very powerful, but they require huge amounts of memory and power to perform tasks that humans take for granted. IBM’s Watson computing system, for example, famously beat two of the best human Jeopardy! players in a match this February. But it needed 16 terabytes of memory and a cluster of tremendously powerful servers to do so.
“The brain has solved these problems brilliantly, with just 10 watts of power,” says Kwabena Boahen, a professor of bioengineering at Stanford University who is not currently involved with the IBM project. “A machine with the intelligence we have could read and make connections, pull in information and make sense of it, rather than just make matches.”
How such a “cognitive computer” should be designed and how it should operate is controversial, however. After all, biologists still don’t understand how the brain works.
IBM has released only limited details about the workings and performance of its new chips. But project leader Dharmendra Modha says the chips go beyond previous work in this area by mimicking two aspects of the brain: the proximity of parts responsible for memory and computation (mimicked by the hardware) and the fact that connections between these parts can be made and unmade, and become stronger or weaker over time (accomplished by the software).
The new chips contain 45-nanometer digital transistors built directly on top of a memory array. “It’s like having data storage next to each logic gate within the processor,” says Cornell University computer scientist Rajit Manohar, who’s collaborating with IBM on hardware designs. Critically, this means the chips consume just 45 picojoules per “event,” the chip’s equivalent of a pulse traveling through a neural network. That’s about 1,000 times less power than a conventional computer consumes, says Gert Cauwenberghs, director of the Institute for Neural Computation at the University of California, San Diego.
So far the IBM team has demonstrated only very basic software on these chips, but they have laid the foundation for running more complex software on simpler computers than has been possible in the past. In 2009, Modha’s group ran simulations of a neural network as complex as a cat’s brain on a supercomputer. “They cut their teeth on massive simulations,” says Michael Arbib, director of the USC Brain Project. “Now they’ve come up with chips that may make it easier to [run cognitive computing software]—but they haven’t proven this yet,” he says.
Modha’s group started by modeling a system of mouse-like complexity, then worked up to a rat, a cat, and finally a monkey. Each time they had to switch to a more powerful supercomputer. And they were unable to run the simulations in real time, because of the separation between memory and processor that the new chip designs are intended to overcome. The new hardware should run this software faster, using less energy, and in a smaller space. “Our eventual goal is a human-scale cognitive-computing system,” Modha says.