Silicon Brains

Computer chips designed to mimic how the brain works could shed light on our cognitive abilities.

Unlike most neuroscience labs, Kwabena Boahen’s lab at Stanford University is spotless: no scattered pipettes or jumbled arrays of chemical bottles. Instead, a lone circuit board, housing a very special chip, sits on a bare lab bench. The transistors in a typical computer chip are arranged for maximal processing speed, but this microprocessor features clusters of tiny transistors designed to mimic the electrical properties of neurons. The transistors are arranged to behave like cells in the retina, the cochlea, or even the hippocampus, a spot deep in the brain that sorts and stores information.

Kwabena Boahen is an associate professor of bioengineering at Stanford University and head of the neuroscience lab that developed the computer chip.

Boahen is part of a small but growing community of scientists and engineers using a process they call “neuromorphing” to build complicated electronic circuits meant to model the behavior of neural circuits. Their work takes advantage of anatomical diagrams of different parts of the brain generated through years of painstaking animal studies by neuroscientists around the world. The hope is that hardwired models of the brain will yield insights difficult to glean through existing experimental techniques. “Brains do things in technically and conceptually novel ways which we should be able to explore,” says Rodney Douglas, a professor at the Institute of Neuroinformatics, in Zurich. “They can solve rather effortlessly issues which we cannot yet resolve with the largest and most modern digital machines. One of the ways to explore this is to develop hardware that goes in the same direction.”

Among the most intriguing aspects of the brain is its capacity to form memories, something that has fascinated neuroscientists for decades. That capacity appears to be rooted in the hippocampus, damage to which can lead to amnesia.

Extensive studies of neurons in the hippocampus and other parts of the brain have shed some light on how neural behavior gives rise to memories. Neurons encode information in the form of electrical pulses that can be transmitted to other neurons. When two connected neurons repeatedly fire in close succession, the connection between them is strengthened, so that the firing of the first helps trigger the firing of the second. As this process, known to neuroscientists as Hebbian learning, occurs in multiple neighboring cells, it creates webs of connections between different neurons, encoding and linking information.
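In computational terms, the Hebbian rule is simple: a connection's strength grows with how often the two cells it joins are active together. The Python sketch below illustrates the idea in its most reduced, rate-based form; the number of units, the learning rate, and the activity pattern are invented for the example and are not taken from the Stanford chip.

```python
import numpy as np

def hebbian_update(weights, activity, learning_rate=0.01):
    """Grow w[i, j] in proportion to the co-activity of units i and j."""
    weights = weights + learning_rate * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)  # ignore self-connections
    return weights

rng = np.random.default_rng(0)
n_units = 8
weights = np.zeros((n_units, n_units))               # no associations yet
pattern = (rng.random(n_units) > 0.5).astype(float)  # one activity pattern

for _ in range(100):  # the same cells fire together, over and over
    weights = hebbian_update(weights, pattern)

# Pairs of units that were repeatedly active together now share strong
# connections; every other weight is still zero.
print(weights.round(2))
```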

To better understand how this works, Boahen and graduate student John Arthur developed a chip based on a layer of the hippocampus known as CA3. Sandwiched between two other cellular layers, one that receives input from the cortex and one that sends information back out again, CA3 is thought to be where memory actually happens: where information is stored and linked. Pointing to a diagram of the chip’s architecture, Boahen explains that each model cell on the chip is made up of a cluster of transistors designed to mimic the electrical activity of a neuron. The silicon cells are arranged in a 32-by-32 array, and each of them is programmed to connect weakly to 21 neighboring cells. To start with, the connections between the cells are turned off, mimicking “silent synapses.” (A synapse is a junction between neurons; a silent synapse is one where, if a given neural cell fires, it transmits a slight change in electrical activity to its neighbors, but not enough to trigger the propagation of an electrical signal.)
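The wiring Boahen describes translates readily into software. Below is a minimal sketch, assuming only what the article states: a 32-by-32 grid, 21 connections per cell, and all synapses starting silent. Since the article does not say how the 21 neighbors are chosen, the sketch simply takes the 21 nearest cells on the grid as a placeholder, and it ignores the clipping that cells at the edge of the array would need.

```python
import numpy as np

GRID = 32          # 32-by-32 array of silicon cells, as on the chip
N_NEIGHBORS = 21   # each cell connects to 21 neighbors; the neighborhood
                   # shape is an assumption: here, the 21 nearest cells

def neighbor_offsets(k=N_NEIGHBORS):
    """Offsets to the k nearest grid cells (excluding the cell itself)."""
    candidates = [(dx, dy) for dx in range(-3, 4) for dy in range(-3, 4)
                  if (dx, dy) != (0, 0)]
    candidates.sort(key=lambda d: (d[0] ** 2 + d[1] ** 2, d))  # nearest first
    return candidates[:k]

OFFSETS = neighbor_offsets()

# weights[x, y, k] is the strength of cell (x, y)'s k-th outgoing
# connection. Everything starts at zero: every synapse begins silent.
weights = np.zeros((GRID, GRID, len(OFFSETS)))
```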

However, Boahen explains, the chip has the ability to change the strength of these connections, imitating what happens with neurons during Hebbian learning. The silicon cells monitor when their neighbors fire. If a cell fires just before its neighbor does, then the programmed connection between the two cells is strengthened. “We want to capture the associative memory function, so we want connections between the cells to turn on or off depending on whether cells are activated together,” Boahen says.
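That rule is pairwise and local: each silicon cell needs to know only how recently its neighbors fired. Here is a hedged sketch of such a timing rule; the time window, the step size, and the on_spike bookkeeping are illustrative inventions, since the article gives none of the chip's actual parameters.

```python
WINDOW = 5.0  # assumed: how recently the upstream cell must have fired
STEP = 1.0    # assumed: one pairing fully turns a silent synapse on

def on_spike(cell, t, last_spike, weights, incoming):
    """Record that `cell` fired at time `t`, strengthening connections
    from any upstream neighbor that fired just before it.

    `incoming` maps each cell to its upstream neighbors; `weights`
    maps (pre, post) pairs to connection strengths.
    """
    for pre in incoming[cell]:
        if pre in last_spike and 0.0 < t - last_spike[pre] < WINDOW:
            weights[(pre, cell)] = min(1.0, weights[(pre, cell)] + STEP)
    last_spike[cell] = t

weights = {("a", "b"): 0.0}        # one silent synapse, cell a -> cell b
incoming = {"a": [], "b": ["a"]}
last_spike = {}

on_spike("a", 1.0, last_spike, weights, incoming)
on_spike("b", 2.0, last_spike, weights, incoming)
print(weights)  # a fired just before b, so the a -> b synapse turned on
```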

Sitting at his desk with the circuit board and a laptop in front of him, Arthur, who is now a postdoc in Boahen’s lab, demonstrates the chip’s ability to remember. First he sends electrical signals to the chip from the laptop, which also records the output of the chip’s silicon neurons. He repeatedly triggers activity only in neurons that form a U shape on the array; his laptop screen shows flashes of light that reproduce that pattern, representing the activity in the chip. Each neuron fires at a slightly different time, constantly monitoring the firing of its 21 connected neighbors. Gradually, connections between the neurons that make up the U are strengthened: the chip has “learned” the pattern. When Arthur then triggers activity in just the upper left corner of the U, flashes of light on the screen spontaneously re-create the rest of the pattern, as electrical activity spreads among silicon neurons on the chip. The chip has effectively recalled the rest of the U.
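What Arthur demonstrates is associative pattern completion, and its logic can be reproduced in a few lines. The toy version below shrinks the array, uses simple nearest-neighbor connections, and spreads activity with a bare threshold rule; these are assumptions made for the sake of the example, not the chip's real mechanics.

```python
import numpy as np

GRID = 8  # a small stand-in for the chip's 32-by-32 array

def u_pattern():
    """A U shape: two vertical strokes joined along the bottom row."""
    p = np.zeros((GRID, GRID), dtype=bool)
    p[1:7, 1] = p[1:7, 6] = True  # the two sides of the U
    p[6, 1:7] = True              # the bottom
    return p

def neighbors(x, y):
    """The (up to) eight cells adjacent to (x, y)."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0) and 0 <= x + dx < GRID and 0 <= y + dy < GRID:
                yield x + dx, y + dy

# "Learning": repeatedly co-activating the U turns on the synapses
# between pattern cells that sit next to each other.
pattern = u_pattern()
on_cells = list(zip(*np.nonzero(pattern)))
learned = {(a, b) for a in on_cells for b in neighbors(*a) if pattern[b]}

# Recall: cue only the top of the U's left stroke, then let activity
# spread across the learned connections until nothing changes.
active = np.zeros((GRID, GRID), dtype=bool)
active[1:3, 1] = True  # the partial cue
while True:
    spread = active.copy()
    for x in range(GRID):
        for y in range(GRID):
            if not spread[x, y] and any(active[b] and (b, (x, y)) in learned
                                        for b in neighbors(x, y)):
                spread[x, y] = True
    if (spread == active).all():
        break
    active = spread

# Prints the full U, recovered from the corner cue.
print("\n".join("".join("#" if active[i, j] else "." for j in range(GRID))
                for i in range(GRID)))
```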

The Stanford researchers plan to add circuitry to the chip so that it will also model a layer of the hippocampus known as the dentate, which receives signals from the cortex and sends them to CA3. They hope this model will be able to lay down memories that are even more complex. “We want to be able to give it an A and have it recall the whole alphabet,” says Boahen.

The team is also in the process of developing other neuromorphic chips. Its latest project, and the most ambitious neuromorphic effort anywhere to date, is a model of the cortex, the most recently evolved part of our brain. The intricate structure of the cortex allows us to perform complex computational feats, such as understanding language, recognizing faces, and planning for the future. The model’s first-generation design will consist of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons.

By creating chips that are able to mimic the cortex, the hippocampus, and the retina, Boahen hopes to better comprehend the brain and, eventually, to design neural prosthetics, such as an artificial retina. “Kwabena is one of the few people straddling two perspectives: those who want to engineer better chips and those who want to understand the brain,” says Terry Sejnowski, a computational neuroscientist at the Salk Institute in La Jolla, CA. “I think he’s one of those people who is ahead of his time.”

Emily Singer is the biotechnology and life sciences editor of Technology Review.

