
Single Artificial Neuron Taught to Recognize Hundreds of Patterns

Biologists have long puzzled over why neurons have thousands of synapses. Now neuroscientists have shown they are crucial not just for recognizing patterns but for learning the sequence in which they appear.

Artificial intelligence is a field in the midst of rapid, exciting change. That’s largely because of an improved understanding of how neural networks work and the creation of vast databases to help train them. The result is machines that have suddenly become better at things like face and object recognition, tasks in which humans have always held the upper hand (see “Teaching Machines to Understand Us”).

But there’s a puzzle at the heart of these breakthroughs. Although neural networks are ostensibly modeled on the way the human brain works, the artificial neurons they contain are nothing like the ones at work in our own wetware. Artificial neurons, for example, generally have just a handful of synapses and entirely lack the short, branched nerve extensions known as dendrites and the thousands of synapses that form along them. Indeed, nobody really knows why real neurons have so many synapses.

Today, that changes thanks to the work of Jeff Hawkins and Subutai Ahmad at Numenta, a Silicon Valley startup focused on understanding and exploiting the principles behind biological information processing. The breakthrough these guys have made is to come up with a new theory that finally explains the role of the vast number of synapses in real neurons and to create a model based on this theory that reproduces many of the intelligent behaviors of real neurons.

Real neurons consist of a cell body, known as the soma, that contains the cell nucleus. Extending from it are the dendrites, short branched projections that receive signals, and the axon, a fine cable-like projection that can extend many centimeters to connect to other neurons. Dendrites close to the soma are known as proximal dendrites; the branches farther out along the dendritic tree are known as distal dendrites because of their distance from the soma.

Proximal and distal dendrites all make thousands of connections, called synapses, with the axons of other nerve cells. These connections famously influence the rate at which the nerve cell produces electrical signals known as spikes.

The consensus is that a neuron “learns” by recognizing certain patterns of connections among its synapses and fires when it sees one of these patterns.

But while it’s easy to understand how proximal synapses can influence the cell body and the rate of firing, it’s hard to understand how distal synapses can do the same thing, because they are so far away.

Hawkins and Ahmad now say they know what’s going on. Their new idea is that distal and proximal synapses play entirely different roles in the process of learning. Proximal synapses play the conventional role of triggering the cell to fire when certain patterns of connections crop up.

This is the conventional process of learning. “We show that a neuron can recognize hundreds of patterns even in the presence of large amounts of noise and variability as long as overall neural activity is sparse,” say Hawkins and Ahmad.
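To get a feel for why sparsity matters here, consider the minimal sketch below. It is my own illustration, with made-up population sizes and thresholds rather than code from the paper: a dendritic segment forms synapses onto a small random sample of the cells active in a pattern and “recognizes” that pattern whenever enough of those synapses are active again. Because random sparse patterns barely overlap, hundreds of such segments can store hundreds of patterns and still tolerate plenty of noise.

```python
import random

N_CELLS = 2048              # size of the hypothetical input population
PATTERN_SIZE = 40           # ~2% of cells active at once: "sparse" activity
SYNAPSES_PER_SEGMENT = 20   # a segment samples only a subset of a pattern
MATCH_THRESHOLD = 12        # overlap needed for a segment to "recognize"

def sparse_pattern():
    """A random sparse activation: the set of currently active cells."""
    return set(random.sample(range(N_CELLS), PATTERN_SIZE))

def learn_segment(pattern):
    """A segment 'learns' a pattern by forming synapses onto a random
    subset of the cells that were active in it."""
    return set(random.sample(sorted(pattern), SYNAPSES_PER_SEGMENT))

def matches(segment, active_cells):
    """The segment recognizes the input if enough of its synapses line up."""
    return len(segment & active_cells) >= MATCH_THRESHOLD

def add_noise(pattern, flips=10):
    """Corrupt a pattern: silence some active cells, activate some others."""
    kept = set(random.sample(sorted(pattern), PATTERN_SIZE - flips))
    extra = set(random.sample(range(N_CELLS), flips))
    return kept | extra

if __name__ == "__main__":
    patterns = [sparse_pattern() for _ in range(300)]
    segments = [learn_segment(p) for p in patterns]  # one segment per pattern

    hits = sum(matches(s, add_noise(p)) for s, p in zip(segments, patterns))
    false_hits = sum(matches(segments[0], add_noise(p)) for p in patterns[1:])
    print(f"{hits}/300 noisy patterns still recognized, "
          f"{false_hits} false matches for the first segment")
```

Even with a quarter of each pattern corrupted, the matching segment almost always crosses the threshold, while the odds of an unrelated segment doing so are vanishingly small. That is the essence of the noise robustness the authors describe.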

But distal synapses do something else. They also recognize when certain patterns are present, but do not trigger firing. Instead, they influence the electric state of the cell in a way that makes firing more likely if another specific pattern occurs. So distal synapses prepare the cell for the arrival of other patterns. Or, as Hawkins and Ahmad put it, these synapses help the cell predict what the next pattern sensed by the proximal synapses will be.

That’s hugely important. It means that in addition to learning when a specific pattern is present, the cell also learns the sequence in which patterns appear. “We show how a network of neurons with this property will learn and recall sequences of patterns,” they say.
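As a rough illustration of that division of labor, the toy model below (again my own sketch, reusing the made-up sizes from the example above; it is not the HTM algorithm itself) gives each cell a proximal pattern that makes it fire and a distal pattern that merely puts it into a “predictive” state. Feed it the sequence A, B, C and each cell fires exactly when its pattern arrives, having already been primed by the one before.

```python
import random

N_CELLS, PATTERN_SIZE, THRESHOLD = 2048, 40, 12

def sparse_pattern():
    return set(random.sample(range(N_CELLS), PATTERN_SIZE))

class ToyNeuron:
    """Proximal input decides firing; distal input only primes a prediction."""
    def __init__(self, proximal, distal):
        self.proximal = proximal      # pattern this cell responds to
        self.distal = distal          # pattern it expects to see just before
        self.predictive = False       # depolarized-but-not-firing state

    def step(self, active_cells):
        was_predicted = self.predictive
        fired = len(self.proximal & active_cells) >= THRESHOLD
        # Distal segments look at the current input to predict the next step.
        self.predictive = len(self.distal & active_cells) >= THRESHOLD
        return fired, was_predicted

if __name__ == "__main__":
    A, B, C = sparse_pattern(), sparse_pattern(), sparse_pattern()
    # One cell per element of the sequence: B's cell listens distally for A, etc.
    cells = {"B": ToyNeuron(proximal=B, distal=A),
             "C": ToyNeuron(proximal=C, distal=B)}

    for name, pattern in [("A", A), ("B", B), ("C", C)]:
        for label, cell in cells.items():
            fired, predicted = cell.step(pattern)
            if fired:
                print(f"input {name}: cell {label} fired "
                      f"({'predicted' if predicted else 'unexpected'})")
```

Present B without A first and the same cell still fires, but flagged as “unexpected”; that distinction is what lets a network of such cells tell a familiar sequence from a novel one.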

What’s more, they show that all this works well, even in the presence of large amounts of noise, as is always the case in biological systems.

That’s a significant new way of thinking about neurons and one that reproduces some of the key features of information processing in the human brain. For example, Hawkins and Ahmad show that this system doesn’t remember every detail of every pattern in a sequence but instead stores the difference between one pattern and the next.

So what’s important is not the total amount of information in a pattern but the difference between this pattern and the next.
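Taken at face value, that claim suggests something like the delta encoding sketched below. This is my reading of the article’s description, not code from the paper: keep the first pattern and then only the change from each pattern to the next, so the cost of a sequence tracks how much consecutive patterns differ rather than how large each one is.

```python
def deltas(sequence):
    """Store a sequence of sets as (first pattern, list of symmetric differences)."""
    first = sequence[0]
    return first, [b ^ a for a, b in zip(sequence, sequence[1:])]

def replay(first, diffs):
    """Rebuild the full sequence from the stored differences."""
    out, current = [first], first
    for d in diffs:
        current = current ^ d
        out.append(current)
    return out

if __name__ == "__main__":
    seq = [{1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 5, 6}]   # overlapping sparse patterns
    first, diffs = deltas(seq)
    assert replay(first, diffs) == seq
    print("stored deltas:", [sorted(d) for d in diffs])  # each delta is tiny
```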

That’s an interesting property that may help to explain another puzzling feature of human memory called chunking. This is the observation that, on average, humans can store about seven chunks of information in their working memories. These chunks can be things like digits, letters or even words but whatever they are, humans can remember only about seven of them (plus or minus two!).

But here’s the thing—the information content of a single word, such as “synapse,” is significantly greater than the information content of a single digit, such as a “7.” The puzzle is that nobody knows how the brain manages to hold the information in seven words as easily as it holds the information in seven digits.

But in Hawkins and Ahmad’s new model this problem disappears. The brain isn’t storing the information related to the word or digit, only the difference between them, which can be significantly less. That should lead to some testable hypotheses about the nature of memory.

The new model leads to other testable hypotheses too. For example, the model only works when there are a few synapses between the axon of one neuron and a dendrite of another. If there were too many synapses, it wouldn’t be possible to distinguish one pattern from another and all patterns would look the same.

If Hawkins and Ahmad’s model is correct, this cannot happen in real neurons. “To prevent this from happening we predict the existence of a mechanism that actively discourages the formation of multiple synapses after one has been established,” they say.

That’s an unusual thing in biology—a testable hypothesis. But it is one that surely gives neuroscientists something to look for with their magnifying glasses.

One final point is that this new thinking does not come from an academic environment but from a Silicon Valley startup. This company is the brainchild of Jeff Hawkins, an entrepreneur, inventor, and neuroscientist. Hawkins invented the Palm Pilot in the 1990s and has since turned his attention to neuroscience full-time.

That’s an unusual combination of expertise but one that makes it highly likely that we will see these new artificial neurons at work on real world problems in the not too distant future. Incidentally, Hawkins and Ahmad call their new toys Hierarchical Temporal Memory neurons or HTM neurons. Expect to hear a lot more about them.

Ref: arxiv.org/abs/1511.00083: Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex
