
Intel Reveals Neuromorphic Chip Design

Intel’s goal is to build chips that work more like the human brain. Now its engineers think they know how

The brain is the most extraordinary of computing machines. It carries out tasks as a matter of routine that would fry the circuits of the most powerful supercomputers on the planet: walking, talking, recognising, analysing and so on.

And where supercomputers require enough juice to power a small town, the human brain does all its work using little more than the energy in a bowl of porridge.

So it's no surprise that computer scientists would like to understand the brain and copy its abilities. There's a problem, however. The brain is built from neurons, and these work in a rather different way from the silicon transistor-based circuits that lie under the bonnet of conventional chips.

Of course, computer scientists can simulate the behaviour of neurons, and how they link together, on conventional computers. But this is a profoundly wasteful process: it cannot exploit the parallel processing and network effects that the brain clearly makes use of, and it eats power in the process.

So the race is on to develop a different kind of chip that more accurately mimics the way the brain works. So-called neuromorphic chips must be built from devices that behave like neurons—in other words they transmit and respond to information sent in spikes rather than in a continuously varying voltage.

(One reason the brain is so power efficient is that neural spikes charge only a small fraction of a neuron as they travel. By contrast, conventional chips keep each and every transmission line at a certain voltage all the time.)
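
This spike-based signalling can be sketched with a toy leaky integrate-and-fire neuron, a standard simplified neuron model; this is purely an illustration of spiking behaviour, not the circuit described in Intel's paper:

```python
# Toy leaky integrate-and-fire neuron: output is a sparse train of
# spikes rather than a continuously held voltage level.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires a spike."""
    v = 0.0            # membrane potential
    spikes = []
    for t, i in enumerate(inputs):
        v = leak * v + i       # leaky integration of the input current
        if v >= threshold:     # potential crosses threshold: fire, then reset
            spikes.append(t)
            v = 0.0
    return spikes

# A steady small input produces occasional spikes, not a sustained output.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

Between spikes the neuron does no signalling at all, which is the property that makes this scheme so frugal compared with lines held at a constant voltage.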

Today, Charles Augustine at Intel’s Circuit Research Laboratory in Hillsboro, Oregon, and a few pals unveil their design for a neuromorphic chip.

They base their design on two technologies: lateral spin valves and memristors. Lateral spin valves are tiny magnets connected by metal wires; the magnets can switch orientation depending on the spin of the electrons passing through them. We've looked at memristors many times on this blog: they are fundamental electronic devices that act like resistors with memory.
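
As a rough illustration of what "a resistor with memory" means, here is a toy memristor model; the parameter values and the linear state update are invented for illustration and are not taken from the paper:

```python
# Toy memristor: a resistor whose resistance depends on the total
# charge that has flowed through it in the past.
class Memristor:
    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on, self.r_off = r_on, r_off  # bounding resistances (ohms)
        self.w = 0.0                         # internal state in [0, 1]

    def apply_charge(self, dq, k=0.05):
        # Passing charge moves the internal state, clamped to its limits.
        self.w = min(1.0, max(0.0, self.w + k * dq))

    @property
    def resistance(self):
        # Resistance interpolates between r_off and r_on with the state.
        return self.r_off + (self.r_on - self.r_off) * self.w

m = Memristor()
before = m.resistance   # 16000.0: no charge has passed yet
m.apply_charge(10.0)    # push charge through the device
after = m.resistance    # 8050.0: the device "remembers" the charge
```

The key point is that the device's current resistance encodes its history, which is what makes memristors attractive as synthetic synapses.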

Augustine and co argue that the architecture they've designed works in a similar way to neurons and can therefore be used to test various ways of reproducing the brain's processing ability.

The icing on the cake, they say, is that spin valves operate at terminal voltages measured in millivolts, significantly lower than those of conventional chips.

They claim this translates into a dramatic energy saving. “We show that the spin-based neuromorphic designs can achieve 15X-300X lower computation energy,” they say. (What they actually mean is that they ‘tell’ us that this kind of saving is possible since there is little in the way of a demonstration in their paper.)

They also say the new design is ideally suited for the kind of processing tasks that brains do rather well: analog data sensing, cognitive computing, associative memory and so on.

Intel’s new chip design certainly looks to be an improvement over existing ones but it is still orders of magnitude away from the computational efficiency that real neurons achieve.

Clearly, recent advancements in memristor technology and spintronics are making possible entirely new ways to design chips. However, there’s a long way to go before synthetic systems can begin to match the capability of natural ones.

Ref: arxiv.org/abs/1206.3227: Proposal For Neuromorphic Hardware Using Spin Devices
