A new computer chip mimics the neurocircuitry of our noses to smell
Of all the things our brain can do, the way it helps us smell is one of the best understood. When an odor hits the olfactory cells in our nose, they send a signal to the corresponding cluster of neurons in the brain known as the olfactory bulb. The bulb then ferries the signal out to other parts of the brain, allowing us to appreciate the perfume of a grapefruit or avoid the stench of trash.
Olfactory bulbs are specific to mammals, but other animals, including insects, have evolved similar neural structures. That convergence suggests “there’s probably something fairly fundamental and efficient about these implementations if evolution has arrived at them in different cases,” says Mike Davies, the director of Intel’s Neuromorphic Computing Lab.
Both because they are so efficient and because we understand them so well, olfaction systems are a great starting point for neuromorphic chips, a new type of computing hardware that takes inspiration directly from the structure of the brain.
On Monday, scientists at Intel published a paper in Nature that proposes a new neuromorphic chip design that mimics the structure and capabilities of the olfactory bulb. The researchers worked with olfactory neurophysiologists who study the brains of animals as they smell. Based on the neural circuits that activate when an animal’s brain processes an odor, they designed an electrical circuit that could be etched onto a silicon chip. They also designed an algorithm that mirrors the behavior of the electrical signals pulsing through that circuit. When they trained the algorithm on the chip using an existing data set of 10 “smells”—each characterized by readings from 72 different chemical sensors—it was able to accurately distinguish between them with far fewer training samples than a conventional chip.
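Intel’s chip runs a spiking neural network, which is not reproduced here. But to make the task itself concrete, here is a rough, hypothetical sketch of the same classification problem posed in conventional terms: tell 10 odors apart from 72-channel chemical-sensor readings after seeing only one noisy sample of each. All data in this sketch is synthetic; it illustrates the few-shot setting, not Intel’s method.

```python
# Illustrative sketch only -- NOT Intel's spiking-neural algorithm.
# A nearest-centroid baseline for the task described above:
# classify odors from 72-channel sensor readings using
# a single training sample per odor (few-shot learning).
import numpy as np

rng = np.random.default_rng(0)
NUM_ODORS, NUM_SENSORS = 10, 72

# Hypothetical data: each odor has a fixed 72-dimensional
# sensor signature, observed with additive noise.
signatures = rng.normal(size=(NUM_ODORS, NUM_SENSORS))

def sample(odor, noise=0.1):
    """Return one noisy sensor reading for the given odor."""
    return signatures[odor] + noise * rng.normal(size=NUM_SENSORS)

# "Train" on just one sample per odor.
centroids = np.stack([sample(o) for o in range(NUM_ODORS)])

def classify(reading):
    # Assign the reading to the nearest stored centroid
    # by Euclidean distance.
    return int(np.argmin(np.linalg.norm(centroids - reading, axis=1)))

# Evaluate on fresh noisy samples.
correct = sum(classify(sample(o)) == o for o in range(NUM_ODORS))
print(f"{correct}/{NUM_ODORS} correct")
```

With sensor noise this small relative to the spacing between odor signatures, even one training sample per class separates them; the interesting regime, and the one where Intel reports its chip excels, is when noise and sample counts make such simple baselines fail.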
The chip is still a relatively early-stage prototype, but once mature it could serve a number of applications, such as bomb sniffing or the detection of noxious fumes in chemical plants. It also demonstrates the potential of neuromorphic computing for more data-efficient AI.
Currently the most popular chips for running state-of-the-art deep-learning algorithms all follow a von Neumann architecture, a design convention that has powered the computing revolution for decades. But these architectures are inefficient learners: the algorithms that run on them require massive amounts of training data, in contrast to our far more efficient brains. Neuromorphic chips, by contrast, try to preserve the brain’s structure as much as possible, on the theory that close mimicry will increase the chip’s learning efficiency. Indeed, Intel successfully got its chip to learn from very few samples.
Moving forward, the research team plans to improve the design of its neuromorphic chip and apply it to other functions of the brain beyond smell. Davies says the team will likely turn its attention to vision or touch next but has longer-term ambitions to tackle more complex processes. “Our sensing mechanisms are the natural place to start because these are well understood,” he says. “But in a sense we’re working our way in and into the brain, up to the higher-order thought processes that happen.”