This week, Intel will show off a chip that learns to recognize objects in pictures captured by a webcam. Nothing fancy about that, except that the chip uses about a thousandth as much power as a conventional processor.
The device, called Loihi, which Intel is putting through its paces at the Consumer Electronics Show (CES) in Las Vegas, is a neuromorphic chip—one that mimics, in a simplified way, the functioning of neurons and synapses in the brain.
The best AI algorithms already use brain-like programs called simulated neural networks, which rely on parallel processing to recognize patterns in data—including objects in images and words in speech. Neuromorphic chips take this idea further by etching the workings of neural networks into silicon. They are less flexible and powerful than the best general-purpose chips, but being specialized to their task makes them very energy efficient, and thus ideal for mobile devices, vehicles, and industrial equipment.
The idea of neuromorphic chips has been around for decades, but the technology may finally be ready to find its commercial niche. Across the tech industry, progress in AI has inspired new research into hardware capable of using machine-learning algorithms more efficiently.
Chris Eliasmith, a professor who studies neuroscience and computer architectures at the University of Waterloo in Canada, says the biggest challenge with neuromorphic chips in the past has been scaling them up. “This is one thing I really like about Intel entering the space,” he says. “They have the resources to push things ahead quickly.”
The chip is part of Intel’s attempt to reinvent itself. The company can no longer bank on delivering ever-faster processors, as Moore’s Law is bumping up against the laws of physics (see “Moore’s Law Is Dead. Now What?”). Meanwhile, the publicity for the new device offers some respite from the fallout caused by a recently revealed security flaw affecting hundreds of millions of Intel chips.
The company also announced at CES that it has built a relatively large new quantum computing chip, a device that exploits the weird and wonderful rules of quantum physics to do certain types of computation with incredible speed. That chip, called Tangle Lake, contains 49 quantum bits, or “qubits.”
Intel hasn’t yet revealed other details, including how reliable the quantum chip is. The news puts the company on a par with IBM, which recently unveiled a 50-qubit chip. However, while the devices are approaching a stage where they are capable of performing useful work, it isn’t yet clear how they might be used beyond a few niches such as cracking codes and modeling materials.