This week, Intel will show off a chip that learns to recognize objects in pictures captured by a webcam. Nothing fancy about that, except that the chip uses about a thousandth as much power as a conventional processor.
The device, called Loihi, which Intel is putting through its paces at the Consumer Electronics Show (CES) in Las Vegas, is a neuromorphic chip—one that mimics, in a simplified way, the functioning of neurons and synapses in the brain.
The best AI algorithms already use brain-like programs called simulated neural networks, which rely on parallel processing to recognize patterns in data—including objects in images and words in speech. Neuromorphic chips take this idea further by etching the workings of neural networks into silicon. They are less flexible and powerful than the best general-purpose chips, but being specialized to their task makes them very energy efficient, and thus ideal for mobile devices, vehicles, and industrial equipment.
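To make the idea concrete: chips like Loihi implement spiking neurons, which accumulate incoming signals over time and fire only when a threshold is crossed. The sketch below is a minimal leaky integrate-and-fire neuron in Python; it is purely illustrative, and the parameter values and function name are arbitrary choices, not anything from Intel's design.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron: the style of
# spiking model that neuromorphic chips realize in silicon. Because a
# neuron does nothing until enough input arrives, most of the circuit
# sits idle most of the time, which is where the power savings come from.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input each time step, leak stored charge, and emit a
    spike (1) whenever the membrane potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)       # fire a spike
            potential = reset      # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady weak input makes the neuron fire periodically.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

In hardware, each such neuron is a tiny analog or digital circuit, and events (spikes) rather than clocked number-crunching drive the computation.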
The idea of neuromorphic chips has been around for decades, but the technology may finally be ready to find its commercial niche. Across the tech industry, progress in AI has inspired new research into hardware capable of using machine-learning algorithms more efficiently.
Chris Eliasmith, a professor who studies neuroscience and computer architectures at the University of Waterloo in Canada, says the biggest challenge with neuromorphic chips in the past has been scaling them up. “This is one thing I really like about Intel entering the space,” he says. “They have the resources to push things ahead quickly.”
The chip is part of Intel’s attempt to reinvent itself. The company can no longer bank on delivering ever-faster processors, as Moore’s Law is bumping up against the laws of physics (see “Moore’s Law Is Dead. Now What?”). Meanwhile, the publicity for the new device offers some respite from the fallout caused by a recently revealed security flaw affecting hundreds of millions of Intel chips.
The company also announced at CES that it has built a relatively large new quantum computing chip, a device that exploits the weird and wonderful rules of quantum physics to do certain types of computation with incredible speed. That chip, called Tangle Lake, contains 49 quantum bits, or “qubits.”
Intel hasn’t yet revealed other details, including how reliable the quantum chip is. The news puts the company on a par with IBM, which recently unveiled a 50-qubit chip. However, while the devices are approaching a stage where they are capable of performing useful work, it isn’t yet clear how they might be used beyond a few niches such as cracking codes and modeling materials.