Artificial intelligence

Google Researchers Have a New Alternative to Traditional Neural Networks

November 1, 2017

Say hello to the capsule network.

AI has enjoyed huge growth in the past few years, and much of that success is owed to deep neural networks, which provide the smarts behind impressive tricks like image recognition. But there is growing concern that some of the fundamental principles that have made those systems so successful may not be able to overcome the major problems facing AI—perhaps the biggest of which is a need for huge quantities of data from which to learn (for a deep dive on this, check out our feature "Is AI Riding a One-Trick Pony?").

Google’s Geoff Hinton appears to be among those fretting about AI's future. As Wired reports, Hinton has unveiled a new take on traditional neural networks that he calls capsule networks. In a pair of new papers—one published on the arXiv, the other on OpenReview—Hinton and a handful of colleagues explain how they work.

Their approach uses small groups of neurons, collectively known as capsules, which are organized into layers to identify things in video or images. When several capsules in one layer agree on having detected something, they activate a capsule at a higher level—and so on, until the network is able to make a judgment about what it sees. Each capsule is designed to detect a specific feature in an image in such a way that it can recognize that feature in different scenarios, such as from varying angles.
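That "agreement" step is the core of the routing mechanism described in the arXiv paper ("Dynamic Routing Between Capsules"). The sketch below is a minimal illustration of the idea, not the authors' implementation: the function names, array shapes, and three-iteration default are assumptions made for readability.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Shrink a vector's length into (0, 1) while keeping its direction.

    A capsule's output is a vector whose length encodes the probability
    that the feature it detects is present.
    """
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def routing_by_agreement(u_hat, n_iters=3):
    """Route between two capsule layers.

    u_hat has shape (n_lower, n_higher, dim): each lower-level capsule's
    prediction of each higher-level capsule's output. Lower capsules whose
    predictions agree end up routing their output to the same higher capsule.
    """
    n_lower, n_higher, _ = u_hat.shape
    logits = np.zeros((n_lower, n_higher))  # routing logits, start uniform
    for _ in range(n_iters):
        # Coupling coefficients: each lower capsule spreads its vote
        # across higher capsules via a softmax over the logits.
        c = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted votes per higher capsule
        v = squash(s)                            # candidate higher-capsule outputs
        # Agreement: predictions that align with v reinforce their route.
        logits += np.einsum('ijd,jd->ij', u_hat, v)
    return v  # shape (n_higher, dim)

# Toy usage: 6 lower capsules voting for 2 higher capsules in 4 dimensions.
rng = np.random.default_rng(0)
outputs = routing_by_agreement(rng.normal(size=(6, 2, 4)))
print(outputs.shape)  # (2, 4)
```

In the full network, the prediction vectors u_hat come from learned transformation matrices applied to the lower capsules' outputs; only the routing itself is shown here.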

Hinton claims that the approach, which has been decades in the making, should enable his networks to recognize objects in new situations using less training data than regular neural nets require.

In the papers published so far, capsule networks have been shown to keep up with regular neural networks when it comes to identifying handwritten characters, and they make fewer errors when trying to recognize previously observed toys from different angles. But for the moment, at least, they're still a bit slower than their traditional counterparts.

Now comes the interesting part. Will these systems provide a compelling alternative to traditional neural networks, or will they stall? We can expect the machine-learning community to implement the work, and fast, in order to find out. Either way, those concerned about the limitations of current AI systems can be heartened by the fact that researchers are pushing the boundaries to build new deep-learning alternatives.

