
Hawkins’s inspiration for the underlying technology comes from the neocortex, the wrinkled outer layer of our brains that’s responsible for activities such as speech, movement, and planning. Strikingly, those very different abilities emerge from a common architecture of neurons, rather than from different clumps of neurons with very specific features.

Hawkins has borrowed that idea of a common architecture. “We use primary sources on the neuroscience of that as a guide, so there’s a tremendous amount of biology in this,” he says. The algorithms simulate the several layers of neurons that process information in the neocortex. The bottom layer receives the raw input data and then passes a processed and condensed version up to the next layer of algorithms. “As information ascends that hierarchical model, it becomes abstracted from the original, and the most salient features are extracted,” says Itamar Arel, who works on machine learning at the University of Tennessee.
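As a rough illustration of that layered processing, the sketch below stacks a few layers, each of which condenses its input and hands a smaller, more abstract representation to the layer above. This is only a schematic of the idea described here, not Numenta's algorithms; the layer sizes and the random-projection encoding are invented for the example.

```python
import numpy as np

class Layer:
    def __init__(self, input_size, output_size, rng):
        # A random projection stands in for whatever learned encoding a real
        # system would use; it only demonstrates the shrinking representation.
        self.weights = rng.standard_normal((input_size, output_size))

    def process(self, x):
        # Condense the input into a smaller activation vector.
        return np.tanh(x @ self.weights)

rng = np.random.default_rng(0)
sizes = [64, 32, 16, 8]              # raw input -> progressively smaller layers
layers = [Layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]

signal = rng.standard_normal(64)     # raw input arriving at the bottom layer
for layer in layers:
    signal = layer.process(signal)   # each layer passes a condensed version upward
print(signal.shape)                  # (8,): the most abstracted representation
```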

The system’s ability to make predictions about unfolding events is rooted in its unique capacity for processing temporal, or time-dependent, data. Conventional learning software cannot do that, because it can’t handle input consisting of many variables that change over time. Instead, engineers generally have to extract the handful of variables they think are useful and feed them into the algorithms.
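To make that contrast concrete, here is a hypothetical example of the hand-built step the article describes: an engineer reduces a window of raw multivariate readings to a few summary variables before any learning happens. The specific features chosen are invented for illustration.

```python
import numpy as np

def extract_features(window):
    """window: array of shape (timesteps, sensors) of raw readings."""
    return np.array([
        window.mean(),                           # overall level
        window.std(),                            # variability
        window[-1].mean() - window[0].mean(),    # crude trend across the window
    ])

raw = np.random.default_rng(1).standard_normal((24, 5))  # 24 time steps, 5 sensors
features = extract_features(raw)
print(features)   # 3 hand-chosen numbers replace 120 raw values
```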

That “pre-processing” isn’t necessary in models inspired by studies of biological brains, Arel says. Instead, the learning system can decide for itself what is important and what isn’t. The approach belongs to an emerging field dubbed deep machine learning. “Most academic efforts are focused on processing images, though,” he says. “What’s unique about Numenta is that it’s able to handle temporal data, which opens up different kinds of applications.” Among the examples Hawkins envisions: businesses could better analyze human speech or patterns of electricity use in buildings.
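A toy example of learning directly from a temporal stream is sketched below: it watches a sequence of hypothetical hourly electricity readings and predicts which usage level tends to come next. It is a deliberately simple stand-in for the idea of temporal prediction, not Numenta's technology.

```python
from collections import defaultdict, Counter

class SequencePredictor:
    """Learns which discretized reading tends to follow which."""

    def __init__(self, n_bins=10, low=0.0, high=100.0):
        self.n_bins, self.low, self.high = n_bins, low, high
        self.transitions = defaultdict(Counter)   # current bin -> counts of next bins
        self.previous = None

    def _bin(self, value):
        width = (self.high - self.low) / self.n_bins
        return min(int((value - self.low) / width), self.n_bins - 1)

    def observe(self, value):
        current = self._bin(value)
        if self.previous is not None:
            self.transitions[self.previous][current] += 1
        self.previous = current

    def predict_next_bin(self):
        counts = self.transitions[self.previous]
        return counts.most_common(1)[0][0] if counts else None

# Hypothetical hourly kWh readings for a building.
readings = [12, 14, 35, 60, 62, 58, 30, 15, 13, 36, 61, 59, 31, 14]
model = SequencePredictor()
for kwh in readings:
    model.observe(kwh)
print(model.predict_next_bin())   # the usage bin most likely to follow the last reading
```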

But while this approach raises the prospect of systems that can learn about any kind of data rather than being specialized to just one task, Numenta still has to prove that its technology is widely applicable and cost-effective. It’s also unclear how the company will bring the technology to market, but it will probably be in the form of development tools rather than off-the-shelf products. “Now that the technology is really working,” Hawkins says, “next year will see us switch into product-development mode.”



