Jeff Hawkins has a track record at predicting the future. The founder of Palm and inventor of the PalmPilot, he spent the 1990s talking of a coming world in which we would all carry powerful computers in our pockets. “No one believed in it back then—people thought I was crazy,” he says. “Of course, I’m thrilled about how successful mobile computing is today.”
At his current firm, Numenta, Hawkins is working on another idea that seems to come out of left field: copying the workings of our own brains to build software that makes accurate snap decisions for today’s data-deluged businesses. He and his team have been working on their algorithms since 2005 and are finally preparing to release a version that is ready to be used in products. Numenta’s technology is aimed at a variety of applications, such as judging whether a credit card transaction is fraudulent, anticipating what a Web user will click next, or predicting the likelihood that a particular hospital patient will suffer a relapse.
“What those examples have in common is that they contain complex patterns that evolve over time,” says Hawkins. The algorithms can analyze and extrapolate from those patterns because they borrow techniques from parts of the human brain that have evolved to interpret complex data streaming in from our senses and use it to predict what might be coming.
Some companies are already putting Numenta’s latest approach to the test. Sm4rt Security Services, a computer security firm based in Mexico City, is one of them. “We were hired by one of the world’s top banks to prove this new technology was able to prevent card fraud,” says CEO Victor Chapela. “In just three months we’ve managed to match the accuracy of the existing systems, which have been developed over 25 years.”
The bank, which suffers more than $100 million of fraud every year, will deploy a Numenta-based fraud checker alongside its existing measures sometime next year, says Chapela: “anything that can cut even a fraction of that has a very quick payback.”
Numenta’s technology is attractive to banks because its ability to learn from previous data sidesteps a crucial limit on fraud prevention technology. A bank’s computer system has just 10 milliseconds to decide whether to authorize a transaction, says Chapela: “There’s simply no time to search a person’s past transactions.” As a result, transactions are typically divided into narrowly defined categories and judged according to rules specific to each one—rules that have to do with characteristics like the type of card, the amount charged, and the type of merchant.
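The rule-per-category approach the article describes can be sketched as a lookup table of hand-written limits. The category names and dollar thresholds below are invented for illustration; real banking rules are far more elaborate.

```python
# Toy sketch of the traditional rule-based approach: each narrow
# transaction category gets its own hand-written limit.
# (Categories and amounts are hypothetical.)
RULES = {
    ("credit", "electronics"): {"max_amount": 2000},
    ("credit", "grocery"):     {"max_amount": 500},
    ("debit",  "electronics"): {"max_amount": 1000},
}

def authorize(card_type, merchant_type, amount):
    """Decide in constant time, with no lookup of the customer's history."""
    rule = RULES.get((card_type, merchant_type))
    if rule is None:
        return False  # unknown category: decline and refer for review
    return amount <= rule["max_amount"]

print(authorize("credit", "grocery", 120))   # within the limit: approved
print(authorize("credit", "grocery", 5000))  # over the limit: declined
```

The point of the sketch is the constraint Chapela describes: everything the rule needs must already be in the table, because there is no time to consult the customer's past transactions.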
But Numenta’s technology makes these separate sets of rules unnecessary. Instead, a raw feed of each person’s spending patterns is used to train a set of algorithms so they can learn that customer’s habits. At any moment the system has an internalized representation of past events that it uses to predict what kinds of transactions are likely to come next. If a new transaction doesn’t fit those expectations, it can be flagged as potential fraud. In this approach, the fraud detectors are always up to date, says Chapela. A traditional analytic system, on the other hand, must have its rules updated in a laborious process that’s usually undertaken only once every six months.
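The article doesn't disclose Numenta's algorithms, but the idea of learning a customer's habits and flagging transactions that don't fit can be illustrated with a deliberately simple stand-in: a per-customer model that counts which transaction categories tend to follow which, then flags transitions it has rarely or never seen. This is a toy first-order model, far simpler than Numenta's technology.

```python
from collections import defaultdict

class SpendingModel:
    """Toy per-customer model: learns which transaction categories
    tend to follow which, then flags unlikely transitions.
    (A simplified illustration, not Numenta's actual algorithm.)"""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None  # most recent category seen

    def train(self, category):
        """Update transition counts from a raw feed of transactions."""
        if self.last is not None:
            self.counts[self.last][category] += 1
        self.last = category

    def is_suspicious(self, category, threshold=0.05):
        """Flag a transaction if it is unlikely given what came before."""
        follow = self.counts[self.last]
        total = sum(follow.values())
        if total == 0:
            return True  # no history for this context at all
        return follow[category] / total < threshold

model = SpendingModel()
for cat in ["grocery", "fuel", "grocery", "fuel", "grocery", "fuel"]:
    model.train(cat)

print(model.is_suspicious("grocery"))  # fits the learned pattern: not flagged
print(model.is_suspicious("jewelry"))  # never seen in this context: flagged
```

Because the counts update with every transaction, the "rules" are implicit and always current, which is the contrast Chapela draws with systems whose rules are revised only twice a year.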
Hawkins’s inspiration for the underlying technology comes from the neocortex, the wrinkled outer layer of our brains that’s responsible for activities such as speech, movement, and planning. Strikingly, those very different abilities emerge from a common architecture of neurons, rather than from different clumps of neurons with very specific features.
Hawkins has borrowed that idea of a common architecture. “We use primary sources on the neuroscience of that as a guide, so there’s a tremendous amount of biology in this,” he says. The algorithms simulate the stacked layers of neurons that process information in the neocortex. The bottom layer receives the raw input data and then passes a processed and condensed version up to the next layer of algorithms. “As information ascends that hierarchical model, it becomes abstracted from the original, and the most salient features are extracted,” says Itamar Arel, who works on machine learning at the University of Tennessee.
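The pass-it-up-condensed structure Arel describes can be shown with a toy hierarchy in which each layer summarizes small groups of its inputs and hands the summaries to the layer above. Real cortical models learn far richer abstractions; simple averaging stands in here only to show detail being shed as data ascends.

```python
def layer(values, pool=2):
    """One layer of a toy hierarchy: condense each group of `pool`
    inputs into a single summary (here, their mean) passed upward.
    (Illustrative only -- not a cortical learning algorithm.)"""
    return [sum(values[i:i + pool]) / len(values[i:i + pool])
            for i in range(0, len(values), pool)]

signal = [1.0, 3.0, 2.0, 4.0, 10.0, 12.0, 11.0, 13.0]  # raw input
level1 = layer(signal)   # [2.0, 3.0, 11.0, 12.0] -- local detail smoothed
level2 = layer(level1)   # [2.5, 11.5] -- only the broad pattern survives
```

At the top, the fine-grained wiggles are gone and only the salient structure remains: the signal has a low region and a high region.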
The system’s ability to make predictions about unfolding events is rooted in its unique capacity for processing temporal, or time-dependent, data. Conventional learning software cannot do that, because it can’t handle input consisting of many variables that change over time. Instead, engineers generally have to extract the handful of variables they think are useful and feed them into the algorithms.
That “pre-processing” isn’t necessary in models inspired by studies of biological brains, Arel says. Instead, the learning system can decide for itself what is important and what isn’t. This is an emerging field dubbed deep machine learning. “Most academic efforts are focused on processing images, though,” he says. “What’s unique about Numenta is that it’s able to handle temporal data, which opens up different kinds of applications.” Among the examples Hawkins envisions: businesses could better analyze human speech or patterns of electricity use in buildings.
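Hawkins's electricity-use example suggests what learning directly from a time-stamped stream looks like: the system builds its own picture of the building's daily rhythm from raw readings, with no engineer choosing variables up front. The sketch below is a minimal online learner of that kind, much simpler than Numenta's temporal algorithms; the readings are invented.

```python
from collections import defaultdict

class HourlyPredictor:
    """Toy online learner for a time-dependent stream: keeps a running
    average of electricity use per hour of day and predicts the next
    reading from it. (Illustrative only -- far simpler than Numenta's
    temporal algorithms.)"""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def observe(self, hour, kwh):
        """Fold one raw reading into the model -- no pre-processing."""
        self.totals[hour] += kwh
        self.counts[hour] += 1

    def predict(self, hour):
        """Expected usage at this hour, or None if never observed."""
        if self.counts[hour] == 0:
            return None
        return self.totals[hour] / self.counts[hour]

p = HourlyPredictor()
for day in range(3):
    p.observe(9, 40.0)   # mornings: high load
    p.observe(21, 8.0)   # evenings: low load

print(p.predict(9))   # expects a high morning reading
print(p.predict(21))  # expects a low evening reading
```

A reading far from the prediction for its hour would be the temporal analogue of the suspicious credit card transaction: an event that breaks the learned pattern.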
But while this approach raises the prospect of systems that can learn about any kind of data rather than being specialized to just one task, Numenta still has to prove that its technology is widely applicable and cost-effective. It’s also unclear how the company will bring the technology to market, but it will probably be in the form of development tools rather than off-the-shelf products. “Now that the technology is really working,” Hawkins says, “next year will see us switch into product-development mode.”