
Stochastic Pattern Recognition Dramatically Outperforms Conventional Techniques

A stochastic computer, designed to help an autonomous vehicle navigate, outperforms a conventional computer by three orders of magnitude, say computer scientists

Stochastic computing is one of logic’s little gems. Its advantage is essentially that it makes multiplication as easy as addition. 

That’s significant. Imagine adding 0.4397625 and 0.8723489. It’s a calculation you could do in your head in a few seconds. But imagine multiplying those two numbers instead. That’s still something you could do in your head but I bet you’d feel happier reaching for a calculator.

Conventional computers have a similar problem. Adding numbers is straightforward, but multiplying them is much more computationally intensive.

Stochastic computers change all this. That’s because they represent numbers using probability: as a stream of bits in which each bit has a certain probability of being a 1.

For example, a bit stream representing 0.25 might contain three 0s for every 1, even though the actual distribution of 0s and 1s is otherwise random.
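The paper doesn’t give code, but a minimal Python sketch of this encoding might look like the following (function names are my own):

```python
import random

def encode(value, length, seed=None):
    """Encode a number in [0, 1] as a bitstream where each bit
    is 1 with probability `value`."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

stream = encode(0.25, 10_000, seed=0)
print(decode(stream))  # close to 0.25, but not exact: the stream is random
```

Note that the decoded value is only an estimate; the longer the stream, the closer it gets to 0.25.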

The big advantage comes from the laws of probability: the chance of two independent events both occurring is the product of their individual probabilities. So multiplying two numbers amounts to AND-ing their bitstreams together, bit by bit, which means stochastic computers can do multiplication with simple AND gates.

Another advantage is that stochastic computers are fantastically robust to noise. Every bit in the stream carries equal weight, so flip a few bits in a stochastic calculation and the chances are that the result will be barely affected.
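This robustness is easy to demonstrate. In a conventional binary word, flipping the most significant bit can change the value enormously; in a bitstream, each flip moves the decoded value by exactly one part in the stream length. A quick sketch:

```python
import random

def decode(stream):
    """Fraction of 1s in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(0)
stream = [1 if rng.random() < 0.25 else 0 for _ in range(10_000)]

# Flip 10 bits at random positions. Each flip shifts the decoded
# value by just 1/10_000, so 10 flips shift it by at most 0.001.
noisy = list(stream)
for i in rng.sample(range(len(noisy)), 10):
    noisy[i] ^= 1

print(abs(decode(noisy) - decode(stream)))  # at most 0.001
```

Compare that with flipping the top bit of a 16-bit integer, which changes its value by 32,768.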

Of course, there’s a caveat. It’s only possible to ‘read’ a probabilistic number by counting bits over a long stretch of the stream, which takes many measurements. But in certain applications that doesn’t matter. When it works, stochastic computing can be spectacularly successful.
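The cost of that read-out is worth spelling out: by the usual statistics of counting, the error shrinks only as one over the square root of the stream length, so each extra decimal digit of precision costs roughly a hundred times more bits. A quick illustration (my sketch, not the paper’s):

```python
import random

def estimate(value, length, seed):
    """Read out a stochastic number by counting 1s over `length` bits."""
    rng = random.Random(seed)
    ones = sum(1 for _ in range(length) if rng.random() < value)
    return ones / length

# The read-out error shrinks only as 1/sqrt(length):
for n in (100, 10_000, 1_000_000):
    print(n, abs(estimate(0.3, n, seed=0) - 0.3))
```

That slow convergence is why stochastic computing pays off mainly where modest precision is acceptable, as in the pattern-recognition task below.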

Today, Vincent Canals and pals at the University of the Balearic Islands in Palma, on the Spanish island of Mallorca, reveal a good example.

These guys have applied stochastic computing to the process of pattern recognition. The problem here is to compare an input signal with a reference signal to determine whether they match.  

In the real world, of course, input signals are always noisy so a system that can cope with noise has an obvious advantage. 

Canals and co use their technique to help an autonomous vehicle navigate its way through a simple environment for which it has an internal map. For this task, it has to measure the distance to the walls around it and work out where it is on the map. It then computes a trajectory taking it to its destination.

These guys say that in several tests, their vehicle calculated the optimal route it needed to take (although they don’t bother with the details about how this was done, which is a potentially significant omission).

But how much better is the stochastic computing approach compared to a conventional one? Canals and co say that a conventional microprocessor operates 70 times faster than a stochastic chip but can only process signals in sequence. 

By contrast, the stochastic chip can process the signals in parallel. That makes it up to three orders of magnitude faster than a conventional microprocessor in solving the pattern recognition task. That’s a significant improvement. 

Although the idea of stochastic computing has been around for half a century, attempts to exploit it have only just begun. Clearly there’s much work to be done. And since one line of thought is that the brain might be a stochastic computer, at least in part, there could be exciting times ahead.

Ref: arxiv.org/abs/1202.4495: Stochastic-Based Pattern Recognition Analysis
