
Why compressive sensing will change the world

A new way to sample signals produces 2D images using a single pixel…and that’s just the start.

If you haven’t come across compressive sensing yet, you soon will. It’s a way of sampling and reconstructing an analogue signal at a rate far lower than standard information theory would deem possible.

If you’re curious, Olga Holtz from the University of California, Berkeley, has prepared a handy primer so you can impress your friends with your superior knowledge when they finally stumble across it.


Holtz points out that the conventional limit is set by the Nyquist-Shannon-Whittaker sampling theorem, which states that perfect reconstruction is possible only when the sampling frequency is greater than twice the maximum frequency of the signal under study.
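In symbols, the standard statement of the theorem (textbook background, not quoted from Holtz's primer) says that samples taken at rate f_s > 2 f_max, with spacing T = 1/f_s, determine the signal exactly via sinc interpolation:

```latex
% Nyquist-Shannon: a signal x(t) containing no frequencies above f_max
% is perfectly reconstructed from its samples x(nT), T = 1/f_s, provided
% the sampling rate satisfies f_s > 2 f_max:
x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,
       \mathrm{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```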


Entire fields of electronics engineering and information theory are based on this idea; unnecessarily, as it now turns out.

Compressive sensing relies on the fact that most analogue signals have a structure of some kind that can be exploited to reconstruct them. Know this structure and the signal can be reconstructed using a sampling rate that is significantly lower than the Nyquist rate.
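The usual way to make this precise (again, the standard textbook formulation rather than anything specific to Holtz's primer) is to assume the signal is sparse in some basis and to observe it through a small number of linear measurements:

```latex
% Compressive-sensing measurement model: x in R^n is k-sparse in a basis
% Psi (so x = Psi s with at most k nonzero entries in s), and we observe
% m << n linear measurements through a measurement matrix Phi:
y = \Phi x \in \mathbb{R}^m, \qquad x = \Psi s, \qquad \|s\|_0 \le k
% For suitably random Phi, on the order of k log(n/k) measurements
% suffice to recover s exactly -- far fewer than the n values that
% Nyquist-rate sampling would demand.
```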

The difficulty is in determining the structure, an NP-hard problem that cannot usually be solved in a reasonable amount of time. But it turns out that, with a little mathematical trickery, even this isn't necessary, and the signal can indeed be reconstructed successfully at a fraction of the Nyquist sampling rate.
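The trickery is to replace the intractable sparsest-solution search with a convex l1 minimisation (basis pursuit), which can be solved as an ordinary linear program. Here is a minimal sketch of the idea in Python; this is my illustration of the general technique, not code from the paper, and it assumes NumPy and SciPy are available and that the signal is sparse in the canonical basis (so Psi is the identity):

```python
# Recover a sparse signal from far fewer random measurements than samples,
# via the l1 relaxation: min ||s||_1  subject to  Phi s = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, k, m = 256, 8, 64          # signal length, sparsity, number of measurements
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                      # m << n linear measurements

# Basis pursuit as a linear program: split s = u - v with u, v >= 0
# and minimise sum(u) + sum(v) subject to Phi (u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
s_hat = res.x[:n] - res.x[n:]

print("max reconstruction error:", np.abs(s_hat - x).max())
```

With 64 measurements of a 256-sample, 8-sparse signal, the linear program typically recovers the signal to within numerical precision, even though 64 is well below the Nyquist count of 256.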

That’s going to have big implications for all kinds of measurements. Holtz gives the example of a camera developed by Richard Baraniuk and Kevin Kelly at Rice University, which produces an image equivalent to a 5-megapixel image compressed to about 50,000 pixels using a standard JPEG algorithm.

The Baraniuk/Kelly camera records 200,000 measurements but does so with a single, solitary pixel used over and over again.

The trick is in the way the camera processes the image before it is recorded: the image is reflected off a randomised array of micromirrors before being focused onto the single pixel. The array is randomised again and the recording repeated 200,000 times to create the image.
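A toy simulation makes the measurement process concrete. This is my own illustration of the single-pixel idea, not the Rice hardware: each "exposure" applies a fresh random mirror pattern to the scene and sums the reflected light onto one detector, yielding one number per pattern:

```python
# Simulate single-pixel measurements: one random 0/1 mirror pattern per
# exposure, one total-light reading per pattern.
import numpy as np

rng = np.random.default_rng(1)

scene = rng.random((64, 64))     # stand-in for the scene being photographed
n = scene.size                   # 4,096 underlying "pixels"
m = n // 25                      # mimic the article's 25-fold data saving

Phi = rng.integers(0, 2, size=(m, n))   # each row is one flattened mirror pattern
measurements = Phi @ scene.ravel()      # one single-pixel reading per pattern

# Reconstructing the scene from (Phi, measurements) is then the same l1
# problem sketched above, using a basis (e.g. wavelets) in which natural
# images are sparse.
print(measurements.shape)   # (163,) readings instead of 4,096 pixel values
```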

The result is a 25-fold saving in the amount of data the camera needs to collect: 200,000 measurements rather than the 5 million pixel values a conventional 5-megapixel sensor would record (5,000,000 / 200,000 = 25).


That may not be of much significance for your holiday snaps. But if you’re an astronomer, a medical-imaging specialist, a communications engineer, or just about anybody who ever makes any kind of measurement, this should make your eyes light up.

Ref: arxiv.org/abs/0812.3137: Compressive Sensing: A Paradigm Shift in Signal Processing

(Incidentally, this idea explains a phenomenon that has puzzled physicists for some time: the curious creation of “ghost images” that physicists had thought were the result of entanglement. Last year, we discussed some work showing that entanglement could not be involved but raising the quite reasonable question of what on Earth was to blame. In fact, the entire affair can be explained by compressive sensing, as pointed out by Wim and Igor Carron in the comments at the time.)
