Why compressive sensing will change the world
If you haven’t come across compressive sensing, you will do soon. It’s a way of sampling and reconstructing an analogue signal at a rate far lower than standard information theory would deem possible.

If you’re curious, Olga Holtz from the University of California, Berkeley, has prepared a handy primer so you can impress your friends with your superior knowledge when they finally stumble across it.
Holtz points out that the conventional limit is determined by the Shannon-Nyquist-Whittaker sampling theorem, which states that perfect reconstruction is possible only when the sampling frequency is greater than twice the maximum frequency present in the signal under study.
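In symbols (our shorthand, not Holtz's), with f_s the sampling rate and f_max the highest frequency present in the signal, the condition is simply:

```latex
f_{s} > 2\, f_{\max}
```

Sample any more slowly and different signals become indistinguishable from one another (aliasing), so perfect reconstruction is no longer guaranteed.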
Entire fields of electronics engineering and information theory are based on this idea; unnecessarily, as it now turns out.
Compressive sensing relies on the fact that most analogue signals of interest have structure that can be exploited: expressed in a suitable basis, most of their coefficients are zero or negligible. Know this structure and the signal can be reconstructed using a sampling rate that is significantly lower than the Nyquist rate.
The difficulty is in finding that structure: a brute-force search for the sparsest signal consistent with the measurements is an NP-hard problem that cannot usually be solved in a reasonable amount of time. But it turns out that, with a little mathematical trickery, even this isn't necessary. Relaxing the search to a convex L1-minimisation, which can be solved efficiently, still recovers the signal from a fraction of the Nyquist sampling rate.
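To make that concrete, here is a minimal Python sketch of the idea (a toy example of our own, not the algorithm from the paper): a signal with only a handful of non-zero entries is recovered from four times fewer random measurements than samples, by solving the convex L1 problem cast as a linear program with SciPy.

```python
# Toy compressive-sensing reconstruction (our sketch, not the paper's method):
# recover a sparse signal from far fewer random measurements than its length
# by replacing the combinatorial "find the sparsest solution" search with
# convex L1-minimisation (basis pursuit), solved here as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, k, m = 256, 8, 64                      # signal length, non-zeros, measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                                  # m << n linear measurements

# Basis pursuit: minimise ||x||_1 subject to A x = y.
# Writing x = u - v with u, v >= 0 makes the objective linear in (u, v).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("largest reconstruction error:", np.abs(x_hat - x_true).max())
```

With these sizes the recovery is typically exact to within numerical precision, even though the linear system is hugely underdetermined; that, in a nutshell, is the trickery.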
That’s going to have big implications for all kinds of measurements. Holtz gives the example of a camera developed by Richard Baraniuk and Kevin Kelly at Rice University, which produces an image equivalent to a 5-megapixel image compressed using a standard JPEG algorithm to about 50,000 pixels.
The Baraniuk/Kelly camera makes 200,000 measurements but does so with a single, solitary pixel used over and over again.
The trick is in the way the camera processes the image before it is recorded: the image is reflected off a randomised array of micromirrors before being focused onto the single pixel. The array is randomised again and the recording repeated 200,000 times to create the image.
The result is a 25-fold saving in the amount of data the camera needs to collect compared with a 5-megapixel image.
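For the curious, here is a rough Python sketch of what that acquisition boils down to mathematically (the sizes and names are illustrative, not the Rice team's): each exposure applies a fresh random mask and the lone detector records a single number, the total light the mask lets through.

```python
# Single-pixel-style acquisition, purely illustrative: stack the random
# micromirror masks as rows of Phi, so the detector's readings are y = Phi @ scene.
# Reconstruction would then proceed as in the earlier sketch, using a basis
# (wavelets, say) in which natural images are sparse.
import numpy as np

rng = np.random.default_rng(1)

scene = rng.random(64 * 64)     # stand-in for the image: 4,096 "pixels"
n = scene.size
m = 1200                        # far fewer exposures than pixels

Phi = rng.integers(0, 2, size=(m, n)).astype(float)   # one random 0/1 mask per exposure
y = Phi @ scene                 # the single pixel's m readings

print(y.shape)                  # 1,200 numbers collected instead of 4,096 pixel values
```

The 0/1 masks stand in for the on/off micromirrors; other random patterns would do just as well in principle.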
That may not be of much significance for your holiday snaps. But if you’re an astronomer, medical imaging specialist or communications engineer (or just about anybody who ever makes any kind of measurement), this should make your eyes light up.
Ref: arxiv.org/abs/0812.3137: Compressive Sensing: A Paradigm Shift in Signal Processing
(Incidentally, this idea explains a phenomenon that has puzzled physicists for some time: the curious creation of “ghost images” that physicists had thought were the result of entanglement. Last year, we discussed some work showing that entanglement could not be involved but raising the quite reasonable question of what on Earth was to blame. In fact, the entire affair can be explained by compressive sensing, as pointed out by Wim and Igor Carron in the comments at the time.)