When a multi-megapixel digital camera snaps a shot, most of the information doesn’t even make it into the final photo file. Indeed, about 90 percent of it is lost during the compression process that creates a JPEG file.
Collecting pixels just to throw them away is a wasteful process, says Richard Baraniuk, professor of electrical and computer engineering at Rice University, and it chews through a camera’s battery life because compressing raw data is computationally demanding.
Baraniuk, Kevin Kelly, and colleagues at Rice are offering an alternative design, which they say makes for a more energy-efficient digital camera. Essentially, they’ve built and tested the hardware and software for a camera that collects just enough information to recreate a picture, while avoiding the traditional compression process.
In their prototype, the researchers used an array of tiny mirrors–a technology developed by Texas Instruments that’s already used in high-definition projection televisions. The micromirror array reflects a small portion of the light from the scene onto a single sensor, and algorithms then reconstruct the image from those measurements. Since the prototype has only one sensor, it is, in effect, a single-pixel camera. However, the reconstruction algorithm recovers an image with 100 times the resolution that a single pixel would traditionally capture.
Baraniuk and his team recognized that an emerging field of information theory, called “compressive sensing,” offered an alternative approach to conventional image acquisition and compression. Developed by researchers at Caltech, Stanford, the University of California, Los Angeles, and Rice, the technology is based on the idea that datasets, such as those that represent images or signals, often contain a significant amount of structure. When this structure is known, it can be used to extrapolate the image or signal when there’s only a limited amount of available data. This concept of compressive sensing underlies the software for the researchers’ digital camera.
To develop the camera’s hardware that collects the image data, the Rice team turned to Texas Instruments’ digital micromirror technology, which uses a collection of thousands of tiny mirrors that can be angled in two different directions. Facing one way, a mirror reflects the light from the scene onto the sensor; facing the other way, it reflects the light away, leaving that point dark. The mirrors are angled to project a pattern of light and dark onto the camera’s sensor, flipping up to 100,000 times per second.
The orientation of each mirror is random, which is important, say the scientists, because it provides the best possible sampling for the algorithm to reconstruct the image. The random structure is known and fed into the algorithm. In all, only a few hundred samples projected onto the single pixel can provide enough information to reconstruct an image with tens or hundreds of thousands of pixels.
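The idea can be sketched in a few lines of Python with NumPy. This is a minimal, hypothetical illustration: a sparse 1-D signal stands in for an image, random ±1 rows stand in for the mirror orientations on each flip, and the reconstruction uses iterative soft-thresholding (ISTA), one common compressive-sensing solver; it is not necessarily the algorithm the Rice group uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical sparse "scene": length-256 signal with only 5 nonzeros.
n, m, k = 256, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Each row is one random mirror pattern: +1 for a mirror flipped toward
# the sensor, -1 for one flipped away. Each measurement in y is a single
# sensor reading -- far fewer measurements (80) than pixels (256).
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: minimize 0.5*||Ax - y||^2 + lam*||x||_1 by gradient steps
# followed by soft-thresholding, which drives the estimate toward sparsity.
L = np.linalg.norm(A, 2) ** 2   # spectral norm squared (Lipschitz constant)
lam = 0.01
x = np.zeros(n)
for _ in range(3000):
    x = x - A.T @ (A @ x - y) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

The randomness of the patterns is what makes this work: random measurements are, with high probability, "incoherent" with the sparse structure, so the few hundred sensor readings pin down the full signal.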
“It’s exciting to me because it changes the way we think,” says Bruce Flinchbaugh, manager of image and video processing at Texas Instruments. “It’s not very often in a field like imaging that somebody comes along and does something so different to solve a problem.”
The researchers’ camera has a long way to go before it reaches commercial form, though, notes Baraniuk. Right now, the setup spans an optical table in a lab, and the researchers’ algorithms are slow compared with the compression in commercial cameras. The group is working to speed up its algorithms, and, Baraniuk adds, the hardware continues to improve: arrays are shrinking while packing in more micromirrors, and the mirrors’ flipping speed keeps increasing.
Baraniuk expects that the first application for the new camera could be in terahertz imaging systems–systems that use terahertz-frequency radiation to see through objects and detect small amounts of chemicals. Currently, it’s expensive to build the large sensors needed for these systems, he says, so a single-sensor camera like the one the group developed would be ideal.
Eventually, Rice’s Kelly envisions a version of the group’s algorithm being used in commercial cameras. This could reduce the number of sensors in such a product–decreasing its size and cost–while increasing the overall resolution of pictures. “You might buy a camera with a 2-megapixel sensor, but [the software] might give you a 20- or 30-megapixel image,” he says. “You could exploit the math in a way to allow your pocket camera to give you a much nicer picture.”