The digital cameras in smartphones, tablets, and devices like Google Glass are increasingly powerful and useful. But the more powerful they are, the more they drain battery life.
Researchers at Microsoft Research and Rice University have now developed a way to make digital camera sensors far more energy-efficient. The effort could allow smartphones to last longer on a charge and make it feasible for the camera in a wearable computer like Google Glass to always be on (see “Wearable Computing Pioneer Says Google Glass Offers ‘Killer Existence’”).
Over the past decade, smartphones have become immensely more powerful, but battery technology hasn’t kept pace. And it’s even harder to eke more life out of a smaller package, such as a sensor-laden gadget worn on your face or clothing.
Victor Bahl, research manager of the mobility networking group at Microsoft Research and a coauthor of a paper describing the camera sensor modifications, said at the MIT Technology Review Mobile Summit in San Francisco on Tuesday that while much work has gone into shrinking image sensors and improving their resolution, little attention has been paid to their power circuitry. The method the researchers came up with, to be presented at the annual MobiSys conference in Taiwan later this month, addresses this issue.
Bahl said that the researchers tested five different image sensors, paying attention to how the power usage changed while capturing images. They noticed that lowering the quality of the images they captured barely reduced the amount of power used; they also noticed that there was still some power consumption during the short “idle” periods between the “active” periods in which the sensor captured each image.
The researchers propose two fixes: reducing the sensor’s active time when taking a picture, and temporarily dropping the sensor into a low-power standby mode while it is idle, then waking it back to the idle state just before the next frame is captured. They found that the standby method cut power consumption by 95 percent during continuous image registration, a task used to estimate depth and stitch together mosaics of images.
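The duty-cycling idea can be illustrated with a minimal simulation. Everything here is a made-up sketch: the mode names, the per-mode power figures, and the timing values are illustrative assumptions, not numbers from the paper.

```python
# Hypothetical sketch of duty-cycling an image sensor between frames.
# All power figures (milliwatts) and timings are invented for illustration.
from enum import Enum

class SensorMode(Enum):
    ACTIVE = "active"    # capturing a frame
    IDLE = "idle"        # ready to capture, but not capturing
    STANDBY = "standby"  # deep low-power state between frames

class ImageSensor:
    # Illustrative power draw per mode, in milliwatts (assumed values).
    POWER_MW = {SensorMode.ACTIVE: 300.0,
                SensorMode.IDLE: 150.0,
                SensorMode.STANDBY: 5.0}

    def __init__(self):
        self.mode = SensorMode.IDLE
        self.energy_mj = 0.0

    def spend(self, seconds):
        # Accumulate energy (mW * s = mJ) for time spent in the current mode.
        self.energy_mj += self.POWER_MW[self.mode] * seconds

    def capture_frame(self, exposure_s=0.01):
        self.mode = SensorMode.ACTIVE
        self.spend(exposure_s)
        self.mode = SensorMode.IDLE

def capture_sequence(sensor, frames, interval_s, use_standby):
    # Capture `frames` images, one every `interval_s` seconds.
    for _ in range(frames):
        sensor.capture_frame()
        gap = interval_s - 0.01
        if use_standby:
            # Drop to standby between frames; wake to idle before the next one.
            sensor.mode = SensorMode.STANDBY
            sensor.spend(gap)
            sensor.mode = SensorMode.IDLE
        else:
            # Baseline behavior: sit in the higher-power idle state.
            sensor.spend(gap)

baseline = ImageSensor()
capture_sequence(baseline, frames=10, interval_s=0.5, use_standby=False)

duty_cycled = ImageSensor()
capture_sequence(duty_cycled, frames=10, interval_s=0.5, use_standby=True)

print(f"baseline: {baseline.energy_mj:.1f} mJ, standby: {duty_cycled.energy_mj:.1f} mJ")
```

With these invented numbers the standby strategy spends roughly a tenth of the baseline energy; the real savings depend on how large the idle gaps are relative to each exposure, which is why the technique pays off most during continuous, frame-after-frame tasks like image registration.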
Such information could help with the design of wearable computers with computer-vision capabilities, Bahl said: a system might first take a low-resolution picture to check whether a person is in front of it, and capture a higher-resolution picture only if a person is, in fact, there. He said he envisions the work being applied to robots as well as consumer electronics.
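That two-stage strategy can be sketched as a simple gate: a cheap low-resolution frame decides whether to pay for a full-resolution capture. The function names, resolutions, and the threshold-based detector below are all stand-ins of my own, not anything from the researchers' system.

```python
# Hypothetical sketch of a low-res-then-high-res capture gate.
# `capture` and `detect_person` are invented stand-ins for a camera API
# and a lightweight vision model.

def capture(resolution, brightness=128):
    # Stand-in for the camera: returns a resolution x resolution grayscale
    # frame filled with a single (fabricated) brightness value.
    return [[brightness] * resolution for _ in range(resolution)]

def detect_person(frame):
    # Placeholder detector: flags a frame if any pixel exceeds a brightness
    # threshold. A real system would run an actual person detector here.
    return any(pixel > 200 for row in frame for pixel in row)

def smart_capture(brightness=128):
    preview = capture(resolution=32, brightness=brightness)  # cheap frame
    if detect_person(preview):
        return capture(resolution=1024, brightness=brightness)  # costly frame
    return None  # nothing there; skip the expensive capture entirely
```

The design choice is that the expensive operation (the high-resolution capture, and whatever processing follows it) only runs when the cheap check passes, so an always-on camera spends most of its time in the inexpensive path.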