
Mobile Summit 2013: Camera Tweaks Should Boost Gadget Battery Life

Research could make persistent computer vision more feasible and improve your smartphone's battery life.

Mobile technology has vastly improved in recent years, but battery technology lags behind, limiting what we can do with these gadgets.

The digital cameras in smartphones, tablets, and devices like Google Glass are increasingly powerful and useful. But the more powerful they are, the more they drain battery life.

Researchers at Microsoft Research and Rice University have now developed a way to make digital camera sensors far more energy-efficient. The effort could allow smartphones to last longer on a charge and make it feasible for the camera in a wearable computer like Google Glass to always be on (see “Wearable Computing Pioneer Says Google Glass Offers ‘Killer Existence’”).  

Over the past decade, smartphones have become immensely more powerful, but battery development hasn't kept pace. It's even harder to eke more life out of a smaller package, such as a sensor-laden gadget worn on your face or clothing.

Victor Bahl, research manager of the mobility networking group at Microsoft Research and a coauthor of a paper describing the camera sensor modifications, said at the MIT Technology Review Mobile Summit in San Francisco on Tuesday that while much work has been done to shrink image sensors and improve their resolution, little attention has been paid to their power circuitry. The method the researchers came up with—to be presented at the MobiSys 2013 conference in Taiwan later this month—addresses this issue.

Bahl said that the researchers tested five different image sensors, paying attention to how the power usage changed while capturing images. They noticed that lowering the quality of the images they captured barely reduced the amount of power used; they also noticed that there was still some power consumption during the short “idle” periods between the “active” periods in which the sensor captured each image.

The researchers propose reducing the amount of active time when taking a picture, or temporarily putting the sensor in a lower-power standby mode when it is idle and then putting it back into the higher-power idle mode before capturing the next frame. They found that this standby method reduced power consumption by 95 percent when they were performing continuous image registration—which relates to estimating depth and building mosaics of images.
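The energy math behind that standby trick can be sketched with a simple per-frame model. This is an illustration, not the researchers' actual methodology; the power figures and the helper function below are hypothetical, chosen only to show why parking the sensor in a low-power standby state between captures pays off:

```python
def frame_energy_mj(active_ms, gap_ms, active_mw, idle_mw,
                    standby_mw=None, wake_ms=0.0):
    """Energy in millijoules for one capture cycle: an active capture
    period followed by a gap spent either idle (baseline) or mostly in
    standby, with a brief return to idle before the next capture."""
    energy = active_mw * active_ms
    if standby_mw is None:
        # Baseline behavior: the sensor sits in idle between frames.
        energy += idle_mw * gap_ms
    else:
        # Proposed behavior: standby for most of the gap, then wake
        # back to idle shortly before the next frame.
        energy += standby_mw * (gap_ms - wake_ms)
        energy += idle_mw * wake_ms
    return energy / 1000.0  # mW * ms -> mJ

# Illustrative numbers for a one-frame-per-second workload.
baseline = frame_energy_mj(active_ms=30, gap_ms=970,
                           active_mw=300, idle_mw=150)
duty_cycled = frame_energy_mj(active_ms=30, gap_ms=970,
                              active_mw=300, idle_mw=150,
                              standby_mw=5, wake_ms=10)
savings = 1 - duty_cycled / baseline
print(f"saving: {savings:.0%}")
```

With these made-up figures the standby strategy already cuts per-frame energy by roughly 90 percent, in the same ballpark as the 95 percent figure the researchers report for continuous image registration.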

Such information could help with the design of wearable computers with computer-vision capabilities, perhaps by determining that a system should first take a low-resolution picture to determine if a person is in front of it, before taking a higher-resolution picture if a person is, in fact, there, Bahl said. He said he envisions the work being applied to robots as well as consumer electronics.
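That two-stage idea can be sketched in a few lines. Everything here is a hypothetical stand-in (the `FakeSensor` class and `detect_person` callback are invented for illustration, not any real camera API); the point is simply that the expensive high-resolution capture only happens after the cheap low-resolution check succeeds:

```python
class FakeSensor:
    """Minimal stand-in for a camera driver, for illustration only."""
    def __init__(self):
        self.high_res_captures = 0

    def capture(self, resolution):
        if resolution == "high":
            self.high_res_captures += 1
        return {"resolution": resolution}

def capture_if_person(sensor, detect_person):
    """Take a cheap low-resolution frame first; only pay for a
    full-resolution capture when a person is actually present."""
    preview = sensor.capture(resolution="low")
    if not detect_person(preview):
        return None  # nobody there: skip the expensive capture
    return sensor.capture(resolution="high")

sensor = FakeSensor()
capture_if_person(sensor, detect_person=lambda frame: False)
print(sensor.high_res_captures)  # no person seen, so no high-res frame taken
```

The same gating pattern generalizes: any cheap sensor reading can act as a trigger that decides whether a power-hungry one is worth taking.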
