
Mobile Summit 2013: Camera Tweaks Should Boost Gadget Battery Life

Research could make persistent computer vision more feasible and improve your smartphone's battery life.
June 12, 2013

The digital cameras in smartphones, tablets, and devices like Google Glass are increasingly powerful and useful. But the more powerful they are, the more they drain battery life.

Researchers at Microsoft Research and Rice University have now developed a way to make digital camera sensors far more energy-efficient. The effort could allow smartphones to last longer on a charge and make it feasible for the camera in a wearable computer like Google Glass to always be on (see “Wearable Computing Pioneer Says Google Glass Offers ‘Killer Existence’”).  

Over the past decade, while smartphones have gotten immensely more powerful, battery technology hasn't kept pace. It's even harder to eke more life out of a smaller package, such as a sensor-laden gadget you could wear on your face or clothing.

Victor Bahl, research manager of the mobility networking group at Microsoft Research and a coauthor of a paper describing the camera sensor modifications, spoke at the MIT Technology Review Mobile Summit in San Francisco on Tuesday. He said that while much work has gone into shrinking image sensors and improving their resolution, little attention has been paid to their power circuitry. The method the researchers came up with, to be presented later this month at MobiSys 2013, the annual mobile systems conference, in Taiwan, addresses that gap.

Bahl said the researchers tested five different image sensors, measuring how power usage changed while capturing images. Lowering the quality of the captured images barely reduced the power used, they found, and the sensors kept drawing power during the short "idle" periods between the "active" periods in which each image was captured.

The researchers propose either shortening the active time spent taking each picture, or temporarily dropping the sensor into a lower-power standby mode while it is idle and returning it to the higher-power idle mode just before capturing the next frame. The standby approach cut power consumption by 95 percent when they performed continuous image registration, a task used for estimating depth and building mosaics of images.
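To make the duty-cycling idea concrete, here is a minimal Python sketch of the standby approach described above. The ImageSensor interface, its mode names, and the wake-up latency parameter are illustrative assumptions, not the paper's actual sensor states or API.

```python
import time

class ImageSensor:
    """Hypothetical sensor driver with three power states (an assumption)."""

    def set_mode(self, mode: str) -> None:
        """Switch between 'active', 'idle', and 'standby' (lowest power)."""
        ...

    def capture_frame(self):
        """Capture one frame; the sensor must be in 'active' mode."""
        ...


def duty_cycled_capture(sensor: ImageSensor, frame_period_s: float, wake_latency_s: float):
    """Capture frames continuously while parking the sensor in standby between them.

    Instead of leaving the sensor in its still power-hungry idle state between
    captures, drop to standby for most of the gap and return to idle just early
    enough to be ready for the next frame.
    """
    while True:
        sensor.set_mode("active")
        yield sensor.capture_frame()

        # Spend most of the inter-frame gap in the cheapest state.
        sensor.set_mode("standby")
        time.sleep(max(frame_period_s - wake_latency_s, 0.0))

        # Wake back to idle shortly before the next capture.
        sensor.set_mode("idle")
        time.sleep(wake_latency_s)

# Example use: for frame in duty_cycled_capture(ImageSensor(), 0.5, 0.05): ...
```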

Such findings could inform the design of wearable computers with computer-vision capabilities, Bahl said. A system might, for example, first take a low-resolution picture to check whether a person is in front of it, and capture a higher-resolution picture only if someone is actually there. He said he envisions the work being applied to robots as well as consumer electronics.
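A rough illustration of that two-stage capture strategy follows, again with an assumed sensor interface and an unspecified person detector standing in for whatever computer-vision model a real system would run on the preview frame.

```python
def capture_if_person_present(sensor, detect_person):
    """Two-stage capture: cheap low-resolution check first, full capture only if needed.

    'sensor' is assumed to expose set_resolution() and capture_frame(), and
    'detect_person' is any lightweight person detector; both names are
    hypothetical, chosen only for this sketch.
    """
    sensor.set_resolution(320, 240)        # low-cost preview frame
    preview = sensor.capture_frame()

    if detect_person(preview):
        sensor.set_resolution(1920, 1080)  # pay for full quality only when someone is there
        return sensor.capture_frame()
    return None
```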
