Saving Power in Handhelds

Taking advantage of humans’ tolerance for error could make cell phones more energy efficient.

With the advent of the Apple iPhone and its big, clear screen, the idea of using the morning commute to catch up on missed episodes of Lost became a lot more attractive. But video chews through a handheld’s battery much faster than, say, playing MP3s does. In the most recent issue of the Association for Computing Machinery’s Transactions on Embedded Computing Systems, researchers at the University of Maryland describe a simple way for multimedia devices to save power. In simulations, the researchers applied their technique to several common digital-signal-processing chores and found that, on average, it would cut power consumption by about two-thirds.

The premise of the technique, says Gang Qu, one of its developers, is that in multimedia applications, “the end user can tolerate some execution failure.” Much digital video, for example, plays at a rate of 30 frames per second. But “in the old movie theaters, they played at 24 frames per second,” Qu says. “That’s about 80 percent. If you can get 80 percent of the frames consistently correct, human beings will not be able to tell you’ve made mistakes.”

Unlike the movies in the old theaters, a digital video isn’t stored on reels of wound plastic; it’s stored as a sequence of 1s and 0s. That sequence is decoded as the video plays, and the decoding time can vary from one frame to the next. So digital media systems are designed to work rapidly enough that even the hardest-to-decode frames will be ready to be displayed on time.

Qu thinks that’s a waste of processing power. If the viewer won’t miss the extra six frames of video per second, there’s no reason to decode them. Lower decoding standards would mean less work for the video player’s processor, and thus lower power consumption.

The straightforward way to ensure a decoding rate of 80 percent would be to decode, say, eight frames in a row and ignore the next two. That approach, which Qu calls the “naive approach,” did yield power savings in the Maryland researchers’ simulations. The problem is that such a system doesn’t distinguish frames that are hard to decode from those that are easy: if frame five is the hardest, the decoder will still plow through it; if frame nine is the easiest, the decoder will still skip it.
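
To make the contrast concrete, here is a minimal sketch of that fixed-pattern scheme in Python; the decode_frame callback is hypothetical, and the eight-of-ten split simply mirrors the 80 percent figure above.

```python
def play_naive(frames, decode_frame):
    """Decode eight frames out of every ten and drop the other two,
    regardless of how hard any individual frame is to decode."""
    for i, frame in enumerate(frames):
        if i % 10 < 8:
            decode_frame(frame)   # frames 0-7 of each group of ten
        # frames 8 and 9 of each group are skipped outright
```

Note that frame five always gets decoded and frame nine always gets skipped, whatever their actual difficulty, which is exactly the shortcoming Qu describes.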

Qu and his colleagues wrote an algorithm that imposes a series of time limits on the decoding process; if any of the limits is exceeded, the decoding is aborted. “You set certain milestones,” Qu says, “and you say, ‘Okay, after this time I still haven’t reached that first milestone, so it seems this is a hard task. Let me drop this one.’” Using statistics on the durations of particular tasks, the researchers can tune the algorithm to guarantee any desired completion rate.
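
A rough sketch of that milestone idea, assuming illustrative stage deadlines and a simulated workload (none of the names or numbers below come from the paper), might look like this:

```python
import random
import time

# Hypothetical per-milestone deadlines, in seconds since the frame's decode
# started. In the researchers' scheme these would be tuned offline from
# statistics on typical task durations to hit a target completion rate.
DEADLINES = [0.005, 0.012, 0.025]

def run_decode_stage(stage):
    """Stand-in for one chunk of real decoding work, with variable cost."""
    time.sleep(random.uniform(0.001, 0.015))

def decode_frame_with_milestones():
    """Decode one frame, abandoning it if any milestone deadline is missed.

    Returns True if the frame finished, False if it was dropped early to
    save the energy that finishing a "hard" frame would have cost.
    """
    start = time.monotonic()
    for stage, deadline in enumerate(DEADLINES):
        run_decode_stage(stage)
        if time.monotonic() - start > deadline:
            return False  # milestone missed: treat this frame as too hard
    return True

if __name__ == "__main__":
    results = [decode_frame_with_milestones() for _ in range(200)]
    print(f"completed {sum(results) / len(results):.0%} of frames")
```

Tightening or loosening the deadlines trades completion rate against energy; the statistics on task durations are what let the researchers set them so that a target such as 80 percent is met consistently.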

Raj Rajkumar, director of the Real-Time and Multimedia Systems Laboratory at Carnegie Mellon University, mentions that his colleague John Lehoczky and the University of Wisconsin’s Parmesh Ramanathan have investigated approaches similar to Qu’s. But he says that Qu’s work is “the logical extension of earlier work. I think that what Gang did is very useful.” Ramanathan adds that with Qu’s approach, “my guess is that there will be considerable savings in power consumption. I think one can save quite a bit.”

Indeed, the Maryland researchers’ algorithm fared well in simulations, offering a 54 percent energy savings over the naive approach. “If you are using the current approach, which is going to keep on decoding everything,” Qu says, “we are going to probably consume only slightly more than one-third of that energy. That means you can probably extend the battery life by three times.”

Qu is quick to point out that the researchers’ simulations involved signals similar, but not identical, to video signals; real video decoding might not produce such dramatic results. On the other hand, Qu says that more-recent video-coding standards call for frame rates higher than 30 frames per second. Since 24 frames per second would then represent a smaller fraction of the total, the acceptable decoding rate could drop below 80 percent, saving even more power.

And the simulations do accurately model cell-phone voice decoding. In some handheld devices, notably the iPhone, voice communication is almost as big a battery drain as video playback. Without the handy reference of a near-century of analog movies, however, user tolerance for error in voice is harder to gauge.

Qu says his and his colleagues’ power-saving scheme could be implemented in either hardware or software, although in the near term, software would certainly be the cheaper option. He adds that the work has drawn some corporate interest, but that there are no plans to commercialize it at the moment. Nonetheless, “if we got some partners,” Qu says, “if they have a top engineer trying to work with us, this could be done in half a year.”
