With the advent of the Apple iPhone and its big, clear screen, the idea of using the morning commute to catch up on missed episodes of Lost became a lot more attractive. But video chews through a handheld’s battery much faster than, say, playing MP3s does. In the most recent issue of the Association for Computing Machinery’s Transactions on Embedded Computing Systems, researchers at the University of Maryland describe a simple way for multimedia devices to save power. In simulations, the researchers applied their technique to several common digital-signal-processing chores and found that, on average, it would cut power consumption by about two-thirds.
The premise of the technique, says Gang Qu, one of its developers, is that in multimedia applications, “the end user can tolerate some execution failure.” Much digital video, for example, plays at a rate of 30 frames per second. But “in the old movie theaters, they played at 24 frames per second,” Qu says. “That’s about 80 percent. If you can get 80 percent of the frames consistently correct, human beings will not be able to tell you’ve made mistakes.”
Unlike the movies in the old theaters, a digital video isn’t stored on reels of wound plastic; it’s stored as a sequence of 1s and 0s. That sequence is decoded as the video plays, and the decoding time can vary from one frame to the next. So digital media systems are designed to work rapidly enough that even the hardest-to-decode frames will be ready to be displayed on time.
Qu thinks that’s a waste of processing power. If the viewer won’t miss the extra six frames of video per second, there’s no reason to decode them. Lower decoding standards would mean less work for the video player’s processor, and thus lower power consumption.
The straightforward way to ensure a decoding rate of 80 percent would be to decode, say, eight frames in a row and ignore the next two. That approach, which Qu calls the "naive approach," did yield power savings in the Maryland researchers' simulations. The problem is that such a system doesn't distinguish frames that are hard to decode from those that are easy: if frame five is the hardest, the decoder will still plow through it; if frame nine is the easiest, the decoder will still skip it.
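The fixed-pattern scheme the article describes can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper; the function name and the 10-frame cycle are assumptions based on the eight-of-ten example above.

```python
# Sketch of the "naive approach": decode the first 8 frames of every
# 10-frame cycle and skip the last 2, regardless of how costly each
# individual frame would be to decode.

def naive_schedule(num_frames, decode_ratio=0.8, cycle=10):
    """Return the indices of the frames that get decoded under a
    fixed decode-then-skip pattern."""
    keep = int(decode_ratio * cycle)  # 8 out of every 10 frames
    decoded = []
    for i in range(num_frames):
        if i % cycle < keep:
            decoded.append(i)  # decoded, even if this frame is the hardest
        # else: skipped, even if this frame would have been trivially cheap
    return decoded

print(naive_schedule(20))
# Frames 8, 9, 18, and 19 are dropped no matter their decoding cost.
```

The weakness Qu points out is visible in the comments: the pattern is blind to per-frame cost, so the energy saved is only whatever the skipped slots happen to contain.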
Qu and his colleagues wrote an algorithm that imposes a series of time limits on the decoding process; if any of the limits is exceeded, the decoding is aborted. “You set certain milestones,” Qu says, “and you say, ‘Okay, after this time I still haven’t reached that first milestone, so it seems this is a hard task. Let me drop this one.’” Using statistics on the durations of particular tasks, the researchers can tune the algorithm to guarantee any desired completion rate.
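The milestone idea can be sketched as follows. The stage costs, deadlines, and function shape here are illustrative assumptions for exposition, not the researchers' actual algorithm or tuning statistics.

```python
# Hedged sketch of milestone-based early abort: decoding proceeds in
# stages, and each stage has a cumulative time budget (a "milestone").
# If any milestone is missed, the frame is judged "hard" and dropped,
# saving the energy the remaining stages would have consumed.

def decode_with_milestones(stage_costs, milestones):
    """stage_costs[i]: time the i-th decoding stage takes for this frame.
    milestones[i]:  cumulative deadline by which stage i must finish.
    Returns (completed, time_spent)."""
    elapsed = 0.0
    for cost, deadline in zip(stage_costs, milestones):
        elapsed += cost
        if elapsed > deadline:        # missed a milestone:
            return False, elapsed     # abort this hard frame early
    return True, elapsed

milestones = [2.0, 4.0, 6.0]
# An easy frame clears every milestone and completes.
print(decode_with_milestones([1.0, 1.5, 1.5], milestones))  # (True, 4.0)
# A hard frame blows the first milestone and is dropped at once.
print(decode_with_milestones([3.0, 1.5, 1.5], milestones))  # (False, 3.0)
```

In the researchers' scheme, the milestones themselves would be tuned from statistics on task durations so that the completion rate stays at or above the target (e.g., 80 percent); here they are simply hard-coded.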
Raj Rajkumar, director of the Real-Time and Multimedia Systems Laboratory at Carnegie Mellon University, mentions that his colleague John Lehoczky and the University of Wisconsin’s Parmesh Ramanathan have investigated approaches similar to Qu’s. But he says that Qu’s work is “the logical extension of earlier work. I think that what Gang did is very useful.” Ramanathan adds that with Qu’s approach, “my guess is that there will be considerable savings in power consumption. I think one can save quite a bit.”
Indeed, the Maryland researchers’ algorithm fared well in simulations, offering a 54 percent energy savings over the naive approach. “If you are using the current approach, which is going to keep on decoding everything,” Qu says, “we are going to probably consume only slightly more than one-third of that energy. That means you can probably extend the battery life by three times.”
Qu is quick to point out that the researchers' simulations involved signals similar, but not identical, to video signals; real video decoding might not produce such dramatic results. On the other hand, Qu says that more-recent video-coding standards call for frame rates higher than 30 frames per second. At those rates, the fraction of frames that must be decoded to preserve the 24-frame-per-second illusion drops below 80 percent, so even more frames could be skipped and even more power saved.
And the simulations do accurately model cell-phone voice decoding. In some handheld devices, notably the iPhone, voice communication is almost as big a battery drain as video playback. Without the handy reference of a near-century of analog movies, however, user tolerance for error in voice is harder to gauge.
Qu says his and his colleagues’ power-saving scheme could be implemented in either hardware or software, although in the near term, software would certainly be the cheaper option. He adds that the work has drawn some corporate interest, but that there are no plans to commercialize it at the moment. Nonetheless, “if we got some partners,” Qu says, “if they have a top engineer trying to work with us, this could be done in half a year.”