Raj Rajkumar, director of the Real-Time and Multimedia Systems Laboratory at Carnegie Mellon University, mentions that his colleague John Lehoczky and the University of Wisconsin’s Parmesh Ramanathan have investigated approaches similar to Qu’s. But he says that Qu’s work is “the logical extension of earlier work. I think that what Gang did is very useful.” Ramanathan adds that with Qu’s approach, “my guess is that there will be considerable savings in power consumption. I think one can save quite a bit.”
Indeed, the Maryland researchers’ algorithm fared well in simulations, offering a 54 percent energy savings over the naive approach. “If you are using the current approach, which is going to keep on decoding everything,” Qu says, “we are going to probably consume only slightly more than one-third of that energy. That means you can probably extend the battery life by three times.”
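Qu's back-of-envelope claim, that a device consuming roughly one-third of the baseline decoding energy lasts about three times as long, follows from simple arithmetic. A minimal sketch (the function name is mine, and it assumes decoding dominates the power draw and battery capacity is fixed):

```python
def battery_life_multiplier(energy_fraction: float) -> float:
    """If decoding consumes only `energy_fraction` of the baseline
    energy, a fixed-capacity battery lasts 1/energy_fraction times
    as long, assuming decoding dominates power consumption."""
    return 1.0 / energy_fraction

# Qu's estimate: slightly more than one-third of baseline energy
print(round(battery_life_multiplier(1 / 3), 1))   # 3.0

# The simulated 54 percent savings alone (46 percent consumed)
print(round(battery_life_multiplier(0.46), 1))    # 2.2
```

In practice other components (screen, radio) also draw power, so the real-world multiplier would be smaller than this idealized figure.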
Qu is quick to point out that the researchers’ simulations involved signals similar, but not identical, to video signals; real video decoding might not produce such dramatic results. On the other hand, Qu notes that more recent video-coding standards call for frame rates higher than 30 frames per second. Because viewers perceive motion as smooth at far lower rates, the decoder could skip a larger share of frames, dropping the decoding rate below 80 percent and saving even more power.
And the simulations do accurately model cell-phone voice decoding. In some handheld devices, notably the iPhone, voice communication is almost as big a battery drain as video playback. Without the handy reference of nearly a century of analog movies, however, user tolerance for error in voice is harder to gauge.
Qu says the power-saving scheme he and his colleagues devised could be implemented in either hardware or software, although in the near term, software would certainly be the cheaper option. He adds that the work has drawn some corporate interest but that there are no plans to commercialize it at the moment. Nonetheless, “if we got some partners,” Qu says, “if they have a top engineer trying to work with us, this could be done in half a year.”