MIT Technology Review

 


We owe our smartphones and supercomputers to the mathematicians and engineers who figured out in the 1940s and 1950s how to create machines that can crunch numbers at high speed with perfect accuracy. Some researchers are now deliberately reversing that principle, working on designs that sacrifice accuracy for power efficiency. The approach, known as approximate computing, could extend the battery life of mobile devices and enable advanced techniques such as computer vision.

Researchers from Purdue University reported last week on tests of a simple processor that uses approximation. The researchers were able to cut the chip’s energy consumption in half by allowing error to creep into some operations as it ran a range of software for tasks such as recognizing handwriting or detecting eyes in images.

Other researchers, from the University of Washington, have shown that the energy consumption of flash memory, used in mobile computers such as phones, could be cut if chips were allowed to store non-critical data imperfectly. Both groups presented their work at the Micro conference at the University of California, Davis.

Approximate computing has been researched for years but has now advanced to the point where it is possible to build real systems using the technique, says Anand Raghunathan, a professor at Purdue University who in 2006 was named by MIT Technology Review as one of the 35 Innovators under 35 (see “Making Mobile Secure”). “We have proof in working silicon that this can actually be done,” he says.

The timing is good, because although complete accuracy will always be needed for a lot of jobs, such as calculating paychecks, many of the advanced tasks being asked of computers, such as recognizing images or reproducing sound, can tolerate some sloppiness.

“For more and more computers, whether in phones or data centers, the end result is not a precise numerical value, it’s something meant for humans,” says Raghunathan. “The calculations involved in these apps don’t need to be treated as all sacred or precise—we can exploit that forgiving nature.” When a computer tries to recommend a movie or recognize your friend in a photo, for example, approximating some of the numbers used along the way is fine as long as the final answer is correct.
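The forgiving nature Raghunathan describes can be seen in a toy sketch (not any real system's code; the movie data and bit width are invented for illustration): a recommender that ranks movies by a dot-product score picks the same winner whether the intermediate products are computed exactly or truncated to a few bits of precision.

```python
# Toy illustration of error-tolerant computation: truncating intermediate
# values to low precision still yields the same final recommendation.

def truncate(x, bits=4):
    """Keep only `bits` fractional bits -- a stand-in for reduced precision."""
    scale = 1 << bits
    return int(x * scale) / scale

user = [0.9, 0.1, 0.4]                      # invented taste vector
movies = {
    "drama":  [0.8, 0.2, 0.1],
    "comedy": [0.1, 0.9, 0.3],
    "action": [0.7, 0.1, 0.9],
}

def score(taste, features, approx=False):
    s = 0.0
    for t, f in zip(taste, features):
        p = t * f
        s += truncate(p) if approx else p   # approximate only the intermediates
    return s

exact_pick  = max(movies, key=lambda m: score(user, movies[m]))
approx_pick = max(movies, key=lambda m: score(user, movies[m], approx=True))
print(exact_pick, approx_pick)   # both paths agree: action action
```

The individual scores differ between the two paths, but the ranking, which is all the user sees, does not.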

Allowing computers to approximate can save energy in a variety of ways, mostly by removing quality controls on the manipulation of electronic signals. Purdue’s processor design, dubbed Quora, saves energy by scaling back the precision used to express certain values it operates on, which allows some of its circuit elements to remain idle. It also dials down the voltage to some circuit elements when they work on approximated data. Crucially, the design doesn’t do that for every instruction a piece of software directs it to carry out. Instead, it looks for signals written into a program’s code indicating which parts of it are tolerant of some error and by how much.
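A rough sense of that annotation scheme can be sketched in software (this is a hypothetical illustration, not Quora's actual programming model; the `approx` and `exact` names are invented): the program marks which values tolerate error and by how much, and only those take the cheap, noisy path.

```python
import random

random.seed(0)

def approx(value, tolerance):
    """Cheap path: inject bounded relative error, standing in for
    reduced-precision, lower-voltage hardware."""
    noise = random.uniform(-tolerance, tolerance)
    return value * (1.0 + noise)

def exact(value):
    """Precise path: used for anything the programmer did not annotate."""
    return value

# A pixel average tolerates ~5% error; a payroll figure must stay exact.
pixels = [200, 180, 220, 210]
avg_brightness = approx(sum(pixels) / len(pixels), tolerance=0.05)

payroll = exact(1234.56)  # unannotated: always computed precisely

print(round(avg_brightness), payroll)
```

The key point is that error is opt-in and bounded: unannotated computations, like the payroll figure, never pass through the noisy path.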

Being able to specify the degree of noise acceptable for different parts of a program makes it possible to use approximation without letting errors overwhelm the results, says Swagath Venkataramani, the Purdue researcher who led work on the processor. He predicts that descendants of Quora will appear in commercial products as co-processors to conventional processors; such co-processors could take on tasks such as image processing that benefit from approximation. “As we have demonstrated, this includes recognition, data mining, search, and vision—applications that are growing extremely popular across the computing spectrum.”

Luis Ceze, an associate professor at the University of Washington, says the Purdue work shows that chips that approximate can be practical. However, Ceze says, it may be better to have chip hardware play a less active role in determining where to apply approximation, and use software instead. That could make it easier to automatically translate software written for conventional computers into a form that could be handled by a system that would use approximation, he says. However, Ceze acknowledges that the field is far from establishing a single way of doing things. “This area is very much in an exploration phase,” he says.

Ceze has little doubt that approximation will make it into commercial computing devices. His own group has begun talking with flash memory companies about a technique it developed that saves energy by cramming more bits into memory blocks than usual, only marginally degrading the stored data.
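The effect of such imperfect storage can be illustrated with a sketch (this is not the University of Washington technique itself, just an invented stand-in): if a memory is allowed to mis-store only the low-order bits of approximable data, every stored value drifts by at most a small, bounded amount.

```python
import random

random.seed(1)

def lossy_store(byte, noisy_bits=2):
    """Store a byte imperfectly: the bottom `noisy_bits` bits come back
    as arbitrary values, the high bits are preserved exactly."""
    mask = (1 << noisy_bits) - 1
    return (byte & ~mask) | random.randint(0, mask)

original = [137, 200, 64, 255, 18]          # e.g. pixel intensities
stored = [lossy_store(b) for b in original]

worst = max(abs(a - b) for a, b in zip(original, stored))
print(stored, worst)   # each value is off by at most 3 (the two noisy bits)
```

For an image, an error of 3 out of 255 per pixel is imperceptible, which is why data like this counts as "approximable" while a bank balance does not.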

Consumers are playing a major part in driving the industry’s openness to such ideas, says Ceze. “We have a lot of data these days and a lot of it is approximable in nature, things like images, sound, video data from sensors.”
