The company’s prototype chip is based on the 45-nanometer transistors in its products today, but it incorporates resilient circuits. The chip runs at low voltage, and when an error-detection circuit spots a problem, the calculation is redone at high voltage to correct it. “When you have to correct an error, and reexecute a process more slowly, there is a tiny penalty,” says Wang. “But overall, you get a huge return.” Tests in the lab have shown that the chip can either save 37 percent on power consumption or operate 21 percent faster at a given power level.
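The detect-and-redo strategy above can be sketched in a toy simulation. Everything numeric here is an assumption for illustration only: the fault rate is invented, and the energy figures are loosely inspired by the 37 percent savings the article cites, not actual Intel measurements.

```python
import random

# Hypothetical sketch of the resilient-circuit strategy: run each operation
# at low voltage, and when an error-detection check flags a fault, redo the
# operation at a safe high voltage. Rates and costs are illustrative only.
LOW_V_ENERGY = 0.63      # assumed: ~37% cheaper than high voltage
HIGH_V_ENERGY = 1.0
LOW_V_ERROR_RATE = 0.01  # assumed: faults must be "few and far between"

def run_resilient(op, x, rng):
    """Execute op(x) at low voltage; re-execute at high voltage on a fault."""
    energy = LOW_V_ENERGY
    if rng.random() < LOW_V_ERROR_RATE:
        # Error detected: pay the full high-voltage cost to redo the work.
        energy += HIGH_V_ENERGY
    return op(x), energy

rng = random.Random(0)
total = 0.0
for i in range(10_000):
    _, e = run_resilient(lambda v: v * v, i, rng)
    total += e
avg = total / 10_000
print(f"average energy per op: {avg:.3f} (vs. 1.0 always at high voltage)")
```

Because faults are rare, the occasional re-execution penalty barely dents the per-operation savings, which is the "tiny penalty, huge return" trade-off Wang describes.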
“They push it as close to the danger zone as they can, and things sometimes go bad, and they correct for it, which is very clever,” says Krishna Palem, professor of computing at Rice University in Houston. “The number of times you do that ought to be few and far between.” Mathematicians have developed this strategy over decades, but Palem says Intel seems to be the only company testing circuits that operate on these principles in the context of a product. Palem is developing low-voltage, low-power computing strategies that are even more laissez-faire about errors. Some errors don’t need to be corrected at all if they occur in calculations that aren’t critical, such as one that causes an imperceptible distortion in an image but doesn’t freeze it. Palem believes that combining his technique with Intel’s resilient circuits could help chips save even more power.
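Palem's more laissez-faire idea can be sketched the same way: only pay to re-execute when the result is critical, and simply tolerate small errors in non-critical work like pixel math. The error model, costs, and the `critical` flag are all hypothetical, used only to illustrate the distinction the article draws.

```python
import random

# Hypothetical sketch of an error-tolerant variant: low-voltage arithmetic
# occasionally flips a low-order bit, and only critical results (say, memory
# addresses) are corrected; non-critical ones (say, pixel values) are not.
LOW_V_ENERGY, HIGH_V_ENERGY, ERROR_RATE = 0.63, 1.0, 0.01  # assumed values

def low_voltage_add(a, b, rng):
    """Add at low voltage; occasionally corrupt the least-significant bit."""
    result = a + b
    if rng.random() < ERROR_RATE:
        result ^= 1  # small, bounded error: imperceptible in image data
    return result, LOW_V_ENERGY

def compute(a, b, critical, rng):
    result, energy = low_voltage_add(a, b, rng)
    if critical and result != a + b:
        # A critical result was wrong: redo it at high voltage.
        result, energy = a + b, energy + HIGH_V_ENERGY
    return result, energy

rng = random.Random(1)
# Non-critical pixel math is never re-executed, so its cost never rises.
pixel_energy = sum(compute(p, 7, critical=False, rng=rng)[1] for p in range(1000))
# Critical math is corrected when faulted, costing slightly more on average.
addr_energy = sum(compute(p, 7, critical=True, rng=rng)[1] for p in range(1000))
print(f"non-critical: {pixel_energy:.1f}  critical: {addr_energy:.1f}")
```

The non-critical path spends exactly the low-voltage budget, while the critical path pays a small correction surcharge, which is why Palem expects combining the two schemes to save more power than correcting everything.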
Intel would not disclose when it will incorporate resilient circuits into its products. Its next generation of mobile processors, which will come to market in a few months and which is based on 45-nanometer transistors, won’t use this error-detection strategy. But error-generating leakiness becomes more of a problem as transistors shrink, so something like circuit resiliency may become a necessity in the next few years. “It will really begin to show at the 20-nanometer level,” says Palem.