On-chip cooling could increase performance and decrease power consumption
Source: “On-chip cooling by superlattice-based thin-film thermoelectrics”
Ravi Prasher et al.
Nature Nanotechnology online, January 25, 2009
Results: Researchers at Intel, Arizona State University, and two North Carolina companies, Nextreme Thermal Solutions and RTI International, have integrated a thermoelectric cooler into a computer chip for the first time. The semiconductor-based device, which uses electric current to move heat from one place to another, cooled a targeted region in a chip by 15 °C.
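The mechanism behind "using electric current to move heat" is the Peltier effect. In standard thermoelectric-cooler notation (textbook form, not taken from the paper itself), the net heat pumped from the cold side is

$$ Q_c = S\,T_c\,I \;-\; \tfrac{1}{2} I^2 R \;-\; K\,\Delta T $$

where $S$ is the Seebeck coefficient of the material, $T_c$ the cold-side temperature, $I$ the drive current, $R$ the device's electrical resistance, $K$ its thermal conductance, and $\Delta T$ the temperature difference across it. Nanostructured thin films improve the trade-off among $S$, $R$, and $K$ relative to bulk material.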
Why it matters: When microprocessors and optoelectronics operate, they generate heat; too much can inhibit performance and reduce reliability. Today’s cooling systems use flat metal plates attached to a chip to disperse the heat, and metal heat sinks, fans, and liquid-based cooling systems to remove it. But these technologies are bulky and inefficient. If small thermoelectric coolers could be built onto the heat-dissipating metal plates to target hot spots in the chip, they could replace other cooling systems and save space. Such focused cooling might also consume less energy.
Methods: The researchers selected thermoelectric coolers made from nanostructured thin films whose cooling properties had been proved superior to those of bulk thermoelectric materials. To attach a cooler to a copper plate already incorporated into the chip packaging, they applied an insulating material to the copper and deposited metal lines to serve as electrical connections to the cooler. Then they filled the spaces between the lines with a polymer for mechanical stability and soldered the cooler to the lines.
Next steps: Thermal resistance at the contact point between the cooler and the copper plate keeps the integrated device from cooling as effectively as a stand-alone device would. To reduce this resistance, the researchers are exploring alternative connectors, such as special types of solder and carbon nanotubes. They also plan to use more thermoelectric coolers to cover all the hot spots on a chip.
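Why contact resistance hurts can be seen with a simple series-resistance estimate: the temperature drop across the parasitic interface is subtracted from the cooling delivered to the hot spot. All numbers below are hypothetical, chosen for illustration, and are not measurements from the paper.

```python
# Hypothetical figures, for illustration only.
q = 10.0          # heat pumped by the cooler, in watts
r_contact = 0.5   # parasitic thermal resistance at the cooler-copper
                  # interface, in K/W

# Temperature drop wasted across the contact interface: this much of the
# cooler's temperature budget never reaches the hot spot.
dt_wasted = q * r_contact

print(dt_wasted)  # 5.0 K lost to the interface alone
```

Halving the contact resistance (for example, with a better solder or carbon-nanotube connector) directly halves this wasted temperature drop.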
A sociological theory could help overloaded routers direct traffic
Source: “Navigability of Complex Networks”
Marián Boguña et al.
Nature Physics 5: 74-80
Results: Researchers at the University of Barcelona and the University of California, San Diego, have developed a mathematical model demonstrating that Internet routers can effectively deliver data even without detailed information about all the routers in a network. Having limited information about neighboring routers is enough.
Why it matters: The current system for routing data between the networks of different Internet service providers (ISPs) isn’t expected to continue working as the Internet grows. The routers that handle this traffic require lists of network addresses, called routing tables, which tell them where to forward packets of information. These tables must be regularly updated, a process that can take minutes for a single change. As the network grows, the number of updates increases to the point that the tables are almost never up to date, and parts of the network are not accessible because addresses are missing. These problems could be avoided with the new model, since it doesn’t require up-to-date routing tables.
Methods: The researchers looked to sociology experiments from the 1960s in which a person was asked to forward a letter to a stranger by sending it through friends and acquaintances. It took only a few hops for the letter to reach its intended recipient because people used clues, such as a friend’s profession, to guess who might help move the letter closer. Similarly, the researchers’ model shows that by using only a little bit of information about the nearest neighboring routers, such as their location and the type of traffic they recently received (data that can be acquired quickly via the direct link between neighbors), routers can continue to deliver packets of information even if their routing tables are missing addresses.
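The forwarding idea in the model is greedy routing: each router knows only its own position in some underlying metric space and the positions of its direct neighbors, and it hands a packet to whichever neighbor is closest to the destination. A minimal sketch of that rule follows; the toy graph, coordinates, and the `greedy_route` helper are all illustrative assumptions, not the paper's actual model.

```python
import math

# Hypothetical toy network: each node has coordinates in a hidden metric
# space (standing in for location, traffic type, and so on).
coords = {
    "A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1), "E": (3, 1),
}
# Each node knows only its direct neighbors -- there is no global table.
neighbors = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"],
}

def dist(u, v):
    """Euclidean distance between two nodes in the hidden space."""
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

def greedy_route(src, dst):
    """Repeatedly forward to the neighbor closest to dst.

    Returns (path, delivered). Delivery fails if no neighbor is closer
    to the destination than the current node (a local minimum).
    """
    path = [src]
    current = src
    while current != dst:
        best = min(neighbors[current], key=lambda n: dist(n, dst))
        if dist(best, dst) >= dist(current, dst):
            return path, False  # stuck: greedy forwarding fails here
        path.append(best)
        current = best
    return path, True

path, delivered = greedy_route("A", "E")
print(path, delivered)  # ['A', 'B', 'C', 'D', 'E'] True
```

The key point the model makes is that on networks with the right structure, this purely local rule almost always succeeds, so packets keep flowing even when global routing tables are stale or incomplete.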
Next steps: The researchers suspect that they could further improve the performance of their model by looking at the location and traffic history of routers a few hops away from a particular router. They also hope to test the protocol in a working network.