An experimental form of data storage can now be made cheaply using conventional manufacturing methods
Source: “Racetrack Memory Cell Array with Integrated Magnetic Tunnel Junction Readout”
Anthony J. Annunziata et al.
Proceedings of the IEEE International Electron Devices Meeting, Washington, D.C., December 5–7, 2011
Results: Researchers at IBM made a novel type of memory, known as racetrack memory, by means of inexpensive manufacturing processes used to make conventional computer chips.
Why it matters: The basic design of racetrack memory, which stores information on nanowires, was first proved feasible in 2009. The technology has the potential to store data faster than hard disk drives and to store thousands of times more data in a given space (see TR10, March/April 2009). The early prototypes, however, were made using specialized lab processes that are impractical for mass production. By making a prototype using existing industrial processes, the researchers have shown that the technology could be commercially viable.
Methods: The researchers used conventional lithography techniques to create the nanowires that are the basis of racetrack memory and to attach the magnetic tunnel junctions needed to read out the data stored in them. A layer of conventional silicon circuitry underneath operates the completed memory. The team experimented with nanowires of different shapes and sizes to find structures that could be manufactured reliably.
Next Steps: Although the prototype worked, testing showed that the magnetic properties of the nickel-iron alloy used to make the nanowires limited the amount of data each wire could store. The nickel-iron alloy was initially chosen because it is a soft magnetic material—a material with properties that make it easy to magnetize and demagnetize with an external field. The researchers are now investigating so-called hard magnetic materials, which are not easily demagnetized and could store more data.
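Racetrack memory behaves like a magnetic shift register: bits are stored as magnetized domains along a nanowire, and current pulses shift the whole pattern past a fixed read/write element. The toy model below illustrates that shift-register principle only; the class, wire length, and head position are invented for illustration and are not from the paper.

```python
# Toy model of a racetrack nanowire: bits are magnetic domains that
# current pulses shift past a single fixed read/write head.
from collections import deque

class RacetrackWire:
    def __init__(self, length, head_position=0):
        self.domains = deque([0] * length)   # magnetization of each domain
        self.head = head_position            # fixed read/write element

    def shift(self, steps=1):
        """A current pulse moves every domain along the wire at once."""
        self.domains.rotate(steps)

    def read(self):
        """The tunnel-junction readout senses only the domain at the head."""
        return self.domains[self.head]

    def write(self, bit):
        self.domains[self.head] = bit

wire = RacetrackWire(length=8)
for bit in [1, 0, 1, 1]:
    wire.write(bit)
    wire.shift()             # move the freshly written domain down the wire

# Shift the stored pattern back under the head to read it out;
# bits return in last-written-first order, like a stack.
readout = []
for _ in range(4):
    wire.shift(-1)
    readout.append(wire.read())
print(readout)               # [1, 1, 0, 1]
```

The key point the model captures is that a single read element serves many stored bits, which is why each wire's storage capacity, not the readout, was the limiting factor the researchers ran into.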
Ricocheting radio signals off the ceiling could improve the performance of data centers
Source: “3D Beamforming for Wireless Data Centers”
Weile Zhang et al.
Proceedings of the 10th ACM Workshop on Hot Topics in Networks, Cambridge, MA, November 14–15, 2011
Results: Simulations by researchers at the University of California, Santa Barbara, show that using wireless signals rather than cables to link the computers inside data centers can boost the speed at which data moves inside such facilities by 30 percent. That’s because the wireless links enable servers to communicate directly instead of sharing congested network cabling with all computers in the data center.
Why it matters: Keeping information moving more reliably within data centers could lower costs and improve performance for many services, from Facebook to financial trading platforms. Today, spikes in demand can cause congestion and slowdowns because cabled networks are limited by their complexity and by physical space. Wireless links could be rapidly switched on as needed to connect any two points and relieve data congestion.
Methods: In the researchers’ design, servers use wireless transmitters to send tight beams that can be picked up only by the antennas on the servers at which they’re aimed. To deliver those beams across a crowded server room, the researchers decided to send them over the tops of the server racks by reflecting them off metal plates on the ceiling. Radio-absorbent material around the receiving antennas limits unwanted reflections that might interfere with the wireless links. The system uses the familiar Wi-Fi protocol for sending wireless data, but at a much higher frequency than that used in homes and businesses (60 gigahertz rather than 2.4 gigahertz). The higher-frequency signal allows data to be transferred at a much greater rate.
Next Steps: The researchers are now outfitting a small data center with the wireless technology.
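The ceiling bounce in the design above can be reasoned about with simple mirror-image geometry: a beam reflected off a flat ceiling travels the same distance as a straight line to the receiver's mirror image above the ceiling, so the path length is the hypotenuse over twice the antenna-to-ceiling height. A rough sketch of that geometry and the resulting free-space loss at 60 gigahertz; the rack spacing and ceiling height are assumed numbers, not figures from the paper.

```python
import math

C = 299_792_458.0          # speed of light, m/s
FREQ = 60e9                # 60 GHz carrier used in the design

def bounce_path_length(horizontal_dist, height_to_ceiling):
    """Length of a beam reflected off the ceiling, via the
    mirror-image construction: hypot(d, 2h)."""
    return math.hypot(horizontal_dist, 2 * height_to_ceiling)

def free_space_loss_db(path_length_m, freq_hz=FREQ):
    """Standard free-space path loss: 20*log10(4*pi*d/lambda)."""
    wavelength = C / freq_hz
    return 20 * math.log10(4 * math.pi * path_length_m / wavelength)

# Assumed geometry: racks 10 m apart, ceiling 3 m above the antennas.
d = bounce_path_length(10.0, 3.0)
loss = free_space_loss_db(d)
print(f"bounce path: {d:.2f} m, free-space loss: {loss:.1f} dB")
# bounce path: 11.66 m, free-space loss: 89.3 dB
```

The short wavelength at 60 gigahertz (about 5 millimeters) is what makes the tightly focused beams and small antennas practical, at the cost of higher path loss than at 2.4 gigahertz.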