Today’s top-of-the-line computers have dual-core processors: two computing units that can handle separate tasks at the same time. And by next year, major chip makers Intel and AMD will have rolled out quad-core systems. Although multiple cores are theoretically faster than a single core, writing software that takes advantage of many processors, a task called parallel programming, is extremely difficult.
Recent research from MIT, however, could make parallel programming easier, ultimately helping to keep personal-computing performance on track. The researchers are proposing a new computing framework that combines specialized software instructions with modifications to multi-core hardware, which could let programmers write software without having to deal with some of the tedious details of parallel programming.
Historically, writing software for multi-core systems has been the job of experts in the supercomputing world. But with the coming age of personal supercomputers, average programmers also need to be able to write software with multiple cores in mind.
“That’s a scary thing,” says Krste Asanovic, professor of electrical engineering and computer science at MIT, “because most have never done that, and it’s quite difficult to do.” Asanovic and his colleagues are tackling one of the main challenges programmers face when they try to write software that runs efficiently on multi-core systems: coordinating the tasks that run on separate cores so that they don’t cause the system to crash.
When an application such as Microsoft Outlook or a video player is parallelized, certain tasks are divvied up among the processors. But often, these separate tasks need to dip into a shared memory cache to access data. When one task is accessing memory, another task needs the same part of that memory, and proper safeguards aren’t in place, the system can crash. This can be compared to a couple with a shared checking account with limited funds writing checks simultaneously and inadvertently overdrawing from the account.
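The checking-account analogy can be made concrete with a short sketch (names and amounts here are illustrative, not from the article). Two tasks each read the shared balance, see enough funds, and then both write their withdrawal back. Interleaving the steps by hand makes the failure deterministic:

```python
# A minimal sketch of the shared-account race: two tasks each read the
# balance before either one writes its withdrawal back, so one update
# silently overwrites the other (a "lost update").

balance = 100  # shared account with limited funds

# Task A and task B both check the balance before either one writes.
seen_by_a = balance   # A sees 100
seen_by_b = balance   # B also sees 100 -- a stale read

if seen_by_a >= 80:
    balance = seen_by_a - 80   # A withdraws 80 -> balance 20
if seen_by_b >= 80:
    balance = seen_by_b - 80   # B withdraws 80 -> balance 20 again

print(balance)  # 20, yet 160 was withdrawn from a 100-dollar account
```

The account ends at 20 even though 160 in checks cleared: the couple has effectively overdrawn, because each task made its decision on a snapshot the other task was about to invalidate.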
Standard parallel programming requires a programmer to anticipate these simultaneous activities and make sure that once a certain activity begins to access memory, it “locks” out other activities so they wait until the transaction is completed.
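A lock fixes the sketch above by making the check and the withdrawal a single uninterruptible step. This is a hypothetical illustration using Python's `threading.Lock`, not code from the MIT project:

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    """Check the balance and withdraw atomically, under the lock."""
    global balance
    with lock:  # only one task may touch the balance at a time
        if balance >= amount:
            balance -= amount
            return True
        return False  # insufficient funds: the withdrawal is refused

# Two tasks each try to withdraw 80 from the 100-dollar account.
threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 20: exactly one withdrawal succeeds, whichever runs first
```

Whichever task acquires the lock first withdraws 80; the other then sees only 20 remaining and is refused, so the account can never be overdrawn.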
When implemented correctly, the locks speed up parallel systems, but putting them into practice is complicated, says Jim Larus, research area manager at Microsoft. For instance, he explains, two different tasks could each acquire a lock at the same time and then wait for the lock the other one holds. Without some third party coming in to break up the “deadlock,” Larus says, the applications would stay frozen.