Today’s top-of-the-line computers have dual-core processors: two computing units that can handle separate tasks at the same time. And by next year, major chip makers Intel and AMD will have rolled out quad-core systems. Although multiple processors are theoretically faster than a single core, writing software that takes advantage of many processors, a task called parallel programming, is extremely difficult.

Recent research from MIT, however, could make parallel programming easier, ultimately helping to keep personal-computing performance on track. The researchers are proposing a new computing framework that combines specialized software instructions with modifications to multi-core hardware, which could allow programmers to write software without having to deal with some of the tedious details of parallel programming.

Historically, writing software for multi-core systems has been the job of experts in the supercomputing world. But with the coming age of personal supercomputers, average programmers also need to be able to write software with multiple cores in mind.

“That’s a scary thing,” says Krste Asanovic, professor of electrical engineering and computer science at MIT, “because most have never done that, and it’s quite difficult to do.” Asanovic and his colleagues are tackling one of the main challenges that programmers face when they try to write software that will run efficiently on multi-core systems: coordinating multiple tasks that run on separate cores in a way that doesn’t cause the system to crash.

When an application such as Microsoft Outlook or a video player is parallelized, certain tasks are divvied up among the processors. But often these separate tasks need to dip into a shared memory cache to access data. If one task is accessing a piece of memory, another task needs that same piece at the same time, and proper safeguards aren’t in place, the system can crash. It is like a couple sharing a checking account with limited funds: if both write checks at the same time, they can inadvertently overdraw the account.
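The problem is easy to reproduce in a few lines of code. The following C++ sketch is only an illustration of the general hazard, not code from the MIT project: two threads draw on the same $100 balance, and because neither waits for the other, both $60 checks can clear and the account goes negative.

```cpp
// Illustrative sketch (not from the article): two threads share one checking
// account and both try to cash a check with no coordination at all.
#include <iostream>
#include <thread>

int balance = 100;  // shared funds

void write_check(int amount) {
    // Read-modify-write on shared memory: both threads can read the old
    // balance of 100 before either write lands, so both checks "clear"
    // and the account is overdrawn.
    if (balance >= amount) {
        balance -= amount;
    }
}

int main() {
    std::thread a(write_check, 60);
    std::thread b(write_check, 60);
    a.join();
    b.join();
    std::cout << "final balance: " << balance << "\n";  // may print -20
    return 0;
}
```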

Standard parallel programming requires a programmer to anticipate these simultaneous activities and make sure that once a certain activity begins to access memory, it “locks” out other activities so they wait until the transaction is completed.
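In code, that conventional fix looks something like the sketch below, again an illustration of ordinary locking rather than the MIT framework: a mutex guards the balance, so the second check cannot be processed until the first transaction has finished.

```cpp
// Illustrative sketch of the conventional locking approach: a mutex guards
// the balance so only one check can be processed at a time.
#include <iostream>
#include <mutex>
#include <thread>

int balance = 100;
std::mutex balance_lock;  // "locks out" other activities on the balance

void write_check(int amount) {
    std::lock_guard<std::mutex> guard(balance_lock);  // acquire the lock
    if (balance >= amount) {
        balance -= amount;  // no other thread can interleave here
    }
}  // the lock is released when guard goes out of scope

int main() {
    std::thread a(write_check, 60);
    std::thread b(write_check, 60);
    a.join();
    b.join();
    std::cout << "final balance: " << balance << "\n";  // always prints 40
    return 0;
}
```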

When implemented correctly, the locks speed up parallel systems, but putting them into practice is complicated, says Jim Larus, research area manager at Microsoft. For instance, he explains, two different applications could each acquire a lock that the other needs at the same time, forcing them to wait for each other. Without some third party coming in to break up the “deadlock,” Larus says, the applications would stay frozen.
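The C++ sketch below illustrates the kind of deadlock Larus describes, with two threads standing in for the two applications: if each one grabs its first lock before the other releases, both freeze.

```cpp
// Illustrative sketch of a deadlock: each task grabs one lock and then waits
// forever for the lock the other task is holding.
#include <mutex>
#include <thread>

std::mutex lock_a;
std::mutex lock_b;

void task_one() {
    std::lock_guard<std::mutex> first(lock_a);   // holds A ...
    std::lock_guard<std::mutex> second(lock_b);  // ... and waits for B
}

void task_two() {
    std::lock_guard<std::mutex> first(lock_b);   // holds B ...
    std::lock_guard<std::mutex> second(lock_a);  // ... and waits for A
}

int main() {
    std::thread t1(task_one);
    std::thread t2(task_two);
    // With an unlucky interleaving, neither join() ever returns: each thread
    // is frozen waiting for a lock the other will never release.
    t1.join();
    t2.join();
    return 0;
}
```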
