The MIT researchers get around this by using an approach called transactional memory, a research area that has exploded in the past five years, says Asanovic. Transactional memory coordinates access to shared data automatically, so that programmers don’t have to write explicit synchronization into their programs. It allows numerous transactions to operate on the same memory at the same time. When a transaction completes, the system verifies that other transactions haven’t made changes to that memory that would invalidate its result. If they have, the transaction is re-executed until it succeeds.
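
In code, the cycle Asanovic describes, run optimistically, validate, and retry on conflict, can be sketched with an ordinary atomic compare-and-swap. The C++ sketch below is illustrative only (the shared counter and the `deposit` function are invented for this example, not taken from the MIT work): each attempt commits only if no other thread has changed the value since it was read, and otherwise re-executes.

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

// Illustrative sketch: the optimistic "read, compute, validate, retry"
// cycle that a transactional-memory system performs automatically.
std::atomic<long> balance{0};

void deposit(long amount) {
    long observed = balance.load();
    // Commit only if no other thread changed `balance` since we read it;
    // on conflict, compare_exchange_weak reloads `observed` and we retry.
    while (!balance.compare_exchange_weak(observed, observed + amount)) {
        // The "transaction" is re-executed until it succeeds.
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([] { for (int j = 0; j < 100000; ++j) deposit(1); });
    for (auto& t : threads) t.join();
    std::cout << balance << '\n';  // Always 400000, despite the contention.
}
```

Real transactional memory applies this same validate-or-retry discipline to arbitrary groups of reads and writes rather than to a single word.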

While transactional memory works in some cases, it’s still not perfect, explains Asanovic. Most of the time transactions are small, and the fixed amount of memory in the hardware can handle them quickly. But, he says, once in a while a transaction requires more memory than the fixed amount available, and when this happens, the system fails. Asanovic says that by adding a small backup memory cache to the hardware, and by adding software that recognizes when transactions are overflowing, the capacity of transactional memory can be increased, eliminating those failures.
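
The article doesn’t detail the MIT hardware, but the overflow-and-fall-back pattern it describes survives in later commercial designs. As a hedged analogue only, Intel’s RTM instructions (a hardware transactional memory added to x86 years after this work) report an abort when a transaction outgrows the hardware’s fixed buffer, and the conventional remedy is a software backup path, as sketched below. The function name and the spin-flag fallback are invented for illustration; building this requires an RTM-capable CPU and a matching compiler flag (e.g. g++ -mrtm).

```cpp
#include <immintrin.h>   // _xbegin / _xend / _xabort: Intel RTM intrinsics
#include <atomic>

// Hedged sketch, not the MIT design: try a bounded hardware transaction
// first; if it aborts (e.g. capacity overflow), retry on a software path.
std::atomic<bool> fallback_in_use{false};   // guards the software path

void update_shared(long& shared_value, long delta) {
    unsigned status = _xbegin();            // begin hardware transaction
    if (status == _XBEGIN_STARTED) {
        if (fallback_in_use.load())         // a software-path writer is
            _xabort(0xff);                  // active, so abort and join it
        shared_value += delta;              // tracked by the hardware
        _xend();                            // commit atomically
        return;
    }
    // Aborted: (status & _XABORT_CAPACITY) means the transaction touched
    // more memory than the fixed hardware buffer can track, the case a
    // backup cache is meant to absorb. Fall back to software.
    while (fallback_in_use.exchange(true)) { /* spin */ }
    shared_value += delta;
    fallback_in_use.store(false);
}
```

Because the hardware transaction reads the fallback flag, any software-path writer that sets it automatically aborts conflicting hardware transactions, keeping the two paths consistent.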

The method that the MIT researchers use relies on a combination of software and hardware to make transactional memory better, says Microsoft’s Larus, and there have been numerous designs that rely on software or hardware to varying degrees. “It’s not clear yet where the right line is” between using hardware and software to solve the problem, he says, but the researchers are tackling important unresolved issues in programming multi-core systems.

Microsoft, AMD, Intel, and universities such as MIT and Stanford, among others, are all invested in making multi-core systems easier to program. In addition to improving transactional memory, researchers are exploring better ways of debugging parallel programs and also creating libraries of ready-made parallel operations so that programmers can plug chunks of code into software without having to work out the kinks each time.
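
Such ready-made parallel libraries did eventually reach mainstream toolchains. As one later example (not one the article names), C++17’s standard parallel algorithms let a programmer request a parallel sort by adding a single execution-policy argument, with no explicit thread or lock code. The sample data below is invented for illustration.

```cpp
#include <algorithm>
#include <execution>   // C++17 execution policies (GCC may need -ltbb)
#include <random>
#include <vector>

int main() {
    // Invented sample data for illustration.
    std::vector<double> data(1'000'000);
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    for (auto& x : data) x = dist(rng);

    // std::execution::par asks the library to parallelize the sort across
    // cores; the kinks of work division and synchronization stay inside
    // the library, as the researchers describe.
    std::sort(std::execution::par, data.begin(), data.end());
}
```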

Currently, dual-core systems aren’t as affected by the lack of truly parallel programs as the coming quad-core systems will be, says Asanovic. For the most part, operating systems such as Windows and Mac OS X are able to split up applications effectively on a dual-core system. For instance, a virus scanner can run unobtrusively in the background on one core while applications such as Microsoft Word or Firefox run on the other, without their speed being hampered.

But when it comes to four, eight, or 16 cores, the applications themselves will need to be modified to take advantage of the additional cores. Asanovic says transactional memory won’t be a silver bullet that makes it easier to program these systems, but he expects it to be a component of the future parallel-computing model. “It’s one mechanism that appears to be useful,” he says.
