
The evolution of computer programming has been largely independent of actual computer evolution. Languages such as C++ have lived through many generations of computers, and although they’ve surely been influenced by changes in technology, most modifications of them have been attempts to meet the needs of people, not computers.

Computer evolution, however, is now headed down an entirely new path: instead of simply becoming faster, our computer processors are being conjoined to work together. That new computer architecture requires a serious evolution in computer programming. Without it, we can only scratch the surface of what multicore computing can really do (see “Parallel Universe”).

It’s a change that will not come easily. The last great shift in computer programming was object orientation. This didn’t just represent a new language or syntax; it represented a new way of thinking about programming, a new way of visualizing programs even before the first line of code was written. Some programmers simply could not make the leap. Whether their minds were too stuck on procedural development or the concepts themselves were too abstract, they couldn’t adjust what they knew.

Programming for multicore technology is again not just a fantastic leap in programming, but a leap in conceiving and understanding programs. Historically, programming could be described as giving instructions to a computer on how to act upon some public data; the easier it was to get to the data, the faster and easier coding would be. The principle is similar for multicore programming, except that programmers must now take into account that other code may be acting on the same data at the same time.
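To make the hazard concrete, here is a minimal sketch in Java (an illustration of the general problem, not code from the article): two threads increment the same shared counter with no coordination, and because each increment is a separate read-modify-write, updates routinely overwrite one another and the final count comes up short.

    // SharedCounter.java: two threads acting on the same public data at once.
    public class SharedCounter {
        static int counter = 0; // shared, mutable data

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter++; // unsynchronized read-modify-write on shared data
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            // Usually prints a value well below the expected 2,000,000.
            System.out.println("Expected 2000000, got " + counter);
        }
    }

Run it a few times and the result varies from run to run, which is exactly the kind of behavior single-core programmers never had to reason about.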

One reason this problem has proved difficult is our insistence on shoehorning old languages into the new paradigm. Languages such as Java and C++ are being patched or updated to try to keep up, but programs written in them require careful coding when run on multicore chips. The good news is that new languages, built with the idea that shared data can and will change without notice, are better suited to this new paradigm. Part of their success lies in the fact that they are designed to keep data unshared until the programmer explicitly says otherwise.
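In an existing language such as Java, that care is something the programmer must add by hand. One possible fix for the counter above (a sketch of one approach, not the only one) is to wrap the shared value in an atomic type so that concurrent updates can no longer be lost:

    import java.util.concurrent.atomic.AtomicInteger;

    // SafeCounter.java: the same workload, with the shared data made
    // explicitly safe for concurrent updates.
    public class SafeCounter {
        static final AtomicInteger counter = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter.incrementAndGet(); // atomic read-modify-write
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            System.out.println("Got " + counter.get()); // always 2000000
        }
    }

Languages designed around the new paradigm invert the default: data is immutable or confined to a single thread unless the programmer explicitly shares it, so the discipline shown above becomes the normal case rather than something to remember.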

With these new languages and programmers’ development of new skills, the acceleration of computing power that we’ve almost come to take for granted will soon be back on track.

Paul Tyma is the CTO of Home-Account, an analytics startup focusing on the mortgage industry. Previously he was a senior engineer in Google’s multicore team.
