MIT researchers have developed a compiler that makes parallel programs much more efficient, pulling off a coding feat that the industry had thought impossible. The compiler—a program that converts computer code written in a high-level language into low-level machine instructions—“optimizes parallel code better than any commercial or open-source compiler,” says professor Charles Leiserson. “And it also compiles where some of these other compilers don’t.”
A typical compiler has a “front end” tailored to a specific programming language and a “back end” tailored to a specific chip design. In between—in the so-called middle end—the compiler uses an “intermediate representation,” compatible with many different front and back ends, to describe computations.
Optimization typically occurs in the middle end. There, the compiler extensively analyzes a program, trying to deduce the most efficient implementation of its algorithms.
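For ordinary serial code, those analyses are well established. Here is a minimal sketch of the sort of rewrite a middle-end pass performs; the function names are illustrative, not taken from the researchers' work:

```c
#include <stddef.h>

/* Before optimization: scale * bias is recomputed on every
 * iteration, even though neither value changes inside the loop. */
double sum_scaled(const double *a, size_t n, double scale, double bias) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * (scale * bias);
    return sum;
}

/* After loop-invariant code motion, a standard middle-end pass,
 * the compiler effectively produces this version instead. */
double sum_scaled_opt(const double *a, size_t n, double scale, double bias) {
    double factor = scale * bias;  /* hoisted out of the loop */
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * factor;
    return sum;
}
```

Transformations like this are safe only because the optimizer can see, and reason about, every operation in the loop.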
But that approach generally breaks down for parallel programs. Managing parallel execution requires a lot of extra bookkeeping code, typically calls into a parallel runtime library, and existing compilers add that code before optimization occurs. The optimizer cannot reason about those opaque runtime calls, so to stay safe it leaves the code around them largely alone.
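To make that concrete: in a fork-join language such as Cilk, the setting for this research, the programmer marks a call that may run in parallel, and a conventional compiler immediately expands that mark into runtime-library calls. The sketch below assumes an OpenCilk-style toolchain; the lowered form and its runtime function names are simplified illustrations, not the actual library API.

```c
#include <cilk/cilk.h>

/* What the programmer writes: the spawned recursive call may
 * run in parallel with the call that follows it. */
long fib(long n) {
    if (n < 2) return n;
    long x = cilk_spawn fib(n - 1);
    long y = fib(n - 2);
    cilk_sync;  /* wait for the spawned call to finish */
    return x + y;
}

/* Roughly what a conventional compiler hands to its optimizer,
 * sketched in C with hypothetical runtime names. Each runtime_*
 * call is an opaque external function: the optimizer must assume
 * it could read or write anything, so it cannot inline, simplify,
 * or move code across these calls.
 *
 *   long fib_lowered(long n) {
 *       runtime_frame_t frame;
 *       long x, y;
 *       if (n < 2) return n;
 *       runtime_enter_frame(&frame);
 *       runtime_spawn(&frame, fib_spawn_helper, n - 1, &x);
 *       y = fib_lowered(n - 2);
 *       runtime_sync(&frame);
 *       runtime_leave_frame(&frame);
 *       return x + y;
 *   }
 */
```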
Postdoc Tao “T.B.” Schardl and undergraduate William “Billy” Moses designed a new intermediate representation for the popular open-source compiler LLVM that preserves a high-level language’s instructions about parallel execution, rather than lowering them into runtime calls before optimization.
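In the published work this extended representation is called Tapir. It adds a small number of new instructions (detach, reattach, and sync) that express fork-join control flow directly in LLVM’s IR, so the existing serial optimizations can still run on parallel code. Below is a simplified, schematic sketch of how the spawn in fib might be encoded; real Tapir instructions carry additional detail, such as a sync-region token, that is omitted here.

```llvm
define i64 @fib(i64 %n) {
entry:
  %x.addr = alloca i64                 ; slot for the spawned result
  %isbase = icmp slt i64 %n, 2
  br i1 %isbase, label %base, label %recurse

recurse:
  ; fork: %spawned may execute in parallel with %continue
  detach label %spawned, label %continue

spawned:
  %n1 = sub i64 %n, 1
  %x1 = call i64 @fib(i64 %n1)
  store i64 %x1, i64* %x.addr
  ; end of the spawned task; control rejoins at %continue
  reattach label %continue

continue:
  %n2 = sub i64 %n, 2
  %y = call i64 @fib(i64 %n2)
  sync                                 ; join: wait for the spawned task
  %x = load i64, i64* %x.addr
  %sum = add i64 %x, %y
  ret i64 %sum

base:
  ret i64 %n
}
```

Because these look to the compiler like ordinary branch-style instructions rather than opaque function calls, passes such as loop-invariant code motion can keep working across them with only modest changes, which is why so little of LLVM had to be modified.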
“T.B. and Billy did it by modifying 6,000 lines of a four-million-line code base,” Leiserson says. “Everybody said it was going to be too hard, that you’d have to change the whole compiler. And these guys basically showed that conventional wisdom to be flat-out wrong.”