The Simplex Solution
The mission to improve the widely used simplex-method algorithm showed instead why it works so well.
While sitting in a courtroom, waiting to find out if he would be chosen for jury duty, Daniel Spielman had a revelation: all the work he and colleague Shanghua Teng had built up over the past three years was a house of cards. “I’ll never forget,” says Spielman, an associate professor of mathematics. “As I was sitting there waiting (fortunately not to get picked), I had this awful experience of realizing everything we’d been doing was wrong. I thought of throwing away my research program.” And in that instant, the cards collapsed.
The pair had been trying to discover a way to improve the simplex method, one of the most widely used algorithms in the world. It enables many of the complex systems we take for granted, such as telecommunications networks and scheduling for fleets of delivery vehicles or airline flights, to work as efficiently and inexpensively as possible. As a new assistant professor, Spielman wanted to establish himself within the world of mathematics and gain tenure at MIT by working on a big challenge, namely, making the algorithm simpler, faster, and better. But after that day at the courthouse, when he realized that applying concepts from an unrelated area to the simplex algorithm was a dead end, he knew he would have to find another big breakthrough to achieve his goals.
A few days later, like a person whose home has been destroyed by a hurricane or tornado, Spielman began to salvage what was left of the ruins of his research. And that was when the really big idea hit: though his work couldn’t improve the simplex method, perhaps it could explain it. The method had been developed in 1947, but after more than 50 years of analysis, no one had been able to figure out why it worked so well. Spielman’s hunch turned out to be right. After another three years of collaborative work and hundreds of mathematical formulae, he and Teng, a professor at Boston University, can now explain why the simplex method works. This may allow optimization experts to solve even more complex organizational problems. Already, the explanation, called “smoothed analysis,” has been cited by the National Science Foundation as a major advance in information technology.
The Path to Discovery
Spielman and Teng first met in the fall of 1990, when Spielman, then an undergraduate at Yale University, visited Carnegie Mellon University to give a speech. Teng, a doctoral candidate there, says he and others at the university admired “this long-haired college junior. He already had two papers of PhD quality. Naturally, he was one of the most prized prospective students that all top universities wanted to attract to their PhD programs.” In 1992, Spielman chose MIT. Teng arrived that same year as an instructor at the Institute. Their student-teacher relationship soon became friendship and then a collaboration that has lasted 11 years.
In 1996, after several years of working together in another area, the duo began seeking to improve the simplex method. “The process of research is a lot like trying to find treasure on a dark island with a small flashlight,” says Teng. “We explored many hopeful leads. Dan always keeps detailed working journals, which systematically map the exploration.”
After three years of such work, Spielman had his courtroom realization. The two mathematicians changed their goal and attacked the new research problem in earnest.
Teng, who by then was an associate professor at the University of Illinois at Urbana-Champaign, moved back to Massachusetts on sabbatical and rented an apartment five minutes away from Spielman’s. After that, both researchers turned their living rooms into workspaces. Teng mounted a large whiteboard on his living-room wall. Spielman kept one stored behind his sofa.
From then on, their work together took place at all hours. “It was one of those things where my wife complained that I saw Shanghua more than I saw her for a few years,” Spielman says. Teng worked full time at Akamai in Cambridge, but he went to Spielman’s apartment almost every night after work and on weekends. “We’d stay up for many hours, probably till two, working,” Spielman notes. Teng adds, “I was like an adopted member of Dan’s family. Even their cat, Chloe, became so used to our presence that she would perch in front of the whiteboard and watch attentively whenever we set it up.” The researchers thanked Chloe in their journal paper’s acknowledgments.
To keep track of their work, Spielman continued his working journals, writing down every thought and equation the whiteboards contained before erasing them. Today, a dozen of these 200-page, notebook-size journals line a bookshelf in his Building 2 office. He says about 60 percent of the information the journals contain is work on smoothed analysis. Meanwhile, Teng used a digital camera to take about 40 photographs of the whiteboards before they were erased.
Finally Answering Why
The result of all this research was the answer to the simple question “Why?” Spielman and Teng finally figured out why the simplex method has worked so well all this time. They did it by developing a new way to analyze the algorithm.
Until their discovery, most mathematicians evaluated algorithms using “worst-case analysis,” in which an algorithm is given the most difficult possible data and then judged on how long it takes to solve the problem. It would be as if someone handed you the worst long-division problem you could imagine and then tested to see whether you could solve it and how long it would take. But this approach just didn’t fit the simplex method: on carefully contrived worst-case inputs the algorithm can take an impractically long time, yet on the problems people actually feed it, it almost always runs quickly.
So Spielman and Teng found a new approach. They introduced some variability into worst-case analysis. Instead of feeding the algorithm exact numbers, they allowed a little imprecision: if an input was 1.31, for example, they replaced it with a random value between 1.29 and 1.33. They discovered that once this slight imprecision is allowed, the simplex algorithm solves the problem efficiently, and that is why it has been so successful in practice.
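The perturbation model at the heart of smoothed analysis is easy to sketch in code. The snippet below is a minimal illustration only, not the authors’ actual analysis: the function names are hypothetical, and Spielman and Teng’s paper analyzes Gaussian rather than uniform perturbations. It simply replaces each exact input with a random value drawn from a small interval around it, as in the 1.31 example above.

```python
import random

def perturb(value, radius=0.02):
    """Smoothed-analysis input model (illustrative): replace an exact
    input with a random value drawn uniformly from a small interval
    centered on it, e.g. 1.31 -> somewhere in [1.29, 1.33]."""
    return random.uniform(value - radius, value + radius)

def smooth_instance(coeffs, radius=0.02):
    """Independently perturb every coefficient of a problem instance."""
    return [perturb(c, radius) for c in coeffs]

random.seed(0)  # deterministic for the demonstration
exact = [1.31, 2.00, -0.50]
smoothed = smooth_instance(exact)

# Every smoothed coefficient stays within `radius` of the exact one.
assert all(abs(s - e) <= 0.02 for s, e in zip(exact, smoothed))
```

Smoothed analysis then asks how long the simplex method takes *on average over these small perturbations* of a worst-case instance, rather than on the single worst input itself.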
The idea sounds simple, but the mathematics that support it are complex. Spielman and Teng’s first journal paper on the subject, now under review by the Association for Computing Machinery’s Journal of the ACM, contains 80 pages of equations. “I don’t know if that many people could walk through the paper,” Spielman says. In fact, writing the paper even confused Spielman and Teng on occasion. “A couple of times we just threw out what was written and wrote afresh, because if it was complicated for us, it was going to be even more complicated for [other] people,” says Spielman.
Spielman and Teng have presented their findings around the world to enthusiastic response. They published a conference paper in 2001, and since then, both have given invited presentations and keynote addresses throughout the United States and in China, Turkey, Italy, Switzerland, and Denmark.
“This smoothed analysis is an important development,” says Michel Goemans, PhD ’90, an MIT professor of applied mathematics. And David Johnson, head of the Algorithms and Optimization Department at AT&T Labs-Research, says, “[Smoothed analysis] provides an extra level of confidence for those who use the simplex method.”
Spielman says he hasn’t pulled out his whiteboard since last summer, when the journal paper was finally finished, “but without it, we never would have managed to write the paper.” Now, Spielman recommends that young researchers buy big whiteboards as a good first step toward making breakthroughs. However, Teng attributes a big part of their success to Spielman’s “dynamic” mind and “great taste” in choosing research problems. “He always has the courage to work on the hardest open problem in the field,” Teng says, and that might be an even better starting point for researchers and curious people everywhere.