MIT Technology Review


Asuman Ozdaglar

You and an accomplice in a major heist have been nabbed by the cops and are being interrogated in separate rooms. If you both keep quiet about the crime, you’ll each get a year in prison on a lesser charge. If you both squeal, you’ll each get five years. But if just one of you squeals, that one will go free while the other gets 10 years. If you don’t know what your accomplice will do, what’s the rational decision?

This conundrum, known as the prisoner’s dilemma, is the most familiar example of a game, in the technical sense employed by game theorists. Game theory is a mathematical way to describe strategic reasoning, and the prisoner’s dilemma illustrates the three basic requirements of the situations it encompasses: the game must involve multiple agents (here, the two accomplices); each must make a decision (squeal or don’t squeal); and every decision must carry a quantifiable payoff (the prison terms) that varies according to the other agents’ decisions.
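The prisoner's dilemma is small enough to check by hand, but it can also be written out as a few lines of code. The sketch below (a toy illustration, with the names `QUIET`, `SQUEAL`, and `best_response` invented for the example) encodes the sentences described above and confirms that squealing is each prisoner's best move no matter what the other does, which is exactly what makes mutual squealing the game's equilibrium:

```python
# Payoffs (years in prison, lower is better) for the prisoner's dilemma
# described above, indexed by (my choice, accomplice's choice).
QUIET, SQUEAL = 0, 1
sentence = {
    (QUIET, QUIET): (1, 1),
    (QUIET, SQUEAL): (10, 0),
    (SQUEAL, QUIET): (0, 10),
    (SQUEAL, SQUEAL): (5, 5),
}

def best_response(their_choice):
    """My sentence-minimizing choice, given the accomplice's choice."""
    return min((QUIET, SQUEAL),
               key=lambda mine: sentence[(mine, their_choice)][0])

# Squealing is a dominant strategy: it is the best response whatever
# the accomplice does, so (SQUEAL, SQUEAL) is the Nash equilibrium.
assert best_response(QUIET) == SQUEAL
assert best_response(SQUEAL) == SQUEAL
```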

Game theory has been a staple of economics research since 1950, when John Nash, who taught at MIT from 1951 to 1959 and is the subject of the movie A Beautiful Mind, published the seminal paper that would win him the Nobel Prize in economics. As game theory has matured, it’s become even more central to that field. In just the last eight years, the Nobel Prize has gone to game theorists three times, for shedding light on, among other things, the logic of nuclear deterrence, the circumstances in which free markets can and cannot maximize public welfare, and the best solutions to “matching problems”—organs and patients, medical residents and hospitals, and the like.

But recently game theory has been drawing attention in engineering and computer science, too. Researchers are using it to analyze thorny problems such as optimizing traffic flow or preventing blackouts.

Asuman Ozdaglar, SM '98, PhD '03, a professor of electrical engineering and computer science, says the rise of the Internet has made this necessary. Historically, the engineers of communication networks had to contend with a wide range of technical questions—such as power constraints and the relative merits of centralization or decentralization. But with the Internet, they suddenly had to deal with human agency, too.

If a Comcast subscriber in Boston and an EarthLink subscriber in San Francisco are exchanging data, their transmissions are traveling over networks maintained by several different providers: Comcast, EarthLink, and others in between. “The whole operation relies on both collaboration and competition of these different parties,” Ozdaglar says. “How do you design protocols that will actually yield the right incentives for people to collaborate?” In other words: why does the Internet work even though it is made up of individual networks? Game theory provides a way of answering that kind of question.

As engineers began bringing game theory to bear on questions within their field, however, they also realized that the tools of their trade were applicable to outstanding questions of game theory. Indeed, of the handful of researchers in the Department of Electrical Engineering and Computer Science (EECS) who work extensively on game theory, all have spent substantial time on questions more typically addressed by the social sciences.

Going once
EECS professor Constantinos Daskalakis is a good example. In 2008, he won the Association for Computing Machinery’s dissertation prize by showing how techniques drawn from theoretical computer science could shed new light on one of the central concepts in game theory: equilibrium.

Equilibrium is the idea that won Nash his Nobel, and the Nash equilibrium is the type of equilibrium most commonly studied. It describes a balance of strategies that no player of a game has a motive to change unilaterally. The most basic example of a Nash equilibrium involves the so-called penalty-kick game. In soccer, a penalty kick gives an offensive player a free shot on goal with only the goalie defending. The ball travels so quickly that the goalie has to guess which way to dive before it’s struck. In the game-theoretical version, if both players pick the same half of the goal, the goalie wins; if they pick different halves, the shooter wins.

The equilibrium state for this game is for both players to pick a direction randomly on any given kick but to ensure that, overall, they choose both directions with equal frequency. In that case, they’ll each win half the time, and neither can improve his or her odds by deviating from that strategy. For instance, if the goalie suddenly started going the same direction every time and the shooter stuck to the original strategy, the goalie’s winning percentage would merely stay the same. However, a shooter who noticed the shift could win every kick by going the opposite direction every time, so the goalie has no incentive to make this change.
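That "no unilateral deviation helps" property can be checked directly. In this minimal sketch (the function name `shooter_win_prob` is invented for the example), the shooter's winning probability is computed for any pair of mixed strategies, and every deviation against a 50/50 opponent turns out to yield exactly the same odds:

```python
def shooter_win_prob(p_shooter_left, p_goalie_left):
    """Probability the shooter wins: the shooter scores when the two
    players pick different halves of the goal."""
    p, q = p_shooter_left, p_goalie_left
    return p * (1 - q) + (1 - p) * q

# At the mixed equilibrium, both players randomize 50/50 ...
assert shooter_win_prob(0.5, 0.5) == 0.5
# ... and neither side gains by deviating unilaterally: against a
# 50/50 opponent, every strategy yields exactly the same 0.5.
for deviation in (0.0, 0.25, 0.75, 1.0):
    assert shooter_win_prob(deviation, 0.5) == 0.5  # shooter deviates
    assert shooter_win_prob(0.5, deviation) == 0.5  # goalie deviates
```

A deviation only becomes profitable once the *other* player has already drifted from 50/50, which is why neither has an incentive to move first.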

But the penalty-kick game is one of the simplest of games. Finding equilibria for even slightly more complex games can be enormously difficult. In his dissertation, Daskalakis proved that for some situations that can be described through game theory, the Nash equilibrium is so hard to calculate that all the computers in the world couldn’t find it in the lifetime of the universe. In those cases, Daskalakis argues, humans probably haven’t found it through trial and error either. That means game theorists need analytic tools other than the Nash equilibrium if they want any hope of describing the real world.

Fortunately, in the same way that computer science has developed a battery of techniques for determining the complexity of computations such as those that produce Nash equilibria, it has also developed a battery of techniques for identifying approximate solutions to otherwise intractable problems. Daskalakis and his students, for example, were able to find one such approximation for an economics problem that had stood open for 30 years.

In 1981, the University of Chicago’s Roger Myerson showed how to structure an auction for a single item so that if all the bidders adopted the bidding strategies in their best interest, the seller would realize the greatest profit. That work helped earn him the 2007 Nobel Prize. It also raised a related question: what is the best way to structure an auction for more than one item? (In economists’ jargon, any market with a single seller and multiple buyers counts as an auction; a Christie’s auction is one, but so are sales at a retail store.) It’s a question with “such large complexity that there’s no succinct description for the auction that gives you the optimal profit,” Daskalakis says. To maximize revenue across multiple items, the seller probably has to sell each item at less than the highest price someone would be willing to pay. But the discount varies according to factors like the mix of items being sold and the populations from which the buyers are drawn.
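For the single-item case Myerson solved, the textbook special case is concrete: with bidders whose values are drawn independently and uniformly from [0, 1], the optimal auction is a second-price auction with a reserve price of 1/2. The Monte Carlo sketch below (a toy illustration under those uniform-value assumptions, not anyone's research code) shows that the reserve raises more revenue than selling unconditionally to the highest bidder:

```python
import random

def second_price_revenue(n_bidders, reserve, trials=200_000, seed=0):
    """Monte Carlo revenue of a second-price auction with a reserve,
    bidders' values drawn i.i.d. uniform on [0, 1]. Truthful bidding
    is optimal in this format, so bids equal values."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = sorted(rng.random() for _ in range(n_bidders))
        if bids[-1] >= reserve:              # the item sells
            total += max(bids[-2], reserve)  # winner pays max(2nd bid, reserve)
    return total / trials

# With uniform values, Myerson's optimal reserve of 1/2 beats a plain
# second-price auction with no reserve (5/12 vs. 1/3 in expectation).
assert second_price_revenue(2, 0.5) > second_price_revenue(2, 0.0)
```

Note the trade-off the article describes in reverse: the reserve sometimes leaves the item unsold, but it extracts more from high-value bidders, and Myerson showed that for a single item this balance can be struck optimally. The multi-item version admits no comparably succinct recipe.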

Computer science offers a fresh perspective on the problem—what Daskalakis calls the approximation perspective. “Maybe you are unable to find the optimal auction,” he says, “but an auction that guarantees 99 percent of the best revenue is a good auction as well.” Daskalakis and his students showed that for any multi-item market, the ideal auction—one that maximized the seller’s revenue—could be approximated by a combination of the results of simpler auctions.

A somewhat different approach to auction problems characterizes the work of engineering professor Silvio Micali. He and EECS professor Shafi Goldwasser are the most recent recipients of the Turing Award, the highest award in computer science. In large part, the award honors their work on so-called interactive proofs, in which a questioner with limited computational resources tries to elicit the result of a calculation from an unreliable interlocutor with unlimited computational resources. One example is a zero-knowledge proof, in which one of the participants establishes possession of a piece of information, like a cryptographic key, without revealing what it is. Zero-knowledge proofs are used to secure transactions between financial institutions, and several startups have been founded to commercialize them.

Micali is pursuing several game-theoretical research projects, but one of them is very close in spirit to zero-knowledge proofs. In many public auctions—as, for instance, when the federal government auctions unused radio spectrum to telecom companies—the auctioneer is bound to disclose all participants’ bids for the sake of transparency. For a company that participates in such an auction and loses, “it’s really the worst of all possible outcomes,” Micali says. “Your competitors now know how much you value this thing, from which they can deduce how large a clientele you serve or what technology you have available.”

So Micali’s group is developing auctions in which participants can publicly disclose enough information about their bids to decide a winner, without revealing the bids themselves. “I believe that eventually this will become mainstream in game theory,” Micali says. “You cannot really have a meaningful science of human behavior while disregarding privacy.”

Who’s in control?
For many situations that can be expressed as games, the Nash equilibrium may be, as Daskalakis showed, nearly impossible to compute. But that doesn’t mean that the players’ behavior is random. Consider a grid of city streets where drivers are making countless decisions at dozens of intersections. Even if the drivers aren’t evaluating every possible consequence of alternative decisions, they’re still adopting some simple strategies—say, if you’ve been sitting still for too long, turn down a side street. According to Munther Dahleh, the associate head of EECS, analyzing such systems brings game theory very close to his own field, control theory, which investigates strategies for controlling dynamic systems such as robots’ limbs and planes’ wings. “We have a different view of these problems,” Dahleh says. “As opposed to imposing the notion of equilibrium and saying ‘What strategies would people play under that equilibrium?’ we look at the controlled dynamical behavior and ask the question ‘What notion of equilibrium emerges?’”

Dahleh has indeed applied the tools of game theory to the analysis of traffic flow, investigating the types of road layouts that can best accommodate the closing of particular routes. His approach also applies to other large-scale dynamic systems, such as the electricity grid.
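The "simple strategies" view of traffic can be made concrete with a toy congestion game (everything here—the two-route model, the function names, the linear cost—is invented for illustration and is not Dahleh's model). Each of 100 drivers picks one of two parallel routes, travel time grows with a route's load, and one driver at a time switches whenever the other route is strictly faster. The selfish dynamics settle at an even split, the equilibrium that "emerges" rather than being imposed:

```python
def route_costs(n_on_a, n_total):
    """Travel times on two parallel routes in a toy congestion model:
    each route's cost equals the number of drivers using it."""
    return n_on_a, n_total - n_on_a

def best_response_dynamics(n_drivers=100, n_on_a=100, steps=1000):
    """One driver at a time switches routes if doing so strictly
    lowers their own travel time; stop when no one wants to move."""
    for _ in range(steps):
        cost_a, cost_b = route_costs(n_on_a, n_drivers)
        if cost_a > cost_b + 1:    # a driver on A gains by moving to B
            n_on_a -= 1
        elif cost_b > cost_a + 1:  # a driver on B gains by moving to A
            n_on_a += 1
        else:
            break                  # no profitable unilateral move left
    return n_on_a

# Even starting with every driver on route A, selfish switching
# converges to a 50/50 split: the Nash equilibrium of this game.
assert best_response_dynamics() == 50
```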

Every day, power producers—operators of nuclear plants, coal-fired plants, wind farms, and the like—offer new schedules of how much electricity they’re willing to produce, at what price, at what times of day. The utilities that deliver electricity also have administrators who decide, on the basis of expected consumer demand, how much power to purchase from each provider. Power production and consumption must match exactly or the consequences are disastrous.

Using the tools of game theory to analyze the incentives of both power providers and consumers, Dahleh and Mardavij Roozbehani, PhD '08, a principal research scientist in the Laboratory for Information and Decision Systems, showed that “smart meters” in the home, which can provide information about spot pricing in the electricity market and let consumers defer energy-intensive household tasks until they are most affordable, could actually cause spikes in demand that would bring down the whole grid.
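The mechanism behind that result is easy to see in miniature. In this sketch (the prices, household counts, and load figures are all made up for illustration and are not from Dahleh and Roozbehani's analysis), every household independently defers its flexible load to the single cheapest hour—and because they all see the same price signal, their behavior synchronizes into a demand spike:

```python
# Hypothetical hourly spot prices ($/kWh); hour 2 is the cheapest.
prices = [0.30, 0.28, 0.12, 0.25, 0.31, 0.27]
n_households = 10_000
flexible_load_kwh = 2.0  # deferrable load per household (e.g., laundry)

# Without price signals, flexible demand spreads evenly over the day.
even_demand = [n_households * flexible_load_kwh / len(prices)] * len(prices)

# With smart meters, every household defers its flexible load to the
# same cheapest hour, concentrating all of it in one slot.
cheapest = prices.index(min(prices))
smart_demand = [0.0] * len(prices)
smart_demand[cheapest] = n_households * flexible_load_kwh

# The synchronized peak dwarfs the evenly spread one.
assert max(smart_demand) > 5 * max(even_demand)
```

Each household's deferral is individually rational, but the aggregate outcome—a peak the grid was never sized for—is exactly the kind of incentive-driven failure game theory is suited to predict.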

[Figure: power surge graph]



Dahleh has also collaborated with Ozdaglar and her husband, the MIT economist Daron Acemoglu, to analyze how information propagates through populations. The “game” in this case is one in which people weigh the truth or falsity of information that reaches them, as they strive to maximize the accuracy of their own beliefs.

“These are questions that have been studied in both sociology and economics,” Ozdaglar says. Traditionally, however, these investigations have assumed that any person in a given population can receive information directly from any other. What engineers offer, Ozdaglar argues, are well-honed tools for analyzing the underlying network structure of the population. Most people, for instance, in fact receive most of their information from just a few immediate neighbors in the network—and they assign different probabilities to the accuracy of different neighbors’ claims.
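One standard way to model this kind of networked belief formation—a DeGroot-style learning sketch, offered here as a generic illustration rather than as Ozdaglar and Acemoglu's actual model—has each person repeatedly replace their belief with a trust-weighted average of their few immediate neighbors' beliefs:

```python
def update_beliefs(beliefs, trust, steps=50):
    """DeGroot-style learning: each agent's belief (probability that
    some claim is true) becomes a trust-weighted average of neighbors'
    beliefs. trust[i][j] is how much agent i weights agent j; each row
    sums to 1, and most entries are 0 (few immediate neighbors)."""
    n = len(beliefs)
    for _ in range(steps):
        beliefs = [sum(trust[i][j] * beliefs[j] for j in range(n))
                   for i in range(n)]
    return beliefs

# Three agents in a line, 0 <-> 1 <-> 2; only agent 1 hears both ends.
trust = [[0.50, 0.50, 0.00],
         [0.25, 0.50, 0.25],
         [0.00, 0.50, 0.50]]
final = update_beliefs([1.0, 0.5, 0.0], trust)

# On a connected network, repeated averaging drives the population
# toward a consensus belief.
assert max(final) - min(final) < 1e-6
```

The network structure does the real work here: who listens to whom, and with what weight, determines both whether a consensus forms and what it ends up being.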

“In the past, I think that social science and economics were dealing with problems differently than engineers,” Dahleh says. “Now we’re all talking about social networks—decisions in social networks, dynamics on networks—so I think the two fields are converging.”

Credits: Sarah A. King