MIT News magazine

The Great Balloon Race

MIT team wins DARPA social-networking challenge
February 23, 2010

On Tuesday, December 1, 2009, members of the MIT Media Lab’s Human Dynamics Lab received an e-mail with a lucrative proposition. The U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) was holding a competition that weekend: on Saturday, 10 large red weather balloons would be raised at undisclosed locations across the United States. The first team to determine their correct latitude and longitude using social media–such as online social networks–would win $40,000.

UP AND AWAY Teams had to find 10 weather balloons like this one to win DARPA’s challenge.

On Wednesday, members of the lab began discussing the problem; by Thursday evening, they’d put up a website. On Saturday morning the balloons went up, and by the end of the day the MIT team–which consisted of postdocs Riley Crane and Manuel Cebrian and grad students Galen Pickard ‘05, MEng ‘06, Anmol Madan, SM ‘05, and Wei Pan–had won.

More than 4,000 teams entered the competition; some had been working for more than a month. But the Human Dynamics Lab has a particular expertise in using digital media to gain perspective on and even alter the behavior of large groups of people.

The crux of the MIT team’s approach was the incentive structure it designed–a way of splitting up the prize money among people who helped find a balloon. Whoever provided a balloon’s correct coördinates got $2,000, but whoever invited that person to join the network got $1,000, whoever invited that person got $500, and so on. No matter how long the chain got, the total payment per balloon would never quite reach $4,000; whatever was left over went to charity.
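The payments along an invitation chain form a geometric series, which is why the per-balloon total can never reach $4,000. A minimal sketch of that arithmetic (the halving-at-every-level rule beyond the first three payments is an assumption based on the "and so on" above, and the function and parameter names are illustrative):

```python
def payouts(chain_length, finder_reward=2000.0, budget=4000.0):
    """Payments along an invitation chain, from the balloon finder up to the
    first inviter, plus whatever is left over for charity."""
    # Each person up the chain receives half of the person below them.
    payments = [finder_reward / (2 ** depth) for depth in range(chain_length)]
    charity = budget - sum(payments)
    return payments, charity

for n in (1, 2, 4, 8):
    payments, charity = payouts(n)
    print(f"chain of {n}: payments={payments}, to charity=${charity:.2f}")
    # 2000 + 1000 + 500 + ... never quite reaches $4,000, so the payout per
    # balloon always stays within the budget no matter how long the chain gets.
```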

Pickard explains that the chain’s “long tail” gave people an incentive to spread the word about the MIT team’s offer. “If I tell somebody, and they tell at least two people, mathematically I do better than if I hadn’t told them,” Pickard says. He explains that if the payment scheme rewarded, say, only the first two people in the chain, a contest participant would want to tell as many other people as possible–but try to prevent them from telling anyone else.
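To make that intuition concrete, here is a toy expected-value calculation. The probabilistic model, the find probability p, and the uniform branching factor are illustrative assumptions of this sketch, not part of the team's published analysis: if everyone below you recruits at least two more people, each additional level of the cascade contributes at least as much to your expected winnings as the level above it, so spreading the word never works against you.

```python
def expected_subtree_payout(branching, depth, p=0.01, finder_reward=2000.0):
    """Expected dollars earned from finds made below me in the invitation tree,
    assuming each person independently locates a balloon with probability p
    and that winning reports don't collide (toy assumptions)."""
    total = 0.0
    for d in range(1, depth + 1):
        people_at_depth = branching ** d           # people d invitations below me
        my_share_per_find = finder_reward / 2 ** d # my cut halves at every level
        total += people_at_depth * my_share_per_find * p
    return total

for b in (1, 2, 3):
    print(f"branching {b}:",
          [round(expected_subtree_payout(b, depth), 2) for depth in (1, 2, 3, 4)])
# With a branching factor of 2 or more, every extra level of the cascade adds
# at least as much expected payout as the previous one.
```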

Alex “Sandy” Pentland, PhD ‘82, who heads the Human Dynamics Lab, says the MIT team used what he describes as “broadcast” media–posts on highly trafficked websites like slashdot.org–to draw attention to its incentive scheme. The news then diffused through a variety of social media, but claiming a share of the prize money required registering on the MIT team’s website, which he calls a “concentrating mechanism.” “This is one of the first examples of combining these different types of media,” he says.

Remarkably, the third-place team consisted of two 2008 MIT grads, Christian Rodriguez and Tara Chang. They realized that without the sponsorship of a large and recognizable institution, they were hampered by lack of visibility. So in addition to texting and analyzing Twitter posts about balloon locations, they bought ads through Google’s AdWords network, which would direct anyone looking for information about the competition to their website. They also relied heavily on exchanging information with other teams by phone. So while the competition was intended as a test of new media, a low-profile team sneaked onto the leader board using some old networking principles: advertising and telephone calls.
