Wisdom of the Crowd Accurately Predicts Supreme Court Decisions
Crowds can sometimes be wiser than the smartest individuals they contain. Now researchers have carried out the largest study of crowdsourcing in predicting SCOTUS decisions.
FantasySCOTUS is an online fantasy league in which contestants compete by predicting decisions made by the U.S. Supreme Court. Players are ranked as in any fantasy league, and the best performers can win prizes such as a “Golden Gavel” and even $10,000 in cash.
Since 2011, some 7,000 players have made over 600,000 predictions about the outcome of over 400 Supreme Court decisions. These people do not need any special qualification and entry is free, although prizes are limited to U.S. citizens. Players can come and go as they please, taking part in some predictions but not others.
All that makes this an interesting group. Social scientists have long been interested in the wisdom of crowds—the phenomenon in which large numbers of individuals, seemingly acting independently, can together make surprisingly accurate decisions, sometimes even better than the smartest among them.
Scientists know that in some circumstances the wisdom of crowds is highly accurate but that in others it is no better than random guessing, and sometimes worse. So just how good is the crowd at predicting Supreme Court decisions?
Today, we get an answer thanks to the work of Daniel Martin Katz at the Chicago-Kent College of Law in Illinois and a couple of pals who have crunched the data from FantasySCOTUS to work out how good the wisdom of the crowd really is. And they show that crowdsourcing is pretty good at predicting the outcome over a wide range of conditions.
Their method is straightforward. These guys come up with the following scenario:
Imagine if the following story were real. In October 2011, a million-dollar prize was announced to predict six years of decisions of the Supreme Court of the United States using crowdsourcing data. Thousands of teams from across industry and academia responded to the challenge, each making slightly different but reasonable choices about how to cast their predictions.
Katz and co ask how these teams might approach the problem. Some teams might follow the wisdom of the crowd as a whole; others might follow only a subset of influential individuals; still others might weight each player's vote according to his or her past accuracy; and so on. These models might also evolve over time as the teams learn what kind of approach works best.
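The accuracy-weighted approach can be sketched in a few lines. This is a minimal illustration, not the paper's actual scoring scheme; the player names, weights, and the two-way affirm/reverse framing are all hypothetical.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Aggregate per-player picks ('affirm' or 'reverse') into one crowd call,
    counting each player's vote in proportion to a supplied weight."""
    tally = defaultdict(float)
    for player, pick in predictions.items():
        # Players with no weight on record count once, as in a plain majority vote.
        tally[pick] += weights.get(player, 1.0)
    return max(tally, key=tally.get)

# Hypothetical ballot for one case, weighted by each player's past accuracy.
picks = {"alice": "reverse", "bob": "affirm", "carol": "reverse"}
accuracy = {"alice": 0.8, "bob": 0.9, "carol": 0.6}

print(weighted_vote(picks, accuracy))  # reverse (0.8 + 0.6 outweighs 0.9)
```

Passing an empty weights dictionary reduces this to an unweighted majority vote, which is the simplest "follow the whole crowd" strategy.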
The question that Katz and co ask is how good these models can be. To find out, they create a wide range of models—some 250,000 of them—and test them on the data from FantasySCOTUS. “We exhaustively simulate models over a wide range of reasonable choices, and then analyze the performance of these models in aggregate,” the researchers say.
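Exhaustively simulating models amounts to enumerating every combination of a few design choices. The knobs below are invented for illustration; the paper's actual parameter grid is what produces its roughly 250,000 configurations.

```python
from itertools import product

# Hypothetical design choices a team might sweep over.
top_k_players = [10, 50, 100, None]           # None = use the whole crowd
weighting = ["equal", "accuracy", "squared"]  # how strongly past accuracy counts
lookback = [20, 50, 100]                      # how many past cases to score players on

# Every combination of choices defines one candidate model.
models = list(product(top_k_players, weighting, lookback))
print(len(models))  # 36 configurations in this toy grid
```

Each tuple would then be evaluated against the historical FantasySCOTUS predictions, and the performance of all configurations analyzed in aggregate, as the researchers describe.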
They then compare the performance of the models with a “null model.” In this case, the null model is a rule of thumb that lawyers use to guess the outcome of Supreme Court decisions. This rule is to assume that SCOTUS will reverse the decision of any lower court. And indeed, this is what happens around 60 percent of the time.
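The null model is simple enough to write down directly: always predict "reverse" and see how often that is right. The toy docket below is made up to mirror the roughly 60 percent reversal rate mentioned above.

```python
def null_model_accuracy(outcomes):
    """Accuracy of the lawyer's rule of thumb: always predict that the
    Supreme Court reverses the lower court."""
    correct = sum(1 for outcome in outcomes if outcome == "reverse")
    return correct / len(outcomes)

# Hypothetical docket in which 6 of 10 cases were in fact reversed.
docket = ["reverse"] * 6 + ["affirm"] * 4
print(null_model_accuracy(docket))  # 0.6
```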
The analysis shows that crowdsourced models are even better, though. Katz and co say that many models consistently outperform the null model, the best predicting SCOTUS decisions with 80 percent accuracy. “We provide strong support for the claim that crowdsourcing can accurately and robustly predict the decisions of the Supreme Court of the United States,” they say.
The team plans to use the same method to predict the outcome of other judicial and legislative processes, and even elections.
However, the work does not provide insight into the weaknesses of crowdsourced wisdom. For example, the wisdom of crowds often breaks down when the opinions of individuals become correlated—when they influence each other too strongly. This can result in disastrous situations, as when large groups hold irrational or wrong opinions under the sway of groupthink.
Of course, the interesting territory lies between these extremes of wisdom and folly, which is why understanding how and when wisdom breaks down could be a fruitful avenue for future work.
Ref: arxiv.org/abs/1712.03846 : Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions