Artificial Societies and Virtual Violence

How modeling societies in silico can help us understand human inequality, revolution, and genocide.

Paul Krugman, the distinguished Princeton University economics professor and New York Times columnist, once explained the jejune motives for his choice of career. “In my early teens my secret fantasy was to become a psychohistorian,” he wrote, referring to the central gimmick, “psychohistory,” of Isaac Asimov’s Foundation trilogy. Krugman continued, “Someday there will exist a unified social science of the kind that Asimov imagined, but for the time being economics is as close to psychohistory as you can get.”

That’s risible, given the gulf between Asimov’s fantasy of a predictive calculus of human affairs and the actuality of mainstream economics–indeed, of any of the social sciences–as practiced during most of the last century. Recent decades, though, have seen new approaches. One of the most promising was described by Joshua Epstein, a senior fellow at the Brookings Institution, in Growing Artificial Societies: Social Science from the Bottom Up, a book he published in 1996 in collaboration with Robert Axtell. “Perhaps one day people will interpret the question, ‘Can you explain it?’ as asking ‘Can you grow it?’” Epstein suggested. “Artificial society modeling allows us to ‘grow’ social structures in silico demonstrating that certain sets of microspecifications are sufficient to generate the macrophenomena of interest.”

What does this mean? And why should we care? Epstein’s claim was twofold. First, he pointed out that while almost all the patterns that interest social scientists are emergent ones–that is, complex developments arising from a lot of relatively simple interactions–disciplines such as mainstream economics conceive of societies as tending toward some notional equilibrium. Standard explanations assume, too, that societies consist of highly rational agents who, possessing full knowledge, act always in their own best interest. When it comes to how real populations of diverse actors with limited rationality actually evolve their patterns of, say, wealth distribution, Epstein noted, the stock explanations have almost nothing to say. (See “A Letter to the Editor from Joshua Epstein.”)

Epstein was hardly alone in making those criticisms. But he proposed, secondly, that computer models in themselves could effectively describe societies. In the early 1990s, Epstein and Axtell had created a simulation called Sugarscape, a square grid representing a two-dimensional landscape inhabited by autonomous subprograms–agents–that were driven from square to square by crude artificial metabolisms that demanded a resource, designated “sugar.” When hundreds of these agents were programmed so that their ranges of vision and metabolic rates varied, even in simple ways, surprising patterns emerged.

Indeed, Epstein and Axtell would learn that with their models, “the trick [was] to get a lot out, while putting in as little as possible,” as Epstein writes in his latest book, Generative Social Science: Studies in Agent-Based Computational Modeling. In the early 1990s, the two men set up two regions of their Sugarscape grid to be rich in the sugar resource, so that agents quickly gravitated toward them. A few agents with superior vision and low metabolic rates accumulated large sugar stocks. Other agents, with weaker vision and high metabolic rates, subsisted or died in zones where sugar was in short supply. Essentially, Epstein and Axtell found, Sugarscape functioned as a model of a hunter-gatherer society, reproducing a common feature of human societies: skewed wealth distribution. Granted, the notion that crude automata moving around a computer grid suggest that wealth inequality is an innate feature of human existence will be disliked not only by Marxists but by most of the rest of us, given how varied we know our individual experiences to be. Nevertheless, nature is full of peculiarly consistent statistical relationships, which recur across dissimilar realms and which statisticians call “power laws.”
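The published Sugarscape rules are richer than this (seasonal growback rates, one agent per site, replacement rules), but the core mechanism of agents with unequal vision and metabolism harvesting a renewable resource can be sketched in a few lines. The grid size, parameter ranges, and regrowth rule below are illustrative assumptions, not the book’s exact values:

```python
import random

random.seed(0)
GRID = 20
# Each cell starts with 0-4 units of sugar and regrows toward a cap of 4.
sugar = [[random.randint(0, 4) for _ in range(GRID)] for _ in range(GRID)]

class Agent:
    def __init__(self):
        self.x, self.y = random.randrange(GRID), random.randrange(GRID)
        self.vision = random.randint(1, 6)        # how far the agent sees
        self.metabolism = random.randint(1, 4)    # sugar burned per step
        self.wealth = random.randint(5, 25)       # initial sugar stock

    def step(self):
        # Look north/south/east/west up to `vision` cells and move to the
        # visible cell with the most sugar (a simplified movement rule).
        best = (self.x, self.y)
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            for d in range(1, self.vision + 1):
                nx, ny = (self.x + dx * d) % GRID, (self.y + dy * d) % GRID
                if sugar[nx][ny] > sugar[best[0]][best[1]]:
                    best = (nx, ny)
        self.x, self.y = best
        self.wealth += sugar[self.x][self.y] - self.metabolism
        sugar[self.x][self.y] = 0                 # harvest the cell bare

def regrow():
    # Sugar grows back one unit per step, up to the cell's capacity.
    for row in range(GRID):
        for col in range(GRID):
            sugar[row][col] = min(sugar[row][col] + 1, 4)

agents = [Agent() for _ in range(100)]
for _ in range(50):
    regrow()
    for a in agents:
        a.step()
    agents = [a for a in agents if a.wealth > 0]  # starved agents die
```

Run repeatedly, even this toy version tends to pile sugar up among the sharp-eyed, low-metabolism agents while weaker ones die off, echoing the skewed distribution described above.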

The most common power law is the Pareto distribution, named for the 19th-century Italian economist Vilfredo Pareto. In the late 1890s, Pareto observed that in the economies he studied, roughly 20 percent of the people held 80 percent of the wealth. But the Pareto distribution, also known as the “80-20 rule,” holds in such diverse human contexts as size of settlements (a few big cities, many smaller towns) and frequency of words in text (a few words used often, most words infrequently), as well as for natural phenomena like the size of sand particles and of meteorites. That the behavior of Sugarscape’s automata yielded power-law-type distributions indicated to Epstein and Axtell that they were on to something.

In the early 1990s, Epstein gave a presentation at the Santa Fe Institute in New Mexico, a center for the study of complex adaptive systems across natural, human, and artificial contexts. “I showed one of our artificial histories set in the standard Sugarscape landscape with two sugar peaks, a sugar lowland in the middle, and sugar badlands on the sides–effectively, a simple valley representation,” Epstein told me. “I asked the audience if it reminded anybody of anything. George Gumerman’s hand shot up, and he said, ‘It reminds me of the Anasazi.’”

George Gumerman is an anthropologist who for decades has been a leading expert on the Anasazi, ancestors of the present-day Pueblo peoples who from roughly 1800 B.C.E. to 1300 C.E. inhabited Long House Valley in northeast Arizona. Epstein and Axtell decided to use their agent-based modeling to create a virtual Anasazi civilization and see how it matched up against the extensive database of settlement patterns and the like assembled by Gumerman and his colleagues. Epstein recalled, “We started over, building the artificial terrain from scratch, with great exactitude.” Elements like climate patterns, maize yields, fluctuations of the water table, and multitudes of other factors went into the model. “The big trick was, Could we come up with good rules for our artificial Anasazi, put them where the real ones were in 900 A.D., and let them run till they grew the true history?” Epstein remembered one session in which his team’s artificial Anasazi established a settlement exactly where Long House, the real Anasazi settlement, had been. “We just sat screaming into the air with gratification. The entire business has come an awfully long way since then. Now there’s many people doing this kind of work.”

Indeed. The website of the Journal of Artificial Societies and Social Simulation, for instance, lists papers with titles such as “Cascades of Failure and Extinction in Evolving Complex Systems.” Epstein’s new book collects his own papers since 1996; an accompanying CD lets readers watch runs of the models described in the text and explore the models on their own. In the projects described in the book, Epstein and his collaborators modeled, in addition to the Anasazi, the emergence of various phenomena: patterns in the timing of retirement; social classes; thoughtless conformity to social norms; patterns of smallpox infection after a bioterrorist incident; and successful, adaptive organization.

The models are fascinating. In both of the variants described in “Generating Patterns of Spontaneous Civil Violence” (see figures 1 and 2), there are regular agents as well as agents called cops, representing a central political authority. The left screen depicts regular agents’ overt behavior (blue if quiescent, red if active) and the right the underlying “emotionscape,” where agents are colored according to their level of political grievance (the darker the red, the higher the grievance). Grievance has two components: legitimacy (L) of the state, as perceived by the agents, and hardship (H), which is physical or economic privation and varies between agents. Furthermore, agents can deceive: on the left screen, aggrieved agents can turn blue (appearing nonrebellious) when cops (always black) are near, then turn red (actively rebellious) when cops move away. Epstein also assigned varying levels of risk aversion (R) to the agents: some are more inclined to rebel than others. Agents assess their likelihood of arrest by cops before joining a rebellion, and their assessments depend on their vision (v) of what’s around them–that is, how many grid positions (north, south, east, and west) they can see. Finally, agents arrested by cops receive jail sentences (J). “Arrested agents go to jail for a random duration and emerge as aggrieved as they went in,” Epstein told me. “I always joke that those are the only two realistic assumptions in the whole model.”
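In the published version of this model, an agent turns active when its grievance, G = H(1 − L), exceeds its perceived net risk by more than a small threshold, with arrest probability estimated from the local ratio of cops to already-active agents. A minimal sketch of that decision rule follows; the function names are mine, and while the constant k = 2.3 and threshold 0.1 follow Epstein’s published parameterization, readers should treat the details as an approximation of the model on the book’s CD:

```python
import math

def arrest_probability(cops_in_vision, actives_in_vision, k=2.3):
    # Estimated arrest likelihood rises with the local cop-to-active
    # ratio; k = 2.3 makes the probability about 0.9 when the ratio is 1.
    # The agent counts itself among the actives, hence the floor of 1.
    return 1 - math.exp(-k * (cops_in_vision / max(actives_in_vision, 1)))

def decide(hardship, legitimacy, risk_aversion,
           cops_in_vision, actives_in_vision, threshold=0.1):
    # An agent rebels when grievance minus net risk exceeds the threshold.
    grievance = hardship * (1 - legitimacy)        # G = H(1 - L)
    net_risk = risk_aversion * arrest_probability(cops_in_vision,
                                                  actives_in_vision)
    return "active" if grievance - net_risk > threshold else "quiet"
```

With no cops in view the arrest probability is zero, so even a risk-averse agent with high grievance rebels; pack the neighborhood with cops and the same agent stays quiet, which is the deception dynamic described above.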

Though this model may seem overly simple, it generates realistic enough patterns once the human operator sets the parameters of L and J, the agents’ and cops’ vision, and their initial densities and then lets both groups move around and interact. In variant one, “Generalized Rebellion against Central Authority” (see figure 1), high concentrations of activist, aggrieved agents can arise in zones with low cop densities. When that happens, even mildly aggrieved agents find it rational to risk rebellion. It’s for just this reason that freedom of assembly is generally the first thing curtailed under repressive regimes. Furthermore, the model displays the hallmark of a complex system: punctuated equilibrium, with long periods of relative stability broken by rebellious outbursts. In some runs, the right-hand “emotionscape” screen may be bright red with the agents’ grievance, while the left screen is entirely blue because of their public quiescence. Which would be more likely to trigger revolution: a large absolute reduction of L (legitimacy) in small increments or a smaller reduction carried out in one large step? The latter, it turns out. In the case of the large but incremental reduction, cops can pick off activist agents one by one and jail them. Conversely, a sudden, sharp reduction in legitimacy spurs multiple aggrieved agents into active rebellion at once. As Epstein noted, “Once there are 50 people rebelling, it’s a lot less risky to be the 51st.”

Variant two, “Inter-Group Violence,” is more interesting. Now agents are divided into two ethnic groups, blue and green. “Legitimacy becomes each group’s appraisal of the other group’s right to exist,” Epstein explained. In this context, an agent’s going activist means that it kills a member of the opposing group. The cops are peacekeepers, and if the model is run without them and L among all agents is reduced by as little as 20 percent, ethnic cleansing quickly begins. When cops are introduced, safe havens emerge. Nonetheless, interethnic hostility continues. Ultimately, as figure 2 shows and Epstein told me, “when you drop legitimacy in this variant, it always ends with one side wiping the other out.” Cop density can be set at any level. “At low cop densities, you get rapid genocide. At high cop densities, you likewise can sometimes get rapid genocide, but also a highly variable outcome. On average, more cops makes it take longer.” Is the delay long enough to justify the expense of extra policing? The outcome is highly uncertain, Epstein says; a surge of cops alone would not guarantee a good result.

In the end, Epstein stressed that his models are aimed chiefly at explanation. “To explain something doesn’t mean that you can predict it,” he said. He pointed out that though we can explain lightning and earthquakes, we can’t forecast either. If we’re hoping, like Asimov, to predict the future, Epstein’s models will disappoint. Indeed, because his models give widely divergent results even when their agents are programmed with very simple rules, they suggest that precise prediction of social futures may never be possible. Still, Epstein’s artificial societies do more to make plain the hidden mechanisms underlying social shifts–and their unexpected consequences–than any tool that social scientists have hitherto possessed. In the future, they and others like them could suggest how policymakers can engineer the sorts of small, cheap interventions that have large, beneficial results.

Mark Williams is a Technology Review contributing editor.

Generative Social Science: Studies in Agent-Based Computational Modeling
By Joshua M. Epstein
Princeton Studies in Complexity series
Princeton University Press, 2006, $49.50
