
A More Human Virtual Crowd

A new vision-based approach could help prevent crowd disasters.
April 20, 2011

Modeling crowd behavior can help engineers design buildings and other public spaces so as to prevent deaths and injuries during emergencies. But it is hard to design virtual crowds that realistically mimic real ones.

Watch out: European researchers created a model of pedestrian behavior based on vision. This image shows a comparison with real-life pedestrian data from a busy French sidewalk. Each pedestrian is tagged with a red or blue circle depending on the direction in which he or she is going.

European researchers have now shown that a simple model based on one cognitive factor—vision—can predict pedestrian behavior in various types of crowds. It represents significant progress in a field that has been trying to move away from purely physics-based models.

“There’s no clear way to describe the cognitive processes of each individual, but with this vision-based approach, it’s actually very simple,” says Dirk Helbing, of the Swiss Federal Institute of Technology in Zurich, who carried out the work with Mehdi Moussaïd and Guy Theraulaz, of Université Paul Sabatier in Toulouse, France.

The study, which appears in this week’s issue of Proceedings of the National Academy of Sciences, was inspired by previous research that used eye-tracking data to determine how people predict the trajectory of an airborne ball in order to catch it. Numerous other studies have suggested that walking, like catching a ball, is primarily governed by vision. So the researchers hypothesized that using visual factors, mainly line of sight and visibility, would allow them to better model crowd behavior.

The researchers gave virtual crowd members the ability to “see” their surroundings and navigate accordingly. They found that their vision-based model predicted pedestrian behavior surprisingly well for both small and large crowds, as long as the physical influence of the crowd as a whole was also considered. They suggest that the model could help avert crowd disasters such as the Love Parade disaster that killed 21 people in Germany last summer, by providing designers with new information about how pedestrians will attempt to move quickly through a specific space.

The model primarily captures how vision shapes a pedestrian’s direction and speed, two factors that often pull against each other when a person is navigating foot traffic. The researchers used the model to predict pedestrian trajectories and then compared those predictions with data from real-life pedestrian scenarios. The trajectories matched up almost exactly.
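As a rough illustration of that idea, the sketch below shows one way a vision-based steering rule of this kind could work: a simulated walker scans the directions it can see, estimates how far it could travel along each one before hitting something, picks the direction that would leave it closest to its goal, and lets the free space ahead cap its speed. The field of view, time horizon, obstacle geometry, and constants here are illustrative assumptions, not the paper’s exact formulation.

```python
# Minimal sketch of a vision-based steering heuristic (assumed parameters).
import math

FIELD_OF_VIEW = math.radians(75)   # half-angle of the vision cone (assumed)
HORIZON = 10.0                     # farthest distance the walker "sees", in meters (assumed)
TAU = 0.5                          # relaxation time used to cap speed (assumed)

def distance_to_collision(pos, heading, obstacles, radius=0.25):
    """Distance the walker can move along `heading` before touching a
    circular obstacle (ox, oy, orad), capped at HORIZON."""
    dx, dy = math.cos(heading), math.sin(heading)
    closest = HORIZON
    for (ox, oy, orad) in obstacles:
        # Project the obstacle centre onto the ray, then check lateral clearance.
        t = (ox - pos[0]) * dx + (oy - pos[1]) * dy
        if t <= 0:
            continue
        lateral = math.hypot(pos[0] + t * dx - ox, pos[1] + t * dy - oy)
        clearance = orad + radius
        if lateral < clearance:
            t_hit = t - math.sqrt(clearance ** 2 - lateral ** 2)
            if 0.0 < t_hit < closest:
                closest = t_hit
    return closest

def choose_motion(pos, goal, obstacles, preferred_speed=1.3, n_dirs=61):
    """Return (direction, speed): the direction that leaves the walker closest
    to its goal after going as far as vision allows, and a speed limited by
    the free space in that direction."""
    goal_bearing = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    best_dir, best_remaining, best_free = goal_bearing, float("inf"), 0.0
    for i in range(n_dirs):
        a = goal_bearing - FIELD_OF_VIEW + 2 * FIELD_OF_VIEW * i / (n_dirs - 1)
        free = distance_to_collision(pos, a, obstacles)
        # Where the walker would end up if it walked as far as it can see.
        end = (pos[0] + free * math.cos(a), pos[1] + free * math.sin(a))
        remaining = math.hypot(goal[0] - end[0], goal[1] - end[1])
        if remaining < best_remaining:
            best_dir, best_remaining, best_free = a, remaining, free
    speed = min(preferred_speed, best_free / TAU)  # slow down when space is tight
    return best_dir, speed

# Example: one walker heading for a doorway with a person standing in the way.
direction, speed = choose_motion(pos=(0.0, 0.0), goal=(10.0, 0.0),
                                 obstacles=[(4.0, 0.1, 0.3)])
print(f"heading {math.degrees(direction):.1f} deg at {speed:.2f} m/s")
```

In a rule like this, direction and speed trade off naturally: steering closer to the goal often means accepting less free space ahead, and therefore a lower speed.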

To model crowd disasters, though, they had to consider involuntary as well as voluntary behaviors. What the pedestrian can see remains important, but sometimes the push and pull of the crowd can be even more so. “When the crowd becomes high-density, the simple model isn’t enough,” says Theraulaz. “You have to take into account the rules of physical contact.”

Adding a physical-force component to the vision-based model allowed the study authors to predict pedestrian behavior in different types of overcrowding situations, such as a bottleneck around a blocked exit or a pileup that forms behind a fallen pedestrian.
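The sketch below illustrates the kind of contact term that enters at high density: once bodies press into each other, a repulsive force proportional to the compression pushes them apart, regardless of what either person can see. The stiffness constant and circular-body geometry are illustrative assumptions, not the study’s exact formulation.

```python
# Minimal sketch of a physical contact force between two pedestrians (assumed constants).
import math

BODY_STIFFNESS = 5000.0  # N/m, assumed elastic response of compressed bodies

def contact_force(p_i, p_j, r_i=0.25, r_j=0.25):
    """Repulsive force on pedestrian i from physical contact with pedestrian j.
    Zero unless the two bodies actually overlap."""
    dx, dy = p_i[0] - p_j[0], p_i[1] - p_j[1]
    dist = math.hypot(dx, dy)
    overlap = r_i + r_j - dist
    if overlap <= 0 or dist == 0:
        return (0.0, 0.0)
    nx, ny = dx / dist, dy / dist  # unit vector pointing away from j
    return (BODY_STIFFNESS * overlap * nx, BODY_STIFFNESS * overlap * ny)

# Two people squeezed 5 cm into each other: the contact force dominates
# whatever direction vision alone would have chosen.
print(contact_force((0.0, 0.0), (0.45, 0.0)))
```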

When the study authors applied their modified model to a real-world bottleneck disaster, they were able to predict the location of the highest-risk areas and map out how pedestrian collisions would spread once the situation became critical. “This is the most dangerous type of case,” says Helbing. “You can do video analysis afterward, but even then it’s hard to see exactly what’s going on, because people are hardly moving.”

One of the biggest advantages of the vision-based model is its versatility, says Michael Batty, an urban planning researcher at University College London, who studies crowd modeling. “It’s relevant to a whole range of pedestrian situations, and that’s what makes it more testable,” he says. The study authors suggest that the model could also be used to analyze crowd disasters in low-visibility cases, such as fires, and could help improve the design of crowd-navigating robots.
