
Best of 2013: Edit Wars Reveal the 10 Most Controversial Topics on Wikipedia

In July, an analysis of the most highly contested articles on Wikipedia revealed the controversies that appear invariant across languages and cultures.

Wikipedia, the encyclopaedia that anybody can edit, is one of the more extraordinary collective efforts of the crowd. Wikipedia’s own estimate is that it has some 77,000 contributors working on more than 22 million articles in 285 languages. The largest edition, the English version, alone offers over 4 million articles.

So it’s not surprising that disputes arise over the wording of these articles. Indeed, the controversy can sometimes reach war-like proportions with one editor changing the wording and another immediately changing it back again.

These so-called edit wars can be used to identify controversial topics, but an interesting question is how controversy varies across languages and cultures. Given its unique position straddling multiple languages and cultures, Wikipedia is perfectly placed to provide some answers.

Today, Taha Yasseri at the University of Oxford in the UK and a few pals have ranked the most controversial topics in 10 different languages according to the intensity of the editing wars they generate.


