Artificial Intelligence Aims to Make Wikipedia Friendlier and Better
The nonprofit behind Wikipedia is turning to machine learning to combat a long-standing decline in the number of editors.
Software trained to know the difference between an honest mistake and intentional vandalism is being rolled out in an effort to make editing Wikipedia less psychologically bruising. It was developed by the Wikimedia Foundation, the nonprofit organization that supports Wikipedia.
One motivation for the project is a significant decline in the number of people considered active contributors to the flagship English-language Wikipedia: it has fallen by 40 percent over the past eight years, to about 30,000. Research indicates that the problem is rooted in Wikipedians’ complex bureaucracy and their often hard-line responses to newcomers’ mistakes, enabled by semi-automated tools that make deleting new changes easy (see “The Decline of Wikipedia”).
Aaron Halfaker, a senior research scientist at the Wikimedia Foundation who helped diagnose that problem, is now leading a project to fight it using algorithms with a sense for human fallibility. His ORES system, short for “Objective Revision Evaluation Service,” can be trained to score the quality of new changes to Wikipedia and judge whether an edit was made in good faith or not.
Halfaker invented ORES in hopes of improving tools that help Wikipedia editors by showing recent edits and making it easy to undo them with a single click. The tools were invented to meet a genuine need for better quality control after Wikipedia became popular, but an unintended consequence is that new editors can find their first contributions wiped out without explanation because they unwittingly broke one of Wikipedia’s many rules.
ORES can allow editing tools to direct people to review the most damaging changes. The software can also help editors treat rookie or innocent mistakes more appropriately, says Halfaker. “I suspect the aggressive behavior of Wikipedians doing quality control is because they’re making judgments really fast and they’re not encouraged to have a human interaction with the person,” he says. “This enables a tool to say, ‘If you’re going to revert this, maybe you should be careful and send the person who made the edit a message.’”
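The article doesn’t specify how tools consume ORES’s output, but the behavior Halfaker describes — routing likely vandalism one way and likely honest mistakes another — could be sketched like this. The score structure, field names, and thresholds below are all assumptions for illustration, not the real ORES interface:

```python
# Hypothetical sketch of how an edit-review tool might act on
# ORES-style scores. Field names and thresholds are invented.

def triage(edit):
    """Decide how a patrolling tool should handle a scored edit.

    `edit` holds two model outputs, each a probability in [0, 1]:
      - "damaging": likelihood the edit harms the article
      - "goodfaith": likelihood the editor meant well
    """
    if edit["damaging"] < 0.2:
        return "accept"                # probably fine; skip review
    if edit["goodfaith"] > 0.5:
        return "review_with_message"   # likely an honest mistake:
                                       # revert gently and explain the rule
    return "revert"                    # likely vandalism

# A clumsy but well-meant edit gets the human-friendly path.
print(triage({"damaging": 0.8, "goodfaith": 0.9}))  # review_with_message
print(triage({"damaging": 0.9, "goodfaith": 0.1}))  # revert
```

The key design point is the second branch: instead of a one-click revert, the tool can prompt the reviewer to send the newcomer a message.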
ORES is up to speed on the English, Portuguese, Turkish, and Farsi versions of Wikipedia so far. To learn to judge the quality of edits and distinguish damaging edits from innocent mistakes, it drew on data generated by Wikipedia editors who used an online tool to label examples of past edits. Some of the Wikipedians who maintain editing tools have already begun experimenting with the system.
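The article doesn’t describe ORES’s features or model, but the basic idea — learning from edits that Wikipedians have hand-labeled — can be illustrated with a toy example. The features, labels, and scoring rule below are invented for this sketch:

```python
# Toy illustration of learning from human-labeled edits. ORES's real
# features and models are not described in the article; everything
# here is invented for illustration.
from collections import Counter

# Each example: (features observed in the edit, label from a human rater)
labeled_edits = [
    ({"adds_profanity", "all_caps"}, "damaging"),
    ({"removes_section"}, "damaging"),
    ({"adds_reference"}, "ok"),
    ({"fixes_typo"}, "ok"),
    ({"all_caps"}, "damaging"),
    ({"adds_reference", "fixes_typo"}, "ok"),
]

def train(examples):
    """Estimate, per feature, the fraction of damaging edits it appears in."""
    seen, damaging = Counter(), Counter()
    for features, label in examples:
        for f in features:
            seen[f] += 1
            if label == "damaging":
                damaging[f] += 1
    return {f: damaging[f] / seen[f] for f in seen}

def score(model, features):
    """Average the per-feature damage rates; 0.5 if nothing is known."""
    rates = [model[f] for f in features if f in model]
    return sum(rates) / len(rates) if rates else 0.5

model = train(labeled_edits)
print(score(model, {"all_caps"}))        # 1.0: always damaging in training
print(score(model, {"adds_reference"}))  # 0.0: never damaging in training
```

A real system would use far richer features and a proper classifier, but the pipeline shape is the same: human labels in, per-edit scores out.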
Earlier efforts to make Wikipedia more welcoming to newcomers have been stymied by the very community that’s supposed to benefit. Wikipedians rose up in 2013 when Wikimedia made a word-processor-style editing interface the default, forcing the foundation to make it opt-in instead. To this day, the default editor uses a complicated markup language called Wikitext.
Halfaker believes his new algorithmic editing assistant will be accepted, because although it’s more sophisticated than previous software unleashed on Wikipedia, it isn’t being forced on users. “In some ways it’s weird to introduce AI and machine learning to a massive social thing, but I don’t see what we’re doing as any different from making other software changes to the site,” he says. “Every change we make affects behavior.”