Adding Trust to Wikipedia, and Beyond
Tracing information back to its source could help prove trustworthiness.
The unofficial motto of the Internet could be “don’t believe everything you read,” but moves are afoot to help users better judge what to be skeptical about and what to trust.
A tool called WikiTrust, which helps users evaluate information on Wikipedia by automatically assigning a reliability color-coding to text, came into the spotlight this week with news that it could be added as an option for general users of Wikipedia. Also, last week the Wikimedia Foundation announced that changes made to pages about living people will soon need to be vetted by an established editor. These moves reflect a broader drive to make online information more accountable. And this week the World Wide Web Consortium published a framework that could help any Web site make verifiable claims about authorship and reliability of content.
WikiTrust, developed by researchers at the University of California, Santa Cruz, color-codes the information on a Wikipedia page using algorithms that evaluate the reliability of the author and the information itself. The algorithms do this by examining how well received the author’s contributions have been within the community: they look at how quickly a user’s edits are revised or reverted, and they weigh the reputation of the people who interact with the author. If a disreputable editor changes something, the original author won’t necessarily lose many reputation points. A white background, for example, means that a piece of text was written by a reliable author and has been viewed by many editors who did not change it. Shades of orange signify doubt, dubious authorship, or ongoing controversy.
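The mechanics described above can be sketched in a few lines of Python. This is a minimal illustration of the general idea, not the actual WikiTrust implementation; all class names, thresholds, and weights here are invented for the example. The key property from the article is preserved: a reputation penalty is scaled by the reverter’s own reputation, so a disreputable editor’s revert barely dents the original author’s score.

```python
class ReputationTracker:
    """Toy content-driven reputation scheme (illustrative only)."""

    def __init__(self):
        self.reputation = {}  # author -> score in [0.0, 1.0]

    def score(self, author):
        return self.reputation.get(author, 0.1)  # newcomers start low

    def edit_survived(self, author, reviewers):
        # Each reviewer who viewed the text and left it unchanged adds
        # a small, reputation-weighted vote of confidence.
        gain = sum(0.05 * self.score(r) for r in reviewers)
        self.reputation[author] = min(1.0, self.score(author) + gain)

    def edit_reverted(self, author, reverter):
        # The penalty is proportional to the reverter's reputation:
        # a low-reputation reverter costs the author very little.
        loss = 0.2 * self.score(reverter)
        self.reputation[author] = max(0.0, self.score(author) - loss)


def text_shade(author_score, unchanged_views):
    # White background = reliable author whose text many editors left
    # unchanged; orange = doubt. Thresholds are arbitrary here.
    trust = author_score * min(1.0, unchanged_views / 10)
    return "white" if trust > 0.8 else "orange"
```

In this sketch, text written by a high-reputation author and left untouched through many views renders white, while fresh or contested text stays orange until it accumulates reputation-weighted reviews.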
Luca de Alfaro, an associate professor of computer science at UC Santa Cruz who helped develop WikiTrust, says that most Web users crave more accountability. “Fundamentally, we want to know who did what,” he says. According to de Alfaro, WikiTrust makes it harder to change information on a page without anyone noticing, and it makes it easy to see what’s happening on a page and analyze it.
The researchers behind WikiTrust are working on a version that includes a full analysis of all the edits made to the English-language version of Wikipedia since its inception. A demo of the full version will be released within the next couple of months, de Alfaro says, though it’s still uncertain whether it will be hosted on the university’s own servers or by the Wikimedia Foundation. The principles behind WikiTrust’s algorithms could be applied to any site with collaboratively created content, de Alfaro adds.
Creating a common language for building trust online is the goal of the Protocol for Web Description Resources (POWDER), released this week by the World Wide Web Consortium.
Powder takes a simpler approach than WikiTrust. By using Powder’s specifications, a Web site can make claims about where information came from and how it can be used. For example, a site could say that a page contains medical information provided by specific experts. It could also assure users that certain sites will work on mobile devices, or that content is offered through a Creative Commons license.
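A Powder description resource is an XML document that bundles who is making a claim with the set of pages it covers and the claim itself. The fragment below is a sketch following the general shape of the W3C specification; the host name, vocabulary namespace, and descriptor terms are invented for illustration.

```xml
<?xml version="1.0"?>
<powder xmlns="http://www.w3.org/2007/05/powder#"
        xmlns:ex="http://medical.example.org/vocab#">
  <attribution>
    <!-- Who makes the claim, and when it was issued -->
    <issuedby src="http://medical.example.org/about.rdf#us"/>
    <issued>2009-09-04</issued>
  </attribution>
  <dr>
    <iriset>
      <!-- The claim applies to every page on this host -->
      <includehosts>medical.example.org</includehosts>
    </iriset>
    <descriptorset>
      <!-- The claim itself, in a site-defined vocabulary -->
      <ex:reviewedBy>Board-certified physicians</ex:reviewedBy>
      <displaytext>Medical content reviewed by named experts</displaytext>
    </descriptorset>
  </dr>
</powder>
```

Because the attribution points back to the issuer, a consumer of this document can decide how much weight to give the claim before trusting the pages it describes.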
Powder is designed to integrate with third-party authentication services and to be machine-readable. Users could install a plug-in that would look for claims made through Powder on any given page, automatically check their authentication, and inform other users of the result. Search engines could also read descriptions made using Powder, allowing them to help users locate the most trustworthy and relevant information.
“From the outset, a fundamental aspect of Powder is that, if the document is to be valid, it must point to the author of that document,” says Phil Archer, a project manager for i-sieve technologies who is involved with the Powder working group. “We strongly encourage authors to make available some sort of authentication mechanism.”
Ed Chi, a senior research scientist at the Palo Alto Research Center, believes that educating users about online trust-evaluation tools could be a major hurdle. “So far, human-computer interaction research seems to suggest that people are willing to do very little [to determine the trustworthiness of websites]; in fact, nothing,” he says. As an example, Chi notes the limited progress that has been made in teaching users to avoid phishing scams or to enter credit-card information only on sites that encrypt data. “The general state of affairs is pretty depressing,” he says.
Even if Web users do learn to use new tools to evaluate the trustworthiness of information, most experts agree that this is unlikely to solve the problem completely. “Trust is a very human thing,” Archer says. “[Technology] can never, I don’t think, give you an absolute guarantee that what is on your screen can be trusted at face value.”