AI Definitely Didn’t Stop Fake News about the Las Vegas Shooting
As Americans woke Monday to reports of the tragedy in Las Vegas, many were confronted not by accurate news accounts but by untruthful posts from questionable websites.
Ars Technica reports that Google promoted a 4chan post, which incorrectly identified the shooter, in its Top Stories. The item was posted to 4chan’s “pol” section, which is famously full of … provocative content. Outline reporter William Turton was told by a Google spokesperson that its Top Stories are chosen according to a combination of “authoritativeness” and “how fresh” an item is. Since the post was clearly not authoritative, it must have scored very highly on freshness, which is strange given that it’s hard to consider 4chan a conventional news source.
Elsewhere, Fast Company explains that factually inaccurate content also made its way onto Facebook’s Safety Check page for the Las Vegas shooting, in the form of a story from a blog called Alt-Right News. Other fake news swirled, too—Buzzfeed has a list of examples.
This is, of course, a troubling misstep at a time when tech giants are supposed to be redoubling their efforts to contain such content. Facebook and Google are cooperating with an ongoing congressional investigation into the spread of Russia-linked propaganda during the 2016 presidential election. Facebook has also been plagued by problems like anti-Semitic ad targeting and an inability to effectively police offensive content.
Both firms believe computation can solve these kinds of problems. Mark Zuckerberg has repeatedly and emphatically argued that artificial intelligence should be able to weed out offensive content and fake news. Speaking to the Outline, a Google spokesperson pointed out that “within hours, the 4chan story was algorithmically replaced by relevant results.”
But right now, filtering fake content around breaking news is close to impossible for AI: machine-learning systems need a large body of data to learn from, and during a fast-moving news event that data can’t be gathered and processed quickly enough. Meanwhile, more conventional algorithms based on measures of “authoritativeness” and “freshness” take hours to get things right, and hours isn’t fast enough.
The solution, for now at least, is probably not technological. Facebook has admitted as much by increasing the number of people it uses to vet offensive content. But it, and Google, may need to swell those ranks far more if they’re to avoid repeating this kind of mistake.