
AI Definitely Didn’t Stop Fake News about the Las Vegas Shooting

October 3, 2017

As Americans woke Monday to reports of the tragedy in Las Vegas, many were confronted not by accurate news accounts but by untruthful posts from questionable websites.

Ars Technica reports that Google promoted a 4chan post, which incorrectly identified the shooter, in its Top Stories. The item was posted to 4chan’s “pol” section, which is famously full of … provocative content. Outline reporter William Turton was told by a Google spokesperson that Top Stories are chosen according to a combination of “authoritativeness” and “how fresh” an item is. Since the 4chan post was clearly not authoritative, it must have scored as extremely fresh, because it is hard to consider 4chan a conventional news source at all.

Elsewhere, Fast Company explains that factually inaccurate content also made its way onto Facebook’s Safety Check page for the Las Vegas shooting, in the form of a story from a blog called Alt-Right News. Other fake news swirled, too—Buzzfeed has a list of examples.

This is, of course, a troubling misstep at a time when tech giants are supposed to be redoubling their efforts to contain such content. Facebook and Google are both part of an ongoing congressional investigation into the spread of Russia-linked propaganda during the 2016 presidential election. Facebook has also been plagued by problems like anti-Semitic ad targeting and an inability to effectively police offensive content.

Both firms believe computation can solve these kinds of problems. Mark Zuckerberg has repeatedly and emphatically argued that artificial intelligence should be able to weed out offensive content and fake news. Speaking to the Outline, a Google spokesperson noted that “within hours, the 4chan story was algorithmically replaced by relevant results.”

But right now, deciding to suppress fake content about breaking news is close to impossible for AI systems: they need a large set of data to learn from, and data about a fast-moving event can’t be rounded up and processed quickly enough. Meanwhile, relying on more conventional algorithms that rank by “authoritativeness” and “freshness” takes hours to get things right, and hours isn’t fast enough.
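To see how a ranking rule built on those two signals can go wrong, consider a toy sketch in which a score is a weighted mix of source authority and decaying freshness. Everything here is a hypothetical illustration for the sake of argument: the scoring function, the weights, and the numbers are invented, not Google’s actual algorithm.

```python
import math

def rank_score(authority, age_hours, w_authority=0.3, half_life_hours=6.0):
    """Toy "Top Stories" score: a weighted mix of source authority (0 to 1)
    and exponentially decaying freshness. All weights are made up."""
    freshness = math.exp(-age_hours * math.log(2) / half_life_hours)
    return w_authority * authority + (1 - w_authority) * freshness

# A brand-new post from a low-authority source...
fresh_low_authority = rank_score(authority=0.1, age_hours=0.5)
# ...versus a 12-hour-old story from a high-authority outlet.
older_high_authority = rank_score(authority=0.9, age_hours=12.0)

# With freshness weighted this heavily, the low-authority post wins.
print(fresh_low_authority > older_high_authority)  # True
```

The point of the sketch is that any fixed trade-off between the two signals has a failure mode: weight freshness heavily and a just-posted rumor can outrank every established outlet until their stories age in; weight authority heavily and breaking coverage surfaces too slowly.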

The solution, for now at least, is probably not technological. Facebook has admitted as much by increasing the number of people it employs to vet offensive content. But it, and Google, may need to swell those ranks far more if they are to avoid repeating this kind of mistake over and over.


Illustration by Rose Wong