AI Definitely Didn’t Stop Fake News about the Las Vegas Shooting

October 3, 2017

As Americans woke Monday to reports of the tragedy in Las Vegas, many were confronted not by accurate news accounts but by untruthful posts from questionable websites.

Ars Technica reports that Google promoted a 4chan post, which incorrectly identified the shooter, in its Top Stories. The item was posted to 4chan’s “pol” board, which is famously full of provocative content. A Google spokesperson told The Outline’s William Turton that Top Stories are chosen according to a combination of “authoritativeness” and “how fresh” an item is. Since the 4chan post was clearly not authoritative, it must have been considered very fresh indeed, given that it’s hard to think of 4chan as a conventional news source at all.

Elsewhere, Fast Company explains that factually inaccurate content also made its way onto Facebook’s Safety Check page for the Las Vegas shooting, in the form of a story from a blog called Alt-Right News. Other fake news swirled, too—Buzzfeed has a list of examples.

This is, of course, a troubling misstep at a time when tech giants are supposed to be redoubling their efforts to contain such content. Facebook and Google are both caught up in an ongoing congressional investigation into the spread of Russia-linked propaganda during the 2016 presidential election. And Facebook has been plagued by problems of its own, from anti-Semitic ad targeting to an inability to effectively police offensive content.

Both firms believe computation can solve these kinds of problems. Mark Zuckerberg has repeatedly and emphatically argued that artificial intelligence should be able to weed out offensive content and fake news. Speaking to The Outline, a Google spokesperson pleaded that “within hours, the 4chan story was algorithmically replaced by relevant results.”

But for now, asking AI to decide what counts as fake content during breaking news is close to impossible: machine-learning systems need a large body of data to learn from, and no such data set can be rounded up and processed quickly enough as a story unfolds. Meanwhile, more conventional algorithms that rank content by measures of “authoritativeness” and “freshness” take hours to get things right, and hours isn’t fast enough.

The solution, for now at least, is probably not technological. Facebook has tacitly admitted as much by increasing the number of people it employs to vet offensive content. But it, and Google, may need to swell those ranks far further if they’re to avoid repeating this kind of mistake over and over.
