Ever since Facebook’s misinformation problem was called out in the wake of the 2016 presidential election, the social network has been cautious in tackling it. Its response to date has amounted to awareness campaigns and third-party fact-checkers who flag questionable content. Today, though, the company announced that it will start using AI to detect misleading articles. Its algorithms will proactively seek out fake content and send what they find to fact-checkers, who will continue to alert users to dubious stories by adding warnings beneath them in the news feed. It’s worth noting that this is still a long way from using AI to stop the spread of fake news outright, but then, that notion makes Mark Zuckerberg deeply uncomfortable. In the past, he has called the idea of filtering out fake news “complex, both technically and philosophically,” adding that Facebook must “be extremely cautious about becoming arbiters of truth.” Good thing it still isn’t one, then.