The news: In its latest Community Standards Enforcement Report, released today, Facebook detailed the updates it has made to its AI systems for detecting hate speech and disinformation. The tech giant says 88.8% of all the hate speech it removed this quarter was detected by AI, up from 80.2% in the previous quarter. The AI can remove content automatically if the system has high confidence that it is hate speech, but most is still checked by a human being first.
Behind the scenes: The improvement is largely driven by two updates to Facebook’s AI systems. First, the company is now using massive natural-language models that can better decipher the nuance and meaning of a post. These models build on advances in AI research over the last two years that allow neural networks to be trained on raw text without any human supervision, removing the bottleneck of manual data curation.
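The core trick behind such self-supervised training can be illustrated in a few lines: the "labels" are manufactured from the text itself by hiding a word and asking the model to predict it, so no human annotator is needed. A toy sketch of that idea (not Facebook's actual pipeline; the helper name `masked_lm_examples` is invented for illustration):

```python
def masked_lm_examples(text: str, mask: str = "[MASK]"):
    """Build self-supervised training pairs from raw text.

    Each example hides one word; the hidden word is the prediction
    target, so the 'label' comes from the text itself rather than
    from human annotation.
    """
    tokens = text.split()
    examples = []
    for i, target in enumerate(tokens):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        examples.append((" ".join(masked), target))
    return examples

pairs = masked_lm_examples("hate speech is removed")
# e.g. the first pair hides the first word:
# ("[MASK] speech is removed", "hate")
```

Real systems mask tokens at scale across billions of sentences and train a large neural network on the resulting prediction task, but the supervision signal is generated exactly this way.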
The second update is that Facebook’s systems can now analyze content that consists of images and text combined, such as hateful memes. AI is still limited in its ability to interpret such mixed-media content, but Facebook has also released a new data set of hateful memes and launched a competition to help crowdsource better algorithms for detecting them.
Covid lies: Despite these updates, however, AI hasn’t played as big a role in handling the surge of coronavirus misinformation, such as conspiracy theories about the virus’s origin and fake news about cures. Facebook has instead relied primarily on human reviewers at over 60 partner fact-checking organizations. Only once a person has flagged something, such as an image with a misleading headline, do AI systems take over, searching for identical or similar items and automatically adding warning labels or taking them down. The team hasn’t yet been able to train a machine-learning model that finds new instances of disinformation on its own. “Building a novel classifier for something that understands content it’s never seen before takes time and a lot of data,” Mike Schroepfer, Facebook’s CTO, said on a press call.
Why it matters: The challenge reveals the limitations of AI-based content moderation. Such systems can detect content similar to what they’ve seen before, but they founder when new kinds of misinformation appear. In recent years, Facebook has invested heavily in developing AI systems that can adapt more quickly, but the problem is not just the company’s: it remains one of the biggest research challenges in the field.