Mark Zuckerberg told the US Congress this week that Facebook will increasingly rely on artificial intelligence to catch hate speech spread on the platform. “I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate,” said the Facebook CEO, who was called to testify after the scandal around Cambridge Analytica’s misappropriation of personal data belonging to millions of users.
Facebook already employs 15,000 human moderators to screen and remove offensive content, and it plans to hire another 5,000 by the end of this year, Zuckerberg said. But right now, those moderators can mostly only react to posts Facebook users have flagged. Using AI to identify potentially offensive material would make removal faster and easier. But it won’t be easy, for three reasons.
1. Words are easy, but meaning is hard
Language remains a huge AI challenge. It’s easy enough for a computer to catch key words or phrases, or to classify the sentiment of text, but understanding the meaning of a post would require far deeper knowledge of the world. What makes language a powerful and complex way to communicate is that it relies on common-sense knowledge, and that we use a mental model of other people to pack a lot of information into a few words (see “AI’s language problem”).
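The gap between catching words and catching meaning is easy to see in code. The sketch below is an invented, minimal keyword filter (the blocklist and example posts are hypothetical, not anything Facebook uses): it flags a post accusing someone of a scam, but it flags a post *raising awareness* of scams just as readily, because it matches strings rather than meaning.

```python
import re

# A minimal sketch of keyword-based flagging -- the kind of shallow check
# that is easy to automate. The blocklist and posts are invented examples.
BLOCKLIST = {"scam", "fraud"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any blocklisted word."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLOCKLIST)

print(flag_post("This charity is a total scam"))        # True -- flagged
print(flag_post("Great to see scam awareness rising"))  # True -- also flagged
print(flag_post("Lovely weather today"))                # False
```

Both of the first two posts trip the filter, even though only one is an accusation; telling them apart requires exactly the common-sense, contextual understanding the text describes.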
“Fake news, especially, is going to be very hard,” says Ernest Davis, a professor at NYU who specializes in the challenge of common-sense reasoning with computers. “If you look at what Snopes does, they look at a wide variety of things. And fake news is often made of half-truths.”
2. It’s an arms race
Even if progress is made in natural-language understanding, the purveyors of hate and misinformation could well adopt some of the same tools in order to evade detection.
So warns Sean Gourley, the CEO of Primer, a company that uses AI to generate reports for US intelligence agencies and is backed by In-Q-Tel, the intelligence community’s venture investment fund. Speaking at an MIT Technology Review event recently, Gourley said that AI would also inevitably be used to mass-produce targeted and optimized fake news stories in the not-too-distant future.
3. Video will make things worse
We may in fact be seeing the beginnings of a far more insidious era of fake news. Researchers have demonstrated convincing-looking synthetic videos and audio created by machine learning, including tricks like having politicians appear to make speeches that never happened. The trickery has already raised the troubling prospect of fake revenge porn.
Understanding video is something AI researchers are just starting to tackle. And fakes of this kind could prove especially difficult for an AI to catch because of how they are made: two neural networks compete, one generating fake imagery and the other trying to spot it (see “10 Breakthrough Technologies: Dueling neural networks”). Since the generator is trained precisely until its output fools the detector, any detection system built the same way can, in principle, be trained against.
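The adversarial dynamic behind those dueling networks can be sketched with a deliberately toy example. Real systems use two neural networks trained by gradient descent; here, purely to show the fooling dynamic, the “discriminator” is a hand-written scoring rule and the “generator” does a random search. All the numbers are invented.

```python
import random

# Toy illustration of the adversarial setup behind "dueling" networks.
# NOT a real GAN: the discriminator is a fixed rule and the generator is
# a random hill-climber, chosen only to make the dynamic visible.
random.seed(0)

REAL_MEAN = 5.0  # the "real data" is centered here

def discriminator(x: float) -> float:
    """Score in (0, 1]: how real a sample looks (closer to REAL_MEAN = more real)."""
    return 1.0 / (1.0 + abs(x - REAL_MEAN))

# Generator: starts far from the real data, then keeps any tweak that
# makes its output look more real to the discriminator.
g = 0.0
for _ in range(200):
    candidate = g + random.uniform(-0.5, 0.5)
    if discriminator(candidate) > discriminator(g):
        g = candidate  # this change fools the discriminator better; keep it

print(round(g, 1))  # ends up near 5.0: output the discriminator rates as real
```

The point of the toy is the training objective itself: the generator improves *only* by beating the detector, which is why a detector built on the same principle tends to get absorbed into the forger’s training loop rather than stopping it.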