A new machine-learning system tries to predict, from the very first comment, whether an online conversation is going to turn nasty.
How it works: Researchers gathered more than 1,200 exchanges from the discussion sections of Wikipedia Talk pages. They went through and labeled different “linguistic cues” in the conversations, including attempts at politeness, like using “please” and “thanks,” or other phrases suggesting that debate was welcome, like “I believe” or “I think.” Using the tagged threads, they then trained a system to predict from the first comment if a conversation was going to go south.
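The approach described above can be sketched roughly as a feature extractor over linguistic cues, feeding a simple linear scorer. This is an illustrative toy, not the researchers' actual model: the cue lexicon, feature names, and hand-set weights below are assumptions made up for the example.

```python
import re

# Tiny illustrative cue lexicons, loosely based on the cues named in the
# article; the study's real lexicon is far larger and more nuanced.
POLITENESS = {"please", "thanks", "thank"}
HEDGES = ("i believe", "i think")

def cue_features(comment):
    """Extract simple linguistic-cue features from a conversation's first comment."""
    text = comment.lower()
    words = re.findall(r"[a-z']+", text)
    return {
        "politeness": sum(w in POLITENESS for w in words),
        "hedge": sum(h in text for h in HEDGES),
        "starts_with_you": int(words[:1] == ["you"]),   # opens with "you"
        "direct_question": int("?" in comment),
    }

def awry_score(feats, weights=None):
    """Linear score: higher means the conversation looks more likely to go south."""
    # Hypothetical hand-set weights for illustration only; a real system would
    # learn these from the labeled threads. Per the article, "you"-openings and
    # direct questions push toward trouble, politeness and hedging away from it.
    weights = weights or {"politeness": -1.0, "hedge": -0.5,
                          "starts_with_you": 1.5, "direct_question": 1.0}
    return sum(weights[k] * v for k, v in feats.items())
```

In a trained version, the weights would come from fitting a classifier (e.g. logistic regression) on the tagged Wikipedia Talk threads rather than being set by hand.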
Results: Humans succeeded at the task about 72 percent of the time, compared with 61.6 percent for the algorithm. Not great, but the work uncovered some trends. For example, comments that contain direct questions or start with the word “you” signal that the conversation will end up getting heated.
Why it matters: An AI that predicts a conversation’s trajectory could help companies (cough, Twitter, cough) build tools that stop a fight or salvage online dialogue.