A new machine-learning system tries to predict whether an online conversation is going to get nasty right from the get-go.
How it works: Researchers gathered more than 1,200 exchanges from Wikipedia Talk pages, where editors debate changes to articles. They labeled the “linguistic cues” in each conversation, including markers of politeness, like “please” and “thanks,” and hedging phrases that signal openness to debate, like “I believe” or “I think.” Using the tagged threads, they then trained a system to predict from the first comment whether a conversation was going to go south.
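The cue-tagging step can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual code: the lexicons below are toy stand-ins inspired by the examples the article mentions, and a real system would feed features like these into a trained classifier.

```python
import re

# Toy cue lexicons, loosely inspired by the examples in the article.
# The researchers' actual cue sets are richer than this sketch.
POLITENESS_CUES = {"please", "thanks", "thank"}
HEDGING_CUES = {"i believe", "i think"}

def extract_cues(comment: str) -> dict:
    """Turn a conversation's first comment into simple binary cue features."""
    text = comment.lower()
    words = re.findall(r"[a-z']+", text)
    return {
        "politeness": any(w in POLITENESS_CUES for w in words),
        "hedging": any(cue in text for cue in HEDGING_CUES),
        # Signals the article associates with conversations turning heated:
        "direct_question": "?" in comment,
        "starts_with_you": bool(words) and words[0] == "you",
    }

print(extract_cues("Thanks, I think this section needs a source."))
print(extract_cues("You reverted my edit. Why?"))
```

In the first example the politeness and hedging flags fire; in the second, the direct-question and leading-“you” flags do, the patterns the study linked to conversations going sour.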
Results: Humans succeeded at the task about 72 percent of the time, compared with 61.6 percent for the algorithm. Not great, but the work uncovered some trends. For example, first comments that contain direct questions or begin with the word “you” are signals that the conversation will end up getting heated.
Why it matters: An AI that predicts a conversation’s trajectory could help companies (cough, Twitter, cough) build tools that stop a fight or salvage online dialogue.