Machine learning could stop an online war of words before it starts
A new machine-learning system tries to predict whether an online conversation is going to get nasty right from the get-go.
How it works: Researchers gathered more than 1,200 exchanges from Wikipedia Talk pages, the discussion sections where editors hash out changes to articles. They labeled different “linguistic cues” in the conversations, including attempts at politeness, like using “please” and “thanks,” and phrases suggesting that debate was welcome, like “I believe” or “I think.” Using the tagged threads, they then trained a system to predict, from the first comment alone, whether a conversation was going to go south.
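To make the idea concrete, here is a minimal sketch of that kind of cue-based pipeline: each opening comment is turned into a few binary "linguistic cue" features, which then feed a simple classifier. This is not the researchers' actual code; the cue lists, the scikit-learn logistic-regression model, and the toy data are all illustrative assumptions.

```python
# Illustrative sketch only, assuming scikit-learn is available; the cue lists
# and model choice are assumptions, not details from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

POLITENESS = ("please", "thanks", "thank you")   # politeness markers
HEDGES = ("i believe", "i think")                # phrases signaling open debate

def cue_features(comment: str) -> list:
    """Binary cue features for a single opening comment."""
    text = comment.lower()
    return [
        int(any(p in text for p in POLITENESS)),   # contains a politeness marker
        int(any(h in text for h in HEDGES)),       # contains a hedging phrase
        int("?" in text),                          # poses a direct question
        int(text.strip().startswith("you")),       # starts with "you"
    ]

# Toy labeled data: 1 = conversation later turned nasty, 0 = it stayed civil.
comments = [
    "Please take a look at the sources, thanks!",
    "I think the second paragraph could be clearer.",
    "You clearly didn't read the article before editing.",
    "Why did you remove my changes?",
]
labels = [0, 0, 1, 1]

X = np.array([cue_features(c) for c in comments])
model = LogisticRegression().fit(X, labels)

# Predict whether a new opening comment is likely to go south.
print(model.predict([cue_features("You need to stop reverting my edits.")]))
```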
Results: Humans got it right about 72 percent of the time, compared with 61.6 percent for the algorithm. Not great, but the work uncovered some trends. For example, comments that pose direct questions or start with the word “you” are signals that the conversation is likely to get heated.
Why it matters: An AI that predicts a conversation’s trajectory could help companies (cough, Twitter, cough) build tools that stop a fight or salvage online dialogue.