
Twitter picks researchers to help it clean up users’ conversations

July 30, 2018

The site wants these academics to create ways to measure the “health” of conversations on the platform. It hopes these metrics will lead to kinder tweets (or at least less racism and trolling).

The details: Twitter said Monday that it has selected two groups of academic researchers to lead the conversation health project—the conclusion of a process that began in March. The company reportedly received more than 230 submissions from institutions eager to work on metrics that engineers can use to analyze and improve the social network.

The plans: The first group includes researchers from universities in the Netherlands, the US, and Italy. They will examine the problems of “echo chambers” and uncivil conversations, producing ways to measure how much Twitter users interact with diverse viewpoints on the social network. They’ll also develop algorithms to distinguish “incivility” (which could simply be rude chatter) from “intolerant discourse” (such as racist language).

The second group, made up of researchers from universities in the UK and the Netherlands, will study whether exposing users to people with different perspectives and backgrounds can reduce prejudice and discrimination.

Isn’t Twitter a US-based company? It is. But while the US has more Twitter users than any other single country, only about a fifth of all Twitter users are based there. It makes sense, then, for the company to involve researchers outside the country if it wants a clear picture of what’s happening across its global user base.

The prognosis: Long-term effects remain to be seen, but when MIT Technology Review talked to academic researchers inside and outside the US in March, they thought it would be tricky to measure conversational health with just a few metrics.

Twitter seems to be taking the process seriously, though: it recently purged tens of millions of fake accounts in another effort to show it’s trying to improve.
