Microsoft has created a tool to find pedophiles in online chats

The news: Microsoft has created an automated system to detect sexual predators trying to groom children online. The tool, code-named Project Artemis, is designed to spot the patterns of communication predators use when targeting children.
Rating: On the basis of words and patterns of speech, the system assigns a rating for the likelihood that one of the participants is trying to groom the other. Companies implementing the technique can set a score (for example, 8 out of 10) above which any flagged conversation is sent to a human moderator for review. The moderators could potentially identify imminent threats and report them to law enforcement, and the review process would also give child protection experts more information on how pedophiles operate online. Microsoft has been using these techniques for several years in its own products, including the Xbox platform and Skype, the company’s chief digital safety officer, Courtney Gregoire, said in a blog post.
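Microsoft has not published the scoring model itself, but the flag-and-review flow it describes is a straightforward thresholding pipeline. The sketch below is a hypothetical illustration of that flow, not Project Artemis: the scoring function, the threshold value, and the moderator queue are all assumptions made for the example.

```python
# Hypothetical sketch of a threshold-based flagging pipeline like the one
# described above. The scoring function is a stand-in: Project Artemis's
# actual features and model have not been published.
from dataclasses import dataclass

@dataclass
class Conversation:
    conversation_id: str
    messages: list[str]

def grooming_risk_score(convo: Conversation) -> float:
    """Placeholder for the real classifier: returns a risk score from 0 to 10."""
    # A real system would score words and patterns of speech with a trained
    # model. We return a dummy constant just to keep the sketch runnable.
    return 0.0

REVIEW_THRESHOLD = 8.0  # e.g., "8 out of 10"; each company tunes its own cutoff

def triage(convo: Conversation, moderator_queue: list[Conversation]) -> None:
    """Send any conversation scoring above the threshold to human review."""
    if grooming_risk_score(convo) >= REVIEW_THRESHOLD:
        # Only humans decide whether to report a flagged chat to law enforcement.
        moderator_queue.append(convo)
```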
How does it work? Microsoft hasn’t explained the precise words or patterns the tool hunts for, since revealing them could help predators adjust their behavior to mask their activities. The tool is available free to companies that provide online chat functions, through Thorn, a nonprofit that builds technology products to defend children from sexual abuse.
The risks: The system is likely to throw up a lot of false positives, since automated systems still struggle to understand the meaning and context of language. That means social-media companies will need to invest in more human moderators if they are truly committed to tackling online grooming (and critics argue it is not clear that they are). The system also assumes that messages are not encrypted and that users consent to having their private communications read, which is not necessarily a given.
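A back-of-the-envelope calculation shows why false positives are a structural problem rather than a tuning issue. The numbers below are invented for illustration (Microsoft has not published the tool’s accuracy or how common grooming is), but they show that when the behavior being hunted is rare, even a fairly accurate classifier flags mostly innocent conversations.

```python
# Illustrative base-rate arithmetic with invented numbers; the tool's real
# accuracy and the true prevalence of grooming are unknown.
conversations = 1_000_000
prevalence = 0.0001          # assume 1 in 10,000 chats involves grooming
sensitivity = 0.95           # assumed true-positive rate
false_positive_rate = 0.01   # assume 1% of innocent chats get flagged

true_positives = conversations * prevalence * sensitivity                  # ~95
false_positives = conversations * (1 - prevalence) * false_positive_rate   # ~10,000

precision = true_positives / (true_positives + false_positives)
print(f"Flagged chats that are real threats: {precision:.1%}")  # roughly 0.9%
```

Under these assumed rates, human moderators would have to review around a hundred innocent conversations for every genuine threat, which is why the moderation workload matters as much as the classifier itself.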