Microsoft has created a tool to find pedophiles in online chats

The news: Microsoft has created an automated system to detect sexual predators trying to groom children online. The tool, code-named Project Artemis, is designed to spot the patterns of communication predators use to target children.
Rating: On the basis of words and patterns of speech, the system assigns a rating for the likelihood that one participant is trying to groom the other. Companies implementing the technique can set a threshold score (for example, 8 out of 10) above which flagged conversations are sent to a human moderator for review. Moderators could then identify imminent threats and report them to law enforcement; the flagged conversations would also give child protection experts more insight into how pedophiles operate online. Microsoft has been using these techniques for several years in its own products, including the Xbox platform and Skype, the company’s chief digital safety officer, Courtney Gregoire, said in a blog post.
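Microsoft hasn’t published Project Artemis’s internals, so the following Python sketch is purely illustrative: the scoring function, the 0-to-10 scale, and the default threshold of 8 are assumptions drawn only from the description above, with a toy scorer standing in for the trained classifier.

```python
# Purely illustrative: Microsoft has not published Project Artemis's
# internals. The scorer, 0-10 scale, and threshold are assumptions
# based only on the public description of the system.

def score_conversation(messages: list[str]) -> float:
    """Toy stand-in for a trained grooming-risk classifier (0-10 scale).

    The real system reportedly weighs words and patterns of speech;
    this placeholder just scales with message count so the pipeline runs.
    """
    return min(10.0, 0.5 * len(messages))

def flag_conversations(conversations: dict[str, list[str]],
                       threshold: float = 8.0) -> list[str]:
    """Return IDs of conversations scoring at or above the review threshold."""
    flagged = []
    for conv_id, messages in conversations.items():
        if score_conversation(messages) >= threshold:
            flagged.append(conv_id)  # would be queued for a human moderator
    return flagged

# Example: only the longer conversation crosses the (hypothetical) threshold.
chats = {"a": ["hi"] * 3, "b": ["hi"] * 20}
print(flag_conversations(chats))  # ['b']
```

The key design point is the tunable threshold: each company decides how many flags its human moderators can realistically absorb.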
How does it work? Microsoft hasn’t explained the precise words or patterns the tool hunts for, since revealing them could prompt predators to adjust their behavior to mask their activities. The tool is available free to companies that provide online chat functions, through a nonprofit called Thorn, which builds technology products to defend children from sexual abuse.
The risks: The system is likely to throw up a lot of false positives, since automated systems still struggle to understand the meaning and context of language. That means social-media companies will need to invest in more moderators if they are truly committed to tackling online grooming (and victims argue it is not clear that they are). The system also assumes that messages are not encrypted and that users consent to their private communications being read, which is not necessarily a given.
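A quick back-of-the-envelope calculation shows why false positives dominate when the behavior being hunted is rare. All numbers below are invented for illustration; Microsoft has published no accuracy figures:

```python
# All numbers invented for illustration; Microsoft has published no
# accuracy figures for Project Artemis.
total = 1_000_000           # conversations scanned
base_rate = 0.0001          # assume 1 in 10,000 involves grooming
true_positive_rate = 0.95   # classifier catches 95% of real cases
false_positive_rate = 0.01  # and wrongly flags 1% of innocent ones

true_pos = total * base_rate * true_positive_rate           # 95
false_pos = total * (1 - base_rate) * false_positive_rate   # 9,999
precision = true_pos / (true_pos + false_pos)
print(f"Share of flags that are real cases: {precision:.1%}")  # about 0.9%
```

Even with an optimistic classifier, roughly 99 out of every 100 flagged conversations in this hypothetical would be innocent, which is why human review capacity matters so much.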