A GPT-3 bot posted comments on Reddit for a week and no one noticed

Busted: A bot powered by OpenAI's GPT-3 language model has been unmasked after a week of posting comments on Reddit. Under the username /u/thegentlemetre, the bot was interacting with people on /r/AskReddit, a popular forum for general chat with 30 million users, posting in bursts of roughly once a minute.
Fooled ya—again: It's not the first time GPT-3 has fooled people into thinking its writing comes from a human. In August a college student published a GPT-3-generated blog post that hit the top spot on Hacker News and led a handful of people to subscribe. And GPT-3 has been used to compose several articles about itself, though these typically end with a human-written disclaimer. But this bot posed as a regular Redditor and published hundreds of comments before being spotted.
Detective work: The bot's prodigious posting caught the attention of Philip Winston, who describes on his blog how he unmasked it. Winston then confirmed that the language generated by the bot matched the output of a GPT-3-powered tool called the Philosopher AI, which was set up to answer tongue-in-cheek questions such as "If a tree falls in the woods and nobody is there to hear it, do quantum mechanics still manifest classical reality without an observer?" The developer of the Philosopher AI does not allow automated use of his service and blocked the Reddit bot, which subsequently stopped posting.
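The tell was cadence as much as content: humans rarely sustain a comment a minute for hours on end. As a rough illustration (not Winston's actual method), here is a minimal sketch, assuming the PRAW library and placeholder Reddit API credentials, of how one might measure an account's posting cadence from its public comment history; the username is the one reported in this story, and everything else is illustrative.

```python
# A rough sketch, not Winston's actual method. Assumes PRAW
# (pip install praw) and placeholder API credentials.
import statistics

import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credential
    client_secret="YOUR_CLIENT_SECRET",  # placeholder credential
    user_agent="cadence-check/0.1 by YOUR_USERNAME",
)

# Pull the account's most recent comments and sort their timestamps.
timestamps = sorted(
    comment.created_utc
    for comment in reddit.redditor("thegentlemetre").comments.new(limit=200)
)

# Gaps between consecutive comments, in seconds.
gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]

if gaps:
    # A median gap near 60 seconds, sustained over hundreds of
    # comments, is hard to square with a human typing replies.
    print(f"comments sampled: {len(timestamps)}")
    print(f"median gap:       {statistics.median(gaps):.0f} s")
    print(f"shortest gap:     {min(gaps):.0f} s")
```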
No harm done? Most of /u/thegentlemetre’s comments were harmless. Its most popular post was a story about a colony of humans living in elevator shafts. But it also engaged with conspiracy theories and sensitive topics, including suicide. Responding to a request for advice from Redditors who said they had had suicidal thoughts in the past, the bot replied: “I think the thing that helped me most was probably my parents. I had a very good relationship with them and they were always there to support me no matter what happened. There have been numerous times in my life where I felt like killing myself but because of them, I never did it.” The response was upvoted 157 times.
Why it matters: This incident could be seen to confirm concerns that OpenAI raised over its previous language model, GPT-2, which it said was too dangerous to release to the public because of its potential for misuse. The AI lab is trying to keep GPT-3 under control as well, giving access (via a website) only to selected individuals and licensing the whole software exclusively to Microsoft. And yet if we want these systems to do no harm, they require more scrutiny, not less. Letting more researchers examine the code and explore its potential would be the safer option in the long run.