Artificial intelligence

OpenAI has released the largest version yet of its fake-news-spewing AI

The AI lab has also released a report to explain why it is releasing the model in increments.
August 29, 2019
Photo illustration of JFK delivering a fake speech. Ms. Tech; Wikimedia Commons

In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued the withholding was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that the decision was an important experiment in how to handle high-stakes research.

Now, six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that’s half the size of the full one, which has still not been released.

In May, a few months after GPT-2’s initial debut, OpenAI revised its stance on withholding the full code to what it calls a “staged release”: the staggered publication of incrementally larger versions of the model, ramping up to the full one. In February it had published a version that was merely 8% of the size of the full model; it then published another, roughly a quarter of the full size, before the most recent release. Throughout this process, it also partnered with selected research institutions to study the full model’s implications.

The report details what OpenAI learned throughout this process. It notes that both the staged release and research partnership agreements proved to be processes worth replicating in the future. They helped OpenAI better understand and anticipate the possible malicious uses of GPT-2. And indeed, the research partners were able to better quantify some of the threats that were only previously speculative. A study conducted by collaborators at Cornell University, for example, found that readers on average believed GPT-2’s outputs to be genuine news articles nearly as often as New York Times ones. Several researchers outside of official partnerships also began tackling the challenge of detecting machine-generated text.

GPT-2 in action

  • Human-written prompt:

    John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination. Due to miraculous developments in nanotechnology, Kennedy’s brain was rebuilt from his remains and installed in the control center of a state-of-the-art humanoid robot. Below is a transcript of his acceptance speech.

  • Full model-generated text:

    “It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams. In the months and years to come, there will be many battles in which we will have to be strong and we must give all of our energy, not to repel invaders, but rather to resist aggression and to win the freedom and the equality for all of our people. The destiny of the human race hangs in the balance; we cannot afford for it to slip away. Now and in the years to come, the challenge before us is to work out how we achieve our ultimate destiny. If we fail to do so, we are doomed.”
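
The sample above came from the full model, which remains private, but the released checkpoints can be tried the same way. What follows is a minimal sketch, assuming the open-source Hugging Face transformers library, which distributes the newly released 774-million-parameter weights under the name “gpt2-large”; the sampling settings are illustrative, not OpenAI’s.

    # A sketch of sampling from the released 774M checkpoint, assuming the
    # Hugging Face transformers library ("gpt2-large" is that library's label
    # for the weights, not OpenAI's).
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
    model = GPT2LMHeadModel.from_pretrained("gpt2-large")

    prompt = "John F. Kennedy was just elected President of the United States"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Top-k sampling, as in OpenAI's published examples; max_length is arbitrary.
    output = model.generate(
        input_ids,
        do_sample=True,
        top_k=40,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,  # silences a padding warning
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))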

The authors concluded that, after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar help, and question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial than continuing to withhold it. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI’s withholding of the code moot anyway.

The report has received a mixed response. Some have lauded OpenAI for sparking a discussion and introducing a new set of norms that didn’t previously exist. “The staged release of GPT-2 [...] was a useful experiment,” says Peter Eckersley, the director of research at the Partnership on AI, of which OpenAI is a member. “Through gathering the AI community to debate these matters, we've found there are many subtle pieces that need to be gotten right in deciding when and how to publish research that has a risk of unintended consequences or malicious uses.”

Others, however, have remained critical of OpenAI’s decisions. Vanya Cohen, a recent master’s graduate from Brown University who recreated an open-source version of GPT-2, argues that withholding the model does more to slow research on countermeasures than it does to prevent replication. “Large language models like GPT-2 are the best currently available tools for identifying fake text generated by these same models,” he says.
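
Cohen’s point rests on a concrete technique: score a passage by how predictable a language model finds it, since machine-generated text tends to look “too likely” to a model resembling the one that produced it. Below is a minimal sketch of that idea, assuming the Hugging Face transformers library; the model choice and the cutoff are illustrative, not Cohen’s.

    # A sketch of perplexity-based detection: machine text tends to score
    # lower perplexity under a similar language model than human prose does.
    # Assumes the Hugging Face transformers library; the cutoff is illustrative.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        # Average per-token perplexity of `text` under the model.
        ids = tokenizer.encode(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean cross-entropy per token
        return torch.exp(loss).item()

    sample = "The destiny of the human race hangs in the balance."
    score = perplexity(sample)
    print(f"perplexity: {score:.1f}")
    print("model-like" if score < 25.0 else "human-like")  # illustrative cutoff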

Still others were more measured: “I don’t think a staged release was particularly useful in this case because the work is very easily replicable,” says Chip Huyen, a deep learning engineer at Nvidia. “But it might be useful in the way that it sets a precedent for future projects. People will see staged release as an alternative option.” Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, which also adopted a staged release for its language model Grover, echoes the sentiment: “I applaud their intent to design a thoughtful, gradual release process for AI technology but question whether all the fanfare was warranted.”

Jack Clark, the policy director of OpenAI, places GPT-2 in the context of the organization’s broader mission. “If we are successful as an AI community in being able to build [artificial general intelligence], we will need a huge amount of historical examples from within AI” of how to handle high-stakes research, he says. “But what if there aren’t any historical examples? Well, then you have to generate [your own] evidence—which is what we’re doing.”
