In February, OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued it was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that the decision was an important experiment in how to handle high-stakes research.
Now six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that’s half the size of the full one, which has still not been released.
In May, a few months after GPT-2's initial debut, OpenAI revised its stance on withholding the full code to what it calls a "staged release"—the staggered release of incrementally larger versions of the model in a ramp-up to the full one. In February, it had published a version of the model that was merely 8% of the size of the full one. It then published another, roughly a quarter of the full version's size, before the most recent release. During this process, it also partnered with selected research institutions to study the full model's implications.
The report details what OpenAI learned throughout this process. It notes that both the staged release and the research partnership agreements proved to be processes worth replicating in the future. They helped OpenAI better understand and anticipate the possible malicious uses of GPT-2. And indeed, the research partners were able to better quantify some of the threats that were previously only speculative. A study conducted by collaborators at Cornell University, for example, found that readers on average believed GPT-2's outputs to be genuine news articles nearly as often as New York Times ones. Several researchers outside of official partnerships also began tackling the challenge of detecting machine-generated text.
GPT-2 in action
Human-written prompt:

John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination. Due to miraculous developments in nanotechnology, Kennedy's brain was rebuilt from his remains and installed in the control center of a state-of-the-art humanoid robot. Below is a transcript of his acceptance speech.
Full model-generated text:
“It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams. In the months and years to come, there will be many battles in which we will have to be strong and we must give all of our energy, not to repel invaders, but rather to resist aggression and to win the freedom and the equality for all of our people. The destiny of the human race hangs in the balance; we cannot afford for it to slip away. Now and in the years to come, the challenge before us is to work out how we achieve our ultimate destiny. If we fail to do so, we are doomed.”
The authors concluded that after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar help, and the development of question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI's withholding of the code moot anyway.
The report has received a mixed response. Some have lauded OpenAI for sparking a discussion and introducing a new set of norms that didn’t previously exist. “The staged release of GPT-2 [...] was a useful experiment,” says Peter Eckersley, the director of research at the Partnership on AI, of which OpenAI is a member. “Through gathering the AI community to debate these matters, we've found there are many subtle pieces that need to be gotten right in deciding when and how to publish research that has a risk of unintended consequences or malicious uses.”
Others, however, have remained critical of OpenAI's decisions. Vanya Cohen, a recent master's graduate from Brown University who recreated an open-source version of GPT-2, argues that withholding the model does more to slow down research on countermeasures than to slow down replication. "Large language models like GPT-2 are the best currently available tools for identifying fake text generated by these same models," he says.
Still others were more measured: “I don’t think a staged release was particularly useful in this case because the work is very easily replicable,” says Chip Huyen, a deep learning engineer at Nvidia. “But it might be useful in the way that it sets a precedent for future projects. People will see staged release as an alternative option.” Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, which also adopted a staged release for its language model Grover, echoes the sentiment: “I applaud their intent to design a thoughtful, gradual release process for AI technology but question whether all the fanfare was warranted.”
Jack Clark, the policy director of OpenAI, places GPT-2 in the context of the organization’s broader mission. “If we are successful as an AI community in being able to build [artificial general intelligence], we will need a huge amount of historical examples from within AI” of how to handle high-stakes research, he says. “But what if there aren’t any historical examples? Well, then you have to generate [your own] evidence—which is what we’re doing.”