The news: On September 22, Microsoft announced that it would begin exclusively licensing GPT-3, the world’s largest language model, built by San Francisco–based OpenAI. The model acts like a powerful autocomplete: it can generate essays given a starting sentence, songs given a musical intro, or even web-page layouts given a few lines of HTML code. Microsoft says it will begin making use of these capabilities in its products and services, though it hasn’t specified how.
What does exclusive mean? The companies say OpenAI will continue to offer its public-facing API, which allows chosen users to send text to GPT-3 or OpenAI’s other models and receive the models’ output. Only Microsoft, however, will have access to GPT-3’s underlying code, allowing it to embed, repurpose, and modify the model as it pleases.
A long time coming: OpenAI was originally founded as a nonprofit and raised its initial billion dollars on the premise that it would pursue AI for the benefit of humanity. It asserted that it would be independent from for-profit financial incentives and thus uniquely positioned to shepherd the technology with society’s best interests in mind.
But in early 2019, it stirred controversy when it chose not to release GPT-3’s predecessor, GPT-2, and shortly after broke from its pure nonprofit status to set up a for-profit arm. At the time, many speculated that part of the organization’s motive for withholding GPT-2 might have been to preserve the possibility of licensing the model in the future. In July of 2019, OpenAI accepted a $1 billion investment from Microsoft (split between cash and credits to Azure, Microsoft’s cloud computing platform).
Indeed, in the months following the Microsoft investment, OpenAI increasingly emphasized the need to commercialize its technologies in order to fund its continued research. The latest news now solidifies OpenAI’s transformation. GPT-3 likely won’t be the only model it exclusively licenses to Microsoft—it’s only the first.
Why it matters: Over the past few years, there has been growing concern over the way AI concentrates power. The most advanced AI techniques require an enormous amount of computational resources, which increasingly only the wealthiest companies can afford. This gives tech giants outsize influence not only in shaping the field of research but also in building and controlling the algorithms that shape our lives.
Some experts have proposed leveling the playing field by increasing government funding to academic labs for AI research. But this requires a level of foresight and coordination that the US government in particular has struggled to manifest. OpenAI seemed to offer an alternative that would rely on neither corporate nor government dollars—but that no longer seems to be the case.