Last Thursday, MIT hosted a celebration for the new Stephen A. Schwarzman College of Computing, a $1 billion effort to create an interdisciplinary hub of AI research. During an onstage conversation between Schwarzman, the CEO and cofounder of investment firm Blackstone, and the Institute’s president, Rafael Reif, Schwarzman noted, as he has before, that his leading motivation for donating the first $350 million to the college was to give the US a competitive boost in the face of China’s coordinated national AI strategy.
That prompted a series of questions about the technological race between the countries. They essentially boiled down to this: When it comes to today’s AI, more data is better, because current machine learning is a brute-force affair. How can the US outcompete China when the latter has far more people and the former cares more about data privacy? Is it, in other words, just a lost cause for the US to try to “win”?
Here was Reif’s response: “That is the state of the art today—that you need tons of data to teach a machine.” He added, “State of the art changes with research.”
Reif’s comments served as an important reminder about the nature of AI: throughout its history, the state of the art has evolved quickly. We could very well be one breakthrough away from a day when the technology looks nothing like what it does now. In other words, data may not always be king. (See “We analyzed 16,625 papers to figure out where AI is headed next.”)
Indeed, within the last few years, several researchers have begun to pursue new techniques that require very little data. Josh Tenenbaum, a professor of brain and cognitive sciences at MIT, for instance, has been developing probabilistic learning models, inspired by the way children quickly generalize their knowledge from exposure to just a few examples.
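To give a flavor of what such an approach can look like, here is a minimal, hypothetical sketch of Bayesian concept learning, the general family Tenenbaum’s work is often associated with. The hypothesis space, priors, and numbers below are invented for illustration and do not come from his lab’s models; the point is only that a probabilistic learner can concentrate its belief after seeing just a few examples.

```python
# A toy Bayesian concept learner: guess which number concept generated a few
# examples. The hypothesis space below is invented for illustration.

# Hypotheses: candidate concepts, each a set of integers from 1 to 100.
hypotheses = {
    "even": set(range(2, 101, 2)),
    "odd": set(range(1, 101, 2)),
    "powers_of_two": {2 ** k for k in range(1, 7)},  # 2, 4, ..., 64
    "multiples_of_ten": set(range(10, 101, 10)),
}

def posterior(examples):
    """Score each hypothesis by its likelihood under the 'size principle':
    smaller hypotheses that still contain every example win sharply."""
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in examples):
            # Probability of drawing each example uniformly from h.
            scores[name] = (1.0 / len(h)) ** len(examples)
        else:
            scores[name] = 0.0  # hypothesis ruled out by the data
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()} if total else scores

# Just three examples are enough to concentrate belief on "powers_of_two".
print(posterior([2, 8, 64]))
```

After only three observations, the tiny "powers_of_two" hypothesis (6 members) outweighs the consistent but much larger "even" hypothesis (50 members), loosely mirroring how a child needs only a few examples to pin down a concept.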
Reif continued to explain his vision. “Studying how the brain learns, we created the state of the art today,” he said. “We can use that state of the art now to [further] learn how the brain learns.” Given that our brains do not require much data to learn, the better we come to understand their processes, the more closely we will be able to mimic them in new types of algorithms.
This story originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.