Progress in artificial intelligence has some people worried that software will take jobs such as driving trucks away from humans. Now leading researchers are finding that they can make software that learns to do one of the trickiest parts of their own jobs: designing machine-learning software.
In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.
In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT, the University of California, Berkeley, and Google’s other artificial intelligence research group, DeepMind.
If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply.
Jeff Dean, who leads the Google Brain research group, mused last week that some of the work of such experts could be supplanted by software. He described what he termed "automated machine learning" as one of the most promising research avenues his team was exploring.
“Currently the way you solve problems is you have expertise and data and computation,” said Dean, at the AI Frontiers conference in Santa Clara, California. “Can we eliminate the need for a lot of machine-learning expertise?”
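The "automated machine learning" Dean describes can be sketched in miniature as a two-level loop: an outer loop proposes candidate model configurations, an inner loop trains each one, and the best design wins. The toy task, search space, and numbers below are illustrative assumptions, not Google's actual system, which searches over whole network architectures on hundreds of GPUs.

```python
import random

random.seed(0)

# Synthetic regression task: y = 3x + 1 plus a little noise.
data = [(i / 50.0, 3 * (i / 50.0) + 1 + random.gauss(0, 0.05)) for i in range(50)]
train, valid = data[:40], data[40:]

def fit(lr, steps):
    """Inner loop: fit a line y = w*x + b by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in train:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def val_loss(w, b):
    """Mean squared error on the held-out points."""
    return sum(((w * x + b) - y) ** 2 for x, y in valid) / len(valid)

# Outer loop: random search over candidate "designs" (here, just two
# hyperparameters; real systems search over entire architectures).
best = None
for _ in range(20):
    cfg = {"lr": 10 ** random.uniform(-3, -1), "steps": random.randint(10, 100)}
    loss = val_loss(*fit(**cfg))
    if best is None or loss < best[0]:
        best = (loss, cfg)

print("best config:", best[1], "  validation loss: %.4f" % best[0])
```

The point of the sketch is the division of labor: the human supplies data and a search space, and the outer loop does the design work a machine-learning expert would otherwise do by hand.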
One set of experiments from Google’s DeepMind group suggests that what researchers are terming “learning to learn” could also help lessen the problem of machine-learning software needing to consume vast amounts of data on a specific task in order to perform it well.
The researchers challenged their software to create learning systems for collections of multiple different, but related, problems, such as navigating mazes. It came up with designs that showed an ability to generalize, and to pick up new tasks with less additional training than usual.
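The flavor of those experiments can be shown with a minimal "learning to learn" loop in the style of the Reptile meta-learning recipe: train briefly on one sampled task, then nudge a shared starting point toward the result, so the starting point itself becomes easy to adapt to new related tasks. The task family, model, and constants below are all illustrative assumptions, not DeepMind's maze setup.

```python
import random

random.seed(1)
xs = [i / 10 for i in range(10)]

def sample_task():
    """A 'task' is a linear function y = a*x + b with its own a, b."""
    return random.uniform(0.5, 1.5), random.uniform(-1, 1)

def adapt(w, c, task, steps=5, lr=0.1):
    """Inner loop: a few gradient steps on one task from init (w, c)."""
    a, b = task
    for _ in range(steps):
        for x in xs:
            err = (w * x + c) - (a * x + b)
            w -= lr * err * x
            c -= lr * err
    return w, c

def task_loss(w, c, task):
    a, b = task
    return sum(((w * x + c) - (a * x + b)) ** 2 for x in xs) / len(xs)

# Outer loop: after each brief adaptation, move the shared initialization
# part of the way toward that task's solution.
w0, c0 = 0.0, 0.0
meta_lr = 0.2
for _ in range(200):
    task = sample_task()
    w, c = adapt(w0, c0, task)
    w0 += meta_lr * (w - w0)
    c0 += meta_lr * (c - c0)

# On held-out tasks, one adaptation step from the meta-learned init should
# do better on average than one step from scratch.
held_out = [sample_task() for _ in range(20)]

def avg_loss(init, steps):
    return sum(task_loss(*adapt(*init, t, steps=steps), t) for t in held_out) / len(held_out)

meta, scratch = avg_loss((w0, c0), 1), avg_loss((0.0, 0.0), 1)
print(f"avg loss after 1 step: meta {meta:.3f} vs scratch {scratch:.3f}")
```

The meta-learned initialization sits near the "center" of the task family, which is why new tasks need less additional training, the same effect the DeepMind designs exhibited at far larger scale.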
The idea of creating software that learns to learn has been around for a while, but previous experiments didn’t produce results that rivaled what humans could come up with. “It’s exciting,” says Yoshua Bengio, a professor at the University of Montreal, who previously explored the idea in the 1990s.
Bengio says the more potent computing power now available, and the advent of a technique called deep learning, which has sparked recent excitement about AI, are what's making the approach work. But he notes that so far it requires such extreme computing power that it's not yet practical to think about lightening the load on, or partially replacing, machine-learning experts.
Google Brain’s researchers describe using 800 high-powered graphics processors to power software that came up with designs for image recognition systems that rivaled the best designed by humans.
Otkrist Gupta, a researcher at the MIT Media Lab, believes that will change. He and MIT colleagues plan to open-source the software behind their own experiments, in which learning software designed deep-learning systems that matched human-crafted ones on standard tests for object recognition.
Gupta was inspired to work on the project by frustrating hours spent designing and testing machine-learning models. He thinks companies and researchers are well motivated to find ways to make automated machine learning practical.
"Easing the burden on the data scientist is a big payoff," he says. "It could make you more productive, let you build better models, and free you up to explore higher-level ideas."