As a student, Galileo famously observed a lamp swinging in Pisa Cathedral and timed its swings against his pulse. He concluded that the period of each swing was constant and independent of its amplitude.
Galileo went on to suggest that a pendulum could control a clock and later designed such a machine, although the first clock of this type was built by Huygens some 15 years after Galileo’s death.
In making this discovery, Galileo’s genius was to ignore all the messy details that were otherwise present in the cathedral—air resistance, temperature, flickering light, noise, other people, and so on. He considered a simple model of a swinging lamp using only its period, focusing on the salient detail.
For many historians, Galileo’s approach represents the earliest stage in the evolution of the scientific method, the same process that has produced flight, quantum theory, electronic computing, general relativity, and even artificial intelligence.
In recent years, AI systems have begun to find interesting patterns in data themselves and have even derived certain laws of physics as a result. But in these cases, the AI always studied a special data set that had been isolated from real-world distractions. These systems remain a long way from matching the abilities of humans such as Galileo.
And that raises an interesting question: is it possible to design an AI system that develops theories the way Galileo did, zeroing in on the information it needs to explain different aspects of the world it observes?
Today we get an answer, thanks to the work of Tailin Wu and Max Tegmark at MIT in Cambridge, Massachusetts. These guys have developed an AI system that copies Galileo’s approach and some of the other tricks that physicists have learned over the centuries. Their system—called the AI Physicist—is capable of teasing out several laws of physics in mystery worlds deliberately constructed to simulate the complexity of our universe.
Wu and Tegmark begin by identifying a significant weakness of modern AI systems. When given a big data set, they typically look for a single theory that governs the entire thing. But that becomes increasingly difficult the bigger and more messy the data set becomes.
Indeed, the inside of a cathedral would be a virtually impossible environment for any current AI system to mine for laws of physics.
To cope with this problem, physicists use a number of thought processes to simplify the problem. The first is to develop theories that describe only a small part of the data set. That produces multiple theories that all describe different aspects of the data—like quantum mechanics and relativity, for example.
Wu and Tegmark have developed the AI Physicist to treat big data sets in the same way.
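The paper's actual architecture uses neural networks, but the divide-and-conquer idea itself can be illustrated with a toy sketch of my own devising: fit several simple candidate theories, let each data point be claimed by whichever theory predicts it best, and refit until the theories specialize to their own regions. Everything here (the two linear "laws," the iteration scheme, the variable names) is a hypothetical illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical 1-D world: the observed value y follows a different linear
# "law" depending on which region the point x falls in.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.where(x < 0, 2.0 * x + 1.0, -3.0 * x + 0.5)  # two local laws

def fit_linear(xs, ys):
    # Least-squares fit of y = slope * x + intercept.
    A = np.vstack([xs, np.ones_like(xs)]).T
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef  # (slope, intercept)

# Start from a deliberately wrong split, then alternate between fitting
# each theory to its points and reassigning points to the best theory.
masks = [x < 0.3, x >= 0.3]
for _ in range(10):
    theories = [fit_linear(x[m], y[m]) for m in masks]
    errs = np.stack([np.abs((a * x + b) - y) for a, b in theories])
    best = errs.argmin(axis=0)
    masks = [best == i for i in range(len(theories))]

print([np.round(t, 2) for t in theories])  # recovers both local laws
```

After a few iterations each theory "owns" one region and recovers its law (slopes 2 and -3), even though no single line fits the whole data set.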
Another general rule that physicists use is Occam’s Razor—the idea that simpler explanations are better. That’s why physicists generally discount theories requiring a prime mover to create the universe, or the Earth or life itself: the supposed existence of a prime mover raises an additional set of questions about its nature and origin.
AI systems are well known for producing overly complex models to describe the data they are trained on. So Wu and Tegmark also teach their system to prefer simpler theories over more complex ones. They do this using a straightforward measure of complexity based on the amount of information the theory encapsulates.
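A simplified toy stand-in for such a measure (not the paper's actual metric) is to score each theory by the bits needed to encode its parameters plus the bits needed to encode its residual errors, and prefer the lower total. The encoding rule and the precision constant below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)  # a linear law plus noise

def description_length(params, residuals, precision=1e-6):
    # Toy complexity score: bits to encode the parameters plus bits to
    # encode the residual errors, each at a fixed precision.
    bits = lambda v: float(np.sum(np.log2(1.0 + np.abs(v) / precision)))
    return bits(np.asarray(params)) + bits(np.asarray(residuals))

simple = np.polyfit(x, y, 1)    # 2-parameter straight line
complex_ = np.polyfit(x, y, 9)  # 10-parameter polynomial chasing the noise

dl_simple = description_length(simple, y - np.polyval(simple, x))
dl_complex = description_length(complex_, y - np.polyval(complex_, x))
print(dl_simple < dl_complex)  # prints True: the simpler theory wins
```

The degree-9 polynomial fits the training points slightly better, but the extra parameters cost far more bits than they save in residuals, so the score correctly favors the straight line.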
Another famous physicists’ trick is to look for ways to unify theories. If one theory can do the job of two, it is probably better. This has driven physicists’ quest to find the one law that rules them all (although there is little in the way of actual evidence that such a theory exists).
A final principle that has helped physicists fare well is lifelong learning: the idea that if a particular approach worked in the past, it might work on future problems. So Wu and Tegmark’s AI Physicist remembers learned solutions and tries them on future problems.
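This remember-and-reuse loop can be sketched in a few lines. The "theory hub" name echoes the paper's terminology, but the code below is my own minimal toy framing, not the authors' implementation: a store of previously learned laws that is consulted before any fresh fitting is done.

```python
# A "theory hub" of laws learned on past worlds.
theory_hub = []

def predict(theory, x):
    slope, intercept = theory
    return slope * x + intercept

def solve_world(xs, ys, tolerance=1e-3):
    # 1. Try remembered theories first, like an experienced scientist.
    for theory in theory_hub:
        if all(abs(predict(theory, x) - y) < tolerance for x, y in zip(xs, ys)):
            return theory, "recalled"
    # 2. Otherwise learn from scratch (here: a two-point linear fit)
    #    and remember the result for future worlds.
    slope = (ys[1] - ys[0]) / (xs[1] - xs[0])
    theory = (slope, ys[0] - slope * xs[0])
    theory_hub.append(theory)
    return theory, "learned"

xs = [0.0, 1.0, 2.0, 3.0]
first = solve_world(xs, [1.0, 3.0, 5.0, 7.0])
second = solve_world(xs, [1.0, 3.0, 5.0, 7.0])
print(first)   # ((2.0, 1.0), 'learned')
print(second)  # ((2.0, 1.0), 'recalled') — no fitting needed the second time
```

The second world is solved instantly by recall rather than refitting, which is the speedup the lifelong-learning principle is after.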
Armed with these techniques, Wu and Tegmark put their AI Physicist through its paces. They do this by devising 40 mystery worlds governed by laws of physics that vary from one location to another. So a ball thrown into one of these worlds might initially fall under the force of gravity into a region governed by an electromagnetic potential, then into a region governed by a harmonic potential, and so on.
The question that Wu and Tegmark ask is whether their AI Physicist can derive the relevant laws of physics simply by looking at the movement of the ball over time. And they compare the behavior of the AI Physicist with that of a “Newborn Physicist” that uses the same approach but without the benefit of lifelong learning, as well as with a conventional neural network.
It turns out that both the AI Physicist and the Newborn Physicist can derive the relevant laws. “Both agents are able to solve above 90% of all the 40 mystery worlds,” they say.
The main advantage of the AI Physicist over the Newborn agent is that it learns more quickly, using less data. “This is much like an experienced scientist can solve new problems way faster than a beginner by building on prior knowledge about similar problems,” say Wu and Tegmark.
And their system is significantly better than a conventional neural network. “Our [AI Physicist] typically learns faster and produces mean-squared prediction errors about a billion times smaller than a standard feedforward neural net of comparable complexity,” they say.
That’s impressive work that suggests AI systems could have a significant impact on the way science proceeds. Of course, the real test will be to let the AI Physicist loose on a real environment, such as the inside of Pisa Cathedral, and see whether it derives the principle behind mechanical clocks.
Or perhaps to let it loose on other complex data sets, such as those that regularly baffle economists, biologists, and climate scientists. There is surely low-hanging fruit here for a system capable of gathering it.
And if the AI Physicist is successful, historians of science may well look back on it as one of the first steps in a new era of evolution for the scientific method beyond Galileo and his human colleagues. There’s no telling where that may take us.
Ref: arxiv.org/abs/1810.10525: Toward an AI Physicist for Unsupervised Learning