The laws of physics, among the greatest discoveries of humankind, have emerged over many centuries in a process often influenced by the prominent thinkers of the time. This process has had a profound influence on the evolution of science and gives the impression that some laws could not have been discovered without the knowledge of earlier ages.
Quantum mechanics, for example, is built on classical mechanics using various mathematical ideas that were prominent at the time.
But perhaps there is another way of discovering the laws of physics that does not depend on the understanding we have already gained about the universe.
Today Raban Iten, Tony Metger, and colleagues at ETH Zurich in Switzerland say they have developed just such a method and used it to discover laws of physics in an entirely novel way. And they say it may be possible to use this method to find wholly new formulations of physical laws.
First, some background. The laws of physics are simple representations that can be interrogated to provide information about more complex scenarios. Imagine setting a pendulum in motion and asking where the bob of the pendulum will be at some point in the future. One way to answer this is by measuring the position of the pendulum as it swings. This data can then be used as a kind of look-up table to find the answer. But the laws of motion provide a much easier way of discovering the answer: simply plug values for the various variables into the appropriate equation. That gives the correct answer too. That's why the equation can be thought of as a compressed representation of reality.
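The contrast is easy to see in a few lines of code. Below is a minimal sketch, assuming the standard small-angle pendulum formula and some illustrative values for length and initial angle (none of these numbers come from the paper):

```python
import math

# Small-angle pendulum: the "law" is a one-line formula,
# theta(t) = theta0 * cos(omega * t), with omega = sqrt(g / L).
# The values below are illustrative only.
g = 9.81          # gravitational acceleration, m/s^2
L = 1.0           # pendulum length, m
theta0 = 0.1      # initial angle, rad (small, so the approximation holds)
omega = math.sqrt(g / L)

def position_from_law(t):
    """Closed-form prediction: no stored data needed."""
    return theta0 * math.cos(omega * t)

# The brute-force alternative: a look-up table of measured positions,
# one entry every 10 ms for 10 seconds of swinging.
table = {round(k * 0.01, 2): position_from_law(k * 0.01) for k in range(1000)}

def position_from_table(t):
    """Answer by look-up: needs a thousand stored measurements."""
    return table[round(t, 2)]

# Both give the same answer, but the law is a compressed representation:
# two numbers (theta0 and omega) replace the thousand-entry table.
print(abs(position_from_law(3.14) - position_from_table(3.14)) < 1e-9)
```

Both routes answer the question; the law simply does it with two parameters instead of a thousand data points.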
This immediately suggests how neural networks might find these laws. Given some observations from an experiment—a swinging pendulum, for example—the goal is to find some simpler representation of this data.
The idea from Iten, Metger, and co is to feed this data into the machine so it learns how to make an accurate prediction of the position. Once the machine has learned this, it can then predict the position from any initial set of conditions. In other words, it has learned the relevant law of physics.
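The paper describes this setup as an encoder that compresses the observations into a small set of latent variables, and a decoder that combines those variables with a "question" (such as a future time) to produce an answer. Here is a minimal, untrained sketch of that architecture in plain NumPy; the layer sizes, random weights, and variable names are all assumptions for illustration, not the authors' actual code, and the training loop is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weights and biases for one untrained dense layer."""
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

def dense(x, wb):
    w, b = wb
    return np.tanh(x @ w + b)

# Encoder: compresses 50 observed pendulum positions into 2 latent
# variables -- the bottleneck that forces a compact representation.
enc1, enc2 = layer(50, 100), layer(100, 2)

# Decoder: takes the latent variables plus a "question" (a future time)
# and produces an answer (the predicted position at that time).
dec1, dec2 = layer(3, 100), layer(100, 1)

observations = rng.normal(size=50)   # stand-in for measured positions
question = np.array([2.5])           # "where is the bob at t = 2.5 s?"

latent = dense(dense(observations, enc1), enc2)
answer = dense(dense(np.concatenate([latent, question]), dec1), dec2)

print(latent.shape, answer.shape)    # (2,) (1,)
```

Training would adjust the weights so the answer matches the measured position; the narrow two-neuron bottleneck is what makes the learned representation interpretable.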
To find out whether this works, the researchers feed data from a swinging-pendulum experiment into a neural network they call SciNet. They go on to repeat this for experiments that include the collision of two balls, the results of a quantum measurement on a qubit, and even the positions of the planets and sun in the night sky.
The results make for interesting reading. Using the pendulum data, SciNet is able to predict the future frequency of the pendulum with an error of less than 2 percent.
What’s more, Iten, Metger, and co are able to interrogate SciNet to see how it arrives at the answer. This doesn’t reveal the precise equation, unfortunately, but it does show that the network uses only two variables to come up with the solution. That’s exactly the same number as in the relevant laws of motion.
But that isn’t all. SciNet also provides accurate predictions of the angular momentum of two balls after they have collided. That’s only possible using the conservation of angular momentum, a version of which SciNet appears to have discovered. It also predicts the measurement probabilities when a qubit is interrogated, clearly using some representation of the quantum world.
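To see what such a conservation law buys you, consider the textbook one-dimensional elastic collision: conservation of momentum and kinetic energy fix the outgoing velocities completely. This is the kind of relationship a network must implicitly encode to predict post-collision motion (the masses and velocities below are illustrative, not from the paper's experiments):

```python
# Perfectly elastic 1-D collision: the two conservation laws determine
# the outgoing velocities from the incoming ones, with no simulation.

def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities from conservation of momentum and energy."""
    v1_out = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_out = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_out, v2_out

m1, v1 = 2.0, 3.0    # kg, m/s (illustrative values)
m2, v2 = 1.0, -1.0

w1, w2 = elastic_collision(m1, v1, m2, v2)

# Total momentum is the same before and after the collision.
print(abs((m1 * v1 + m2 * v2) - (m1 * w1 + m2 * w2)) < 1e-9)
```

A network that predicts collisions accurately has, in effect, learned the constraint these formulas express.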
Perhaps most impressive is that the network learns to predict the future position of Mars and the sun using the initial position as seen from Earth. That’s only possible using a heliocentric model of the solar system, an idea that humans took centuries to hit on.
And indeed, an interrogation of SciNet suggests it has learned just such a heliocentric representation. “SciNet stores the angles of the Earth and Mars as seen from the Sun in the two latent neurons—that is, it recovers the heliocentric model of the solar system,” say the researchers.
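Why are heliocentric angles the natural representation? Because the angle of Mars as seen from Earth is a messy function of two clean sun-centered angles. Here is a sketch of that forward map, assuming circular, coplanar orbits (a deliberate simplification; the radii are the familiar 1 AU and 1.52 AU, and the function name is our own):

```python
import math

# Assume circular, coplanar orbits -- an illustrative simplification.
R_EARTH, R_MARS = 1.0, 1.52   # orbital radii in astronomical units

def mars_angle_seen_from_earth(phi_earth, phi_mars):
    """Map the two heliocentric angles -- the simple representation
    SciNet reportedly stores -- to the geocentric angle we observe."""
    ex, ey = R_EARTH * math.cos(phi_earth), R_EARTH * math.sin(phi_earth)
    mx, my = R_MARS * math.cos(phi_mars), R_MARS * math.sin(phi_mars)
    return math.atan2(my - ey, mx - ex)

# The observed, Earth-based angle tangles the two clean latent angles.
print(mars_angle_seen_from_earth(0.0, math.pi / 3))
```

SciNet, given only the tangled Earth-based observations, settled on the two sun-centered angles as its internal variables, which is precisely the Copernican move.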
That’s impressive work, but it needs to be placed in perspective. This may be the first demonstration that an artificial neural network can compress data in a way that reveals aspects of the laws of physics. But it is not the first time that a computational approach has derived these laws.
A few years ago, computer scientists at Cornell University used a genetic algorithm that exploits the process of evolution to derive a number of laws of physics from experimental data. These included conservation laws for energy and momentum. The system even spat out the equation itself, not just a hint about how it was calculating, as SciNet does.
Clearly, evolutionary algorithms currently have the upper hand when it comes to discovering the laws of physics from raw experimental data. (Given that evolution is the process that produced biological neural networks in the first place, it is arguable that it will forever be the more powerful approach.)
There is an interesting corollary to all this. It has taken humanity centuries to discover the laws of physics, often in ways that have depended crucially on previously discovered laws. For example, quantum mechanics is based on classical mechanics. Could there be better laws that can be derived from experimental data without any prior knowledge of physics?
If so, this machine-learning approach or the one based on evolution should be exactly what’s needed to find them.
Ref: arxiv.org/abs/1807.10300: Discovering Physical Concepts with Neural Networks