If you’re ever casually analyzing the cosmological phenomenon of gravitational lensing in a hurry, you’re best off using neural networks. That’s certainly what researchers from SLAC National Accelerator Laboratory and Stanford University found: their AI-based analysis of these distortions in spacetime is 10 million times faster than the methods they used before.
Gravitational lensing is the effect that’s observed when a massive object in space, like a cluster of galaxies, bends light that’s emitted from, say, a more distant galaxy. When observed by telescopes, it causes distortions in images—and analysis of those distortions can help astronomers work out the mass of the object that caused the effect. And, perhaps, even shed a little light on the distribution of dark matter in the universe.
The problem: comparing recorded images to simulations of gravitational lenses used to take weeks of human effort. Now, writing in Nature, the team explains that it built neural networks trained to recognize different lenses by studying half a million computer simulations of their appearance. Turned loose on real images, the AI can work out what kind of lens, and therefore what kind of mass, affected the observed light as accurately as human analysis can, but almost instantly.
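The article doesn’t include any code, but the general recipe it describes, training a classifier on a large set of simulated lens images and then applying it to real observations, can be sketched in miniature. The snippet below is a hypothetical toy version: the “simulated lenses” are just noisy rings of different radii standing in for different lens configurations, and a tiny one-hidden-layer network (far simpler than the deep convolutional networks the researchers used) learns to tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lens(cls, n, size=16):
    """Hypothetical stand-in for a lens simulation: a noisy ring whose
    radius depends on the 'lens class'. Real work simulates lensed
    galaxy images from physical lens models."""
    yy, xx = np.mgrid[:size, :size]
    r = np.hypot(xx - size / 2, yy - size / 2)
    radius = 3 + 2 * cls                      # class 0,1,2 -> radius 3,5,7
    ring = np.exp(-((r - radius) ** 2) / 2.0)
    imgs = ring[None] + 0.1 * rng.standard_normal((n, size, size))
    return imgs.reshape(n, -1)

n_cls, n_per = 3, 200
X = np.vstack([simulate_lens(c, n_per) for c in range(n_cls)])
y = np.repeat(np.arange(n_cls), n_per)
onehot = np.eye(n_cls)[y]

# Tiny one-hidden-layer network trained by full-batch gradient descent.
W1 = 0.01 * rng.standard_normal((X.shape[1], 32)); b1 = np.zeros(32)
W2 = 0.01 * rng.standard_normal((32, n_cls));      b2 = np.zeros(n_cls)
lr = 0.1
for _ in range(300):
    H = np.maximum(X @ W1 + b1, 0)            # ReLU hidden layer
    logits = H @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)              # softmax probabilities
    g = (p - onehot) / len(X)                 # cross-entropy gradient
    gh = (g @ W2.T) * (H > 0)                 # backprop through ReLU
    W2 -= lr * H.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

acc = (p.argmax(1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Once trained, classifying a new image is a single forward pass through the network, which is why inference is effectively instantaneous compared with fitting each observed image against simulations by hand.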