MIT Technology Review

AI Lets Astrophysicists Analyze Images 10 Million Times Faster

If you ever find yourself casually analyzing the cosmological phenomenon of gravitational lensing in a hurry, you're best off using neural networks. That's certainly what researchers from SLAC National Accelerator Laboratory and Stanford University found: their AI-based analysis of these distortions in spacetime is 10 million times faster than the methods they used to use.

Gravitational lensing is the effect that’s observed when a massive object in space, like a cluster of galaxies, bends light that’s emitted from, say, a more distant galaxy. When observed by telescopes, it causes distortions in images—and analysis of those distortions can help astronomers work out the mass of the object that caused the effect. And, perhaps, even shed a little light on the distribution of dark matter in the universe.


The problem: comparing recorded images to simulations of gravitational lenses used to take weeks of human effort. Now, writing in Nature, the team explains that it has built neural networks trained to recognize different lenses by studying half a million computer simulations of their appearance. Turned loose on real images, the AI can work out what kind of lens (and therefore what type of mass) affected the observed light as accurately as human analysis, but almost instantly.
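The paper's approach amounts to image classification: train a convolutional network on simulated lens images, then apply it to real observations. The sketch below is a hypothetical illustration of that idea, not the authors' actual code; the network size, the number of lens categories (`n_classes=4`), and the 64×64 single-channel input format are all assumptions for the example.

```python
# Hypothetical sketch of a lens classifier: a small convolutional network
# that maps telescope-style images to lens-model categories. All sizes
# and class counts here are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn

class LensClassifier(nn.Module):
    def __init__(self, n_classes=4):  # assumed number of lens categories
        super().__init__()
        # Two conv/pool stages: 64x64 input -> 32x32 -> 16x16 feature maps.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# In training, x would be batches of simulated lens images with known
# lens-model labels; here we just run a random batch of eight 64x64 images.
model = LensClassifier()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)  # one score per lens category, per image
```

Once trained on the simulated library, a forward pass like this is what makes inference near-instant compared with weeks of manual model fitting: classifying an image is a single, cheap evaluation of the network.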


“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way,” said Stanford’s Laurence Perreault Levasseur in a statement. “And, in principle, on a cell phone’s computer chip.”

The technology underlying this sort of AI image recognition has become increasingly common in many applications over recent years, from social networks spotting faces to search engines recognizing objects in photographs. But scientists demand the utmost rigor, and while neural networks have been applied to astrophysics problems before, according to a statement by Stanford's Roger Blandford, they have done so "with mixed outcomes."

Now, says Blandford, there’s “considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics.” 
