AI Lets Astrophysicists Analyze Images 10 Million Times Faster

August 30, 2017

If you’re ever casually analyzing the cosmological phenomenon that is gravitational lensing in a hurry, you’re best off using neural networks. That’s certainly what researchers from SLAC National Accelerator Laboratory and Stanford University found: their AI-based analysis of distortions in spacetime is 10 million times faster than the methods they previously used.

Gravitational lensing is the effect that’s observed when a massive object in space, like a cluster of galaxies, bends light that’s emitted from, say, a more distant galaxy. When observed by telescopes, it causes distortions in images—and analysis of those distortions can help astronomers work out the mass of the object that caused the effect. And, perhaps, even shed a little light on the distribution of dark matter in the universe.

The problem: comparing recorded images to simulations of gravitational lenses used to take weeks of human effort. Now, writing in Nature, the team explains that it has built neural networks trained to recognize different lenses by studying half a million computer simulations of their appearance. Turned on real images, the AI can work out what kind of lens, and therefore what type of mass, affected the observed light as accurately as human analysis, but almost instantly.
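The train-on-simulations, classify-real-images pattern the team describes can be sketched in miniature. The toy below is purely illustrative (it is not the paper's networks, data, or lens models): it fakes two "lens classes" as noisy ring images, fits a minimal one-layer classifier by gradient descent, then classifies a fresh simulated observation in a fraction of a second.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for lens simulations: two lens "classes" rendered
# as 8x8 images with bright rings of different radii, plus pixel noise.
def simulate_lens(cls, n=200, size=8):
    yy, xx = np.mgrid[:size, :size]
    r = np.hypot(yy - size / 2 + 0.5, xx - size / 2 + 0.5)
    ring_radius = 1.5 if cls == 0 else 3.0
    base = np.exp(-(r - ring_radius) ** 2)
    imgs = base + 0.3 * rng.standard_normal((n, size, size))
    return imgs.reshape(n, -1)

# "Half a million simulations" in the real study; 400 here.
X = np.vstack([simulate_lens(0), simulate_lens(1)])
y = np.array([0] * 200 + [1] * 200)

# Minimal logistic-regression "network" trained by gradient descent
# (the actual work uses deep convolutional networks).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted class probabilities
    g = p - y                                 # gradient of the logistic loss
    w -= 0.01 * X.T @ g / len(y)
    b -= 0.01 * g.mean()

# Inference on a new simulated "observation" is a single matrix product:
# effectively instant, which is the speedup the article describes.
obs = simulate_lens(1, n=1)
pred = int((1.0 / (1.0 + np.exp(-(obs @ w + b))) > 0.5)[0])
print(pred)
```

For this seed the classifier recovers the correct class; the point of the sketch is only that all the expensive work happens once, during training on simulations, while classifying a new image costs a handful of arithmetic operations.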

“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way,” said Stanford’s Laurence Perreault Levasseur in a statement. “And, in principle, on a cell phone’s computer chip.”

The technology underlying this sort of AI image recognition has become increasingly common in many applications over recent years, from social networks spotting faces to search engines recognizing objects in photographs. But scientists demand utmost rigor, and while neural networks have been applied to astrophysics problems before, according to a statement by Stanford’s Roger Blandford, they have done so “with mixed outcomes.”

Now, says Blandford, there’s “considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics.”
