
AI Lets Astrophysicists Analyze Images 10 Million Times Faster

August 30, 2017

If you ever find yourself casually analyzing the cosmological phenomenon of gravitational lensing in a hurry, you’re best off using neural networks. That’s certainly what researchers from SLAC National Accelerator Laboratory and Stanford University found: their AI-based analysis of these distortions in spacetime is 10 million times faster than the methods they previously used.

Gravitational lensing is the effect that’s observed when a massive object in space, like a cluster of galaxies, bends light that’s emitted from, say, a more distant galaxy. When observed by telescopes, it causes distortions in images—and analysis of those distortions can help astronomers work out the mass of the object that caused the effect. And, perhaps, even shed a little light on the distribution of dark matter in the universe.

The problem: comparing recorded images to simulations of gravitational lenses used to take weeks of human effort. Now, writing in Nature, the team explains that it has built neural networks trained to recognize different lenses by studying half a million computer simulations of their appearance. Turned loose on real images, the AI can work out what kind of lens—and therefore what type of mass—affected the observed light as accurately as human analysis, but almost instantly.
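The simulate-then-train workflow the team describes can be sketched in miniature. The toy below is not the authors’ actual network: it stands in a tiny logistic-regression classifier for their convolutional nets, uses crude synthetic 16×16 “images” (a ring-like arc for a lensed source, a central blob for an unlensed one) in place of their half a million physical simulations, and the `simulate` and `predict` helpers are invented for illustration. What it preserves is the key point: once training is done offline, classifying a new image is a single near-instant pass.

```python
# Toy sketch of the simulate-then-train workflow (illustrative only;
# NOT the network from the Nature paper).
import numpy as np

rng = np.random.default_rng(0)

def simulate(lensed, n=16):
    """Return a crude n x n 'image': a ring-like arc if lensed, else a blob."""
    y, x = np.mgrid[0:n, 0:n] - n / 2
    r = np.hypot(x, y)
    if lensed:
        img = np.exp(-((r - 4.0) ** 2) / 2.0)  # smeared arc (lensed source)
    else:
        img = np.exp(-(r ** 2) / 8.0)          # central blob (unlensed source)
    return img + 0.05 * rng.normal(size=(n, n))  # add observational noise

# Build a labeled training set from simulations (the paper used ~500,000).
X = np.array([simulate(i % 2 == 1).ravel() for i in range(400)])
y = np.array([i % 2 for i in range(400)], dtype=float)

# Train offline by gradient descent on the logistic loss.
w = np.zeros(X.shape[1])
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - y) / len(y)

def predict(img):
    """Classify one image: a single dot product -- effectively instantaneous."""
    return int(img.ravel() @ w > 0)

acc = np.mean([predict(simulate(i % 2 == 1)) == i % 2 for i in range(100)])
print(round(acc, 2))
```

The design point this illustrates is the asymmetry the researchers exploit: all the expensive work (simulation and training) happens once, up front, while each subsequent classification is cheap enough to run, as Perreault Levasseur notes below, on a cell phone’s chip.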

“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way,” said Stanford’s Laurence Perreault Levasseur in a statement. “And, in principle, on a cell phone’s computer chip.”

The technology underlying this sort of AI image recognition has become increasingly common in many applications over recent years, from social networks spotting faces to search engines recognizing objects in photographs. But scientists demand utmost rigor, and while neural networks have been applied to astrophysics problems before, according to a statement by Stanford’s Roger Blandford, they have done so “with mixed outcomes.”

Now, says Blandford, there’s “considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics.” 
