
AI Lets Astrophysicists Analyze Images 10 Million Times Faster

August 30, 2017

If you ever find yourself casually analyzing the cosmological phenomenon that is gravitational lensing in a hurry, you're best off using neural networks. That's certainly what researchers from SLAC National Accelerator Laboratory and Stanford University found: their AI-based analysis of these distortions in spacetime is 10 million times faster than the methods they previously used.

Gravitational lensing is the effect that’s observed when a massive object in space, like a cluster of galaxies, bends light that’s emitted from, say, a more distant galaxy. When observed by telescopes, it causes distortions in images—and analysis of those distortions can help astronomers work out the mass of the object that caused the effect. And, perhaps, even shed a little light on the distribution of dark matter in the universe.

The problem: comparing recorded images to simulations of gravitational lenses used to take weeks of human effort. Now, writing in Nature, the team explains that it has built neural networks trained to recognize different lenses by studying half a million computer simulations of their appearance. Turned loose on real images, the AI can work out what kind of lens, and therefore what kind of mass, affected the observed light as accurately as human analysis can, but almost instantly.
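The workflow the team describes, training a network on simulated lens images and then classifying real ones in a single fast forward pass, can be sketched in miniature. The toy below is purely illustrative and is not the paper's method: it swaps their deep convolutional network for a simple logistic-regression classifier, and it stands in "lensed" versus "unlensed" toy images (an Einstein-ring-like ring versus a central blob) for their simulated lens catalog. All function names and parameters here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(kind, size=16):
    """Toy 'telescope image': a ring for a lensed source (kind=1), a blob otherwise."""
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    if kind == 1:
        img = np.exp(-((r - 5.0) ** 2) / 2.0)   # ring at radius ~5 pixels
    else:
        img = np.exp(-(r ** 2) / 8.0)            # compact central blob
    return img + 0.1 * rng.standard_normal((size, size))  # add observation noise

# Build a small training set of simulated images (the team used half a million).
X = np.array([make_image(k).ravel() for k in [0] * 200 + [1] * 200])
y = np.array([0] * 200 + [1] * 200)

# Train a logistic-regression "network" by gradient descent (stand-in for a CNN).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid prediction per image
    grad = p - y                            # gradient of log-loss w.r.t. the logit
    w -= 0.01 * X.T @ grad / len(y)
    b -= 0.01 * grad.mean()

# Once trained, classifying a new image is one near-instant forward pass.
pred = 1.0 / (1.0 + np.exp(-(make_image(1).ravel() @ w + b)))
print(f"P(lensed) = {pred:.2f}")
```

The point of the sketch is the asymmetry the article highlights: training against simulations is the slow, up-front cost, while inference on a new image is a handful of multiply-adds, cheap enough, in principle, for a phone chip.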

“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way,” said Stanford’s Laurence Perreault Levasseur in a statement. “And, in principle, on a cell phone’s computer chip.”

The technology underlying this sort of AI image recognition has become increasingly common in many applications over recent years, from social networks spotting faces to search engines recognizing objects in photographs. But scientists demand the utmost rigor, and while neural networks have been applied to astrophysics problems before, according to a statement by Stanford's Roger Blandford, they have done so "with mixed outcomes."

Now, says Blandford, there’s “considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics.” 
