MIT Technology Review

To spot fire damage from space, point this AI at satellite imagery

A new deep-learning algorithm analyzes satellite images taken before and after fires to identify damaged buildings.

How it works: From satellite images taken before and after the California wildfires of 2017, researchers created a data set of buildings that were either damaged or left unscathed.
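A data set like this pairs a pre-fire and post-fire crop of each building with a damaged/unscathed label. Below is a minimal sketch of that pairing step; the function name, the `(row, col)` centroid format, and the patch size are illustrative assumptions, not details from the published data set.

```python
import numpy as np

def extract_building_patches(before, after, centroids, labels, size=64):
    """Crop fixed-size patches around building centroids from pre- and
    post-fire imagery, paired with a damage label (1 = damaged, 0 = unscathed).
    All names here are hypothetical, for illustration only."""
    half = size // 2
    samples = []
    for (r, c), y in zip(centroids, labels):
        # Skip buildings too close to the image border for a full crop.
        if r < half or c < half or r + half > before.shape[0] or c + half > before.shape[1]:
            continue
        pre = before[r - half:r + half, c - half:c + half]
        post = after[r - half:r + half, c - half:c + half]
        # Stack the pre- and post-fire crops so a network can compare them.
        samples.append((np.stack([pre, post]), y))
    return samples
```

In practice the labels would come from damage assessments made after the fires, and the crops from co-registered satellite scenes.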

The results: They fine-tuned a neural network pre-trained on ImageNet and got it to spot damaged buildings with an accuracy of up to 85 percent.

Why it matters: After a disaster, pinpointing the hardest-hit areas could save lives and speed relief efforts. The researchers also released the data set to the public, which could aid other research that relies on satellite imagery, such as conservation and development aid work.
