To spot fire damage from space, point this AI at satellite imagery
A new deep-learning algorithm analyzes satellite images taken after fires to identify damaged buildings.
How it works: From satellite images taken before and after the California wildfires of 2017, researchers created a data set of buildings that were either damaged or left unscathed.
The results: They fine-tuned a neural network pre-trained on ImageNet and got it to spot damaged buildings with an accuracy of up to 85 percent.
Why it matters: After a disaster, pinpointing the hardest-hit areas could save lives and help with relief efforts. The researchers also released the data set to the public, which could benefit other work that relies on satellite images, like conservation and development aid.
Deep Dive
Artificial intelligence
Large language models can do jaw-dropping things. But nobody knows exactly why.
And that's a problem. Figuring it out is one of the biggest scientific puzzles of our time, and a crucial step toward controlling more powerful future models.
Google DeepMind’s new generative model makes Super Mario–like games from scratch
Genie learns how to control games by watching hours and hours of video. It could help train next-gen robots too.
What’s next for generative video
OpenAI's Sora has raised the bar for AI moviemaking. Here are four things to bear in mind as we wrap our heads around what's coming.