Mapping Disasters in 3-D

Software based on PhotoSynth can model the scene of a disaster.

Imagine a building has collapsed. A team of first responders rushes to the scene and rapidly begins surveying for survivors. The responders draw makeshift maps of the area so that incoming teams know the lay of the land. But newcomers don’t always understand the sketches, and every minute counts when survivors are trapped.

Robin Murphy’s lab at Texas A&M University (TAMU) and colleagues have a solution: deploy several small unmanned aerial vehicles (SUAVs), such as AirRobot quadrotors, to take snapshots of the rubble. The pictures are then uploaded to a software program called RubbleViewer, which quickly builds a three-dimensional map of the area that users can navigate intuitively. Besides being more efficient than drawing by hand, the system is also cheaper and more portable than the alternative: using helicopter-mounted lasers to map the rubble.

Pictures from the SUAVs are combined using the algorithms behind the panorama-making software PhotoSynth. RubbleViewer extracts information from PhotoSynth’s data points to create the three-dimensional map. It’s like putting a blanket over a bunch of needle points, says Maarten van Zomeren, a graduate of the Delft University of Technology in the Netherlands who helped develop the technology under the supervision of assistant professor Stijn Oomes.
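The “blanket over needle points” idea — fitting a continuous surface over the sparse 3-D points that photo-matching produces — can be sketched with off-the-shelf interpolation. This is a minimal illustration of the general technique, not RubbleViewer’s actual code; the random points stand in for whatever a PhotoSynth-style reconstruction would output.

```python
import numpy as np
from scipy.interpolate import griddata

# Stand-in for a sparse point cloud recovered from overlapping
# aerial photos (columns: x, y, z-height). Purely illustrative data.
rng = np.random.default_rng(0)
points = rng.random((200, 3))

# "Blanket over needle points": interpolate a regular height grid
# over the scattered (x, y) locations of the point cloud.
grid_x, grid_y = np.mgrid[0:1:50j, 0:1:50j]
height_map = griddata(points[:, :2], points[:, 2],
                      (grid_x, grid_y), method="linear")

print(height_map.shape)  # (50, 50) grid of interpolated heights
```

A regular height grid like this is what makes the result feel like a navigable topographic map rather than a cloud of disconnected dots.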

While PhotoSynth has been coupled with applications like Live Search Maps and Google Maps to create enhanced, location-embedded panoramic views, RubbleViewer is designed to be fast and simple, taking about half an hour to build a topographic 3-D map of an area. What’s more, viewers can click on a spot to annotate the map (marking the location of possible survivors, for example) or call up the actual photos tied to that spot.
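The click-to-annotate feature described above amounts to attaching notes and source photos to map coordinates, then querying near a clicked point. A rough sketch of how such a structure could look — the class, field names, and filenames here are hypothetical, not taken from RubbleViewer:

```python
from dataclasses import dataclass, field

@dataclass
class MapAnnotation:
    """A note pinned to a map coordinate, with its source photos."""
    x: float
    y: float
    note: str
    photos: list = field(default_factory=list)  # source image filenames

annotations = [
    MapAnnotation(12.5, 7.0, "possible survivor heard",
                  photos=["suav_041.jpg", "suav_042.jpg"]),
]

def annotations_near(x, y, tol=1.0):
    """Return annotations within `tol` map units of a clicked point."""
    return [a for a in annotations
            if abs(a.x - x) <= tol and abs(a.y - y) <= tol]

print(len(annotations_near(12.0, 7.5)))  # 1
```

Keeping the photos attached to each annotation is what lets a responder jump from the abstract 3-D map back to the raw imagery of that spot.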

The program is still a prototype, but expert reviews will come out next month.

Murphy intends to combine RubbleViewer with quadrotors and land-based search-and-rescue robots to create an easy-to-use first-responder system. Murphy is also working with the Sketch Recognition Lab at TAMU to develop electronic tablets for responders to use. “Because it’s an emergency scenario it’s really important that people don’t have to learn anything but can interact with the world in a way that’s natural or intuitive to them,” says lab director and assistant professor Tracy Hammond. “We have to enable as opposed to constraining them with technologies.”

The team plans to carry out the first tests of the combined system by the end of the summer.
