
Mapping Disasters in 3-D

Software based on PhotoSynth can model the scene of a disaster.

Imagine a building has collapsed. A team of first responders rushes to the scene and rapidly begins surveying the area for survivors. They draw makeshift maps of the area so that incoming teams know what's what. But newcomers don't always understand these improvised maps, and every minute counts when searching for survivors.

Robin Murphy of Texas A&M University (TAMU) and colleagues have a solution: deploy several small unmanned aerial vehicles (SUAVs), such as AirRobot quadrotors, to take snapshots of the rubble. The pictures are then uploaded to a software program called RubbleViewer, which quickly builds a three-dimensional map of the area that users can navigate intuitively. Besides being more efficient than drawing maps by hand, the system is also cheaper and more portable than the alternative: using helicopter-mounted lasers to map the rubble.

Pictures from the SUAVs are combined using the algorithms behind the panorama-making software PhotoSynth. RubbleViewer extracts information from PhotoSynth's data points to create the three-dimensional map. It's like putting a blanket over a bunch of needle points, says Maarten van Zomeren, a graduate student at the Delft University of Technology in the Netherlands who helped develop the technology under the supervision of assistant professor Stijn Oomes.
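The step van Zomeren describes can be pictured with a short sketch (illustrative only, not RubbleViewer's actual code): take the sparse 3-D points recovered from the overlapping photos, triangulate their ground-plane positions, and lift each triangle to the measured heights, draping a surface over the points like the blanket in his metaphor. The random points below stand in for real structure-from-motion output, and the library calls are ordinary NumPy/SciPy/Matplotlib routines.

import numpy as np
from scipy.spatial import Delaunay
import matplotlib.pyplot as plt

# Stand-in point cloud: in practice these (x, y, z) points would come from
# the PhotoSynth-style reconstruction of the SUAV photos.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(500, 2))                # ground-plane positions
z = np.sin(xy[:, 0]) + 0.3 * rng.normal(size=500)     # rubble heights

# Triangulate in the ground plane, then lift each triangle to its heights:
# the "blanket over a bunch of needle points".
tri = Delaunay(xy)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_trisurf(xy[:, 0], xy[:, 1], z, triangles=tri.simplices, cmap="terrain")
ax.set_title("Topographic surface draped over a point cloud")
plt.show()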

While PhotoSynth has been coupled with applications like Live Search Maps and Google Maps to create enhanced, location-embedded panoramic views, RubbleViewer is designed for speed and simplicity: it takes about half an hour to produce a topographic 3-D map of an area. What's more, users can click on a spot to annotate the map (marking the location of possible survivors, for example) or to call up the actual photos tied to that spot.
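Conceptually, each annotation just ties a clicked map location to a note and to the aerial photos that cover it. A hypothetical sketch of that bookkeeping (the class and field names are invented for illustration, not RubbleViewer's API):

from dataclasses import dataclass, field
from typing import List

@dataclass
class MapAnnotation:
    # A clicked location on the 3-D map, in map coordinates.
    x: float
    y: float
    z: float
    note: str                          # e.g. "possible survivor heard here"
    photo_files: List[str] = field(default_factory=list)  # SUAV photos showing this spot

# Example: a responder marks a spot and attaches two aerial photos to it.
annotations = [
    MapAnnotation(x=12.4, y=3.1, z=0.8,
                  note="voices reported under concrete slab",
                  photo_files=["suav_0142.jpg", "suav_0143.jpg"]),
]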

The program is still a prototype, but expert reviews will come out next month.

Murphy intends to combine RubbleViewer with quadrotors and land-based search-and-rescue robots to create an easy-to-use system for first responders. Murphy is also working with the Sketch Recognition Lab at TAMU to develop electronic tablets for responders to use. “Because it’s an emergency scenario it’s really important that people don’t have to learn anything but can interact with the world in a way that’s natural or intuitive to them,” says lab director and assistant professor Tracy Hammond. “We have to enable as opposed to constraining them with technologies.”

The team plans to carry out the first tests of the combined system by the end of the summer.
