
Mapping Disasters in 3-D

Software based on PhotoSynth can model the scene of a disaster.

Imagine a building has collapsed. A team of first responders rushes to the scene and rapidly begins surveying the area for survivors. They draw makeshift maps of the area so that incoming teams know the lay of the land. But newcomers don't always understand these hand-drawn depictions, and every minute is crucial to saving survivors.

Robin Murphy of Texas A&M University (TAMU) and her colleagues have a solution: deploy several small unmanned aerial vehicles (SUAVs), such as AirRobot quadrotors, to take snapshots of the rubble. The pictures are then uploaded to a software program called RubbleViewer, which quickly builds a three-dimensional map of the area that users can navigate intuitively. Besides being more efficient than drawing maps by hand, the system is also cheaper and more portable than the alternative of using helicopter-mounted lasers to map the rubble.


Pictures from the SUAVs are combined using the algorithms behind the panorama-making software PhotoSynth. RubbleViewer then extracts information from PhotoSynth's data points to create the three-dimensional map. "It's like putting a blanket over a bunch of needle points," says Maarten van Zomeren, a graduate student at the Delft University of Technology in the Netherlands who helped develop the technology under supervisor and assistant professor Stijn Oomes.


While PhotoSynth has been coupled to applications like Live Search Maps and Google Maps to create enhanced, location-embedded panoramic views, RubbleViewer is designed for speed and simplicity: it takes about half an hour to build a topographic 3-D map of an area. What's more, users can click on a spot to annotate the map (marking the location of possible survivors, for example) or call up the actual photos tied to that spot.
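The article only says that users can attach notes to a spot and pull up the photos covering it, so the structure below is a hypothetical sketch of such an annotation record; the field names and file names are illustrative, not RubbleViewer's actual format.

```python
# Hypothetical annotation record: a note pinned to a map position, plus the
# SUAV photos that show that spot.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    x: float                        # map coordinates of the clicked spot, metres
    y: float
    note: str                       # e.g. "possible survivor under slab"
    photos: list = field(default_factory=list)

annotations = [
    Annotation(10.0, 20.0, "possible survivor", photos=["suav_0042.jpg", "suav_0043.jpg"]),
]

# Incoming teams could then list every flagged location with its supporting imagery.
for a in annotations:
    print(f"({a.x}, {a.y}): {a.note} -> {a.photos}")
```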

The program is still a prototype; expert reviews of it are expected next month.

Murphy intends to combine RubbleViewer with quadrotors and land-based search-and-rescue robots to create an easy-to-use system for first responders. She is also working with the Sketch Recognition Lab at TAMU to develop electronic tablets for responders to use. "Because it's an emergency scenario, it's really important that people don't have to learn anything but can interact with the world in a way that's natural or intuitive to them," says lab director and assistant professor Tracy Hammond. "We have to enable as opposed to constraining them with technologies."

The team plans to carry out the first tests of the combined system by the end of the summer.
