
Mapping Disasters in 3-D

Software based on PhotoSynth can model the scene of a disaster.

Imagine a building has collapsed. A team of first responders rushes to the scene and rapidly begins surveying the area for survivors. They draw makeshift maps so that incoming teams know the lay of the land. But newcomers don’t always understand the sketches, and every minute counts when saving survivors.

Robin Murphy of Texas A&M University (TAMU) and colleagues have a solution: deploy several small unmanned aerial vehicles (SUAVs), such as AirRobot quadrotors, to take snapshots of the rubble. The pictures are then uploaded to a software program called RubbleViewer, which quickly builds a three-dimensional map of the area that users can intuitively navigate. More efficient than drawing by hand, this system is also cheaper and more portable than the alternative of using helicopter-mounted lasers to map the rubble.

Pictures from the SUAVs are combined using the algorithms behind the panorama-making software PhotoSynth. RubbleViewer extracts information from PhotoSynth’s data points to create the three-dimensional map. It’s like putting a blanket over a bunch of needle points, says Maarten van Zomeren, a graduate of the Delft University of Technology in the Netherlands who helped develop the technology under supervisor and assistant professor Stijn Oomes.
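The “blanket over needle points” idea is essentially surface reconstruction over a scattered point cloud. A minimal sketch of one common approach (this is an illustration, not RubbleViewer’s actual code): triangulate the points in the ground plane with a Delaunay triangulation, then lift each triangle by its corner heights to form a continuous draped surface. The point data here is synthetic, standing in for what a PhotoSynth-style pipeline would produce.

```python
# Hypothetical sketch of draping a surface over a 3-D point cloud,
# in the spirit of "putting a blanket over a bunch of needle points".
# Not RubbleViewer's actual algorithm.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Stand-in for photo-derived data points: scattered (x, y) positions
# on the ground plane, each with an elevation (z) value.
points = rng.uniform(0.0, 10.0, size=(200, 2))       # ground-plane positions
heights = np.sin(points[:, 0]) + 0.1 * points[:, 1]  # synthetic elevations

# Triangulate in the ground plane; each triangle, lifted by the heights
# at its three corners, becomes one patch of the draped surface.
tri = Delaunay(points)

# Each row of tri.simplices indexes the three points of one triangle;
# together the lifted triangles form the topographic "blanket".
surface_patches = [
    list(zip(points[s], heights[s])) for s in tri.simplices
]
print(f"{len(surface_patches)} triangular patches cover {len(points)} points")
```

A renderer would then draw those lifted triangles as the 3-D terrain the user navigates; real systems also filter outlier points before triangulating.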

While PhotoSynth has been coupled with applications like Live Search Maps and Google Maps to create enhanced, location-embedded panoramic views, RubbleViewer is designed to be fast and easy to use, taking about half an hour to create a topographic 3-D map of an area. What’s more, viewers can click on a spot to annotate the map (showing the location of possible survivors, for example) or call up the real photos tied to that spot.

The program is still a prototype, but expert reviews will come out next month.

Murphy intends to combine RubbleViewer with quadrotors and land-based search-and-rescue robots to create an easy-to-use system for first responders. Murphy is also working with the Sketch Recognition Lab at TAMU to develop electronic tablets for responders to use. “Because it’s an emergency scenario, it’s really important that people don’t have to learn anything but can interact with the world in a way that’s natural or intuitive to them,” says lab director and assistant professor Tracy Hammond. “We have to enable as opposed to constraining them with technologies.”

The team plans to carry out the first tests of the combined system by the end of the summer.
