Computing

Ultrasharp 3-D Maps

A missile-targeting technology is adapted to process aerial photos into 3-D city maps sharper than Google Earth’s.

Technology originally developed to help missiles home in on targets has been adapted to create 3-D color models of cityscapes that capture the shapes of buildings to a resolution of 15 centimeters or less. Image-processing software distills the models from aerial photos captured by custom packages of multiple cameras.

Pixel perfect: Using aerial photos, image-processing software created this 3-D model of San Francisco, accurate to 15 centimeters.

The developer is C3 Technologies, a spinoff from Swedish aerospace company Saab. C3 is building a store of eye-popping 3-D models of major cities to license to others for mapping and other applications. The first customer to go public with an application is Nokia, which used the models for 20 U.S. and European cities for an upgraded version of its Ovi online and mobile mapping service released last week. “It’s the start of the flying season in North America, and we’re going to be very active this year,” says Paul Smith, C3’s chief strategy officer.

Although Google Earth shows photorealistic 3-D buildings for many cities, many of those models are assembled by hand, often by volunteers, using a combination of photos and other data in Google’s SketchUp 3-D drawing program.

C3’s models are generated with little human intervention. First, a plane equipped with a custom-designed package of professional-grade digital single-lens reflex cameras takes aerial photos. Four cameras look out along the main compass points, at oblique angles to the ground, to image buildings from the side as well as above. Additional cameras (the exact number is secret) capture overlapping images from their own carefully determined angles, producing a final set that contains all the information needed for a full 3-D rendering of a city’s buildings. Machine-vision software developed by C3 compares pairs of overlapping images to gauge depth, just as our brains use stereo vision, to produce a richly detailed 3-D model.
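The core geometric idea behind comparing overlapping image pairs can be sketched in a few lines. C3's actual pipeline is proprietary, so the function name and the numbers below are illustrative assumptions: in basic stereo photogrammetry, depth follows from the disparity between matched points in two overlapping exposures.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic stereo relation: depth = f * B / d.
    The smaller a point's apparent shift (disparity) between two
    overlapping images, the farther it is from the cameras."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_length_px * baseline_m / disparity_px

# Hypothetical example: two aerial exposures taken 100 m apart
# with an 8000-pixel focal length.
d = depth_from_disparity(np.array([400.0, 800.0, 1600.0]), 8000.0, 100.0)
# Larger disparities correspond to points closer to the camera.
```

Matching points between images at scale is the hard part, which is where C3's machine-vision software comes in; the depth relation itself is standard geometry.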

“Unlike Google or Bing, all of our maps are 360° explorable,” says Smith, “and everything, every building, every tree, every landmark, from the city center to the suburbs, is captured in 3-D—not just a few select buildings.”

C3’s approach has benefits relative to more established methods of modeling cityscapes in 3-D, says Avideh Zakhor, a UC Berkeley professor whose research group developed technology licensed by Google for its Google Earth and Street View projects. Conventionally, a city’s 3-D geometry is captured first with an aerial laser scanner—a technique called LIDAR—and then software adds detail.

“The advantage of C3’s image-only scheme is that aerial LIDAR is significantly more expensive than photography, because you need powerful laser scanners,” says Zakhor. “In theory, you can cover more area for the same cost.” However, the LIDAR approach still dominates because it is more accurate, she says. “Using photos alone, you always need to manually correct errors that it makes,” says Zakhor. “The 64-million-dollar question is how much manual correction C3 needs to do.”

Smith says that C3’s technique is about “98 percent” automated, in terms of the time it takes to produce a model from a set of photos. “Our computer vision software is good enough that there is only some minor cleanup,” he says. “When your goal is to map the entire world, automation is essential to getting this done quickly and with less cost.” He claims that C3 can generate richer models than its competitors, faster.

Images of cities captured by C3 do appear richer than those in Google Earth, and Smith says the models will make mapping apps more functional as well as better-looking. “Behind every pixel is a depth map, so this is not just a dumb image of the city,” says Smith. On a C3 map, it is possible to mark an object’s exact location in space, whether it’s a restaurant entrance or a 45th-story window.
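Smith's "depth map behind every pixel" remark can be illustrated with standard back-projection: given a pixel, its depth, and the camera's intrinsic parameters, the pixel maps to a unique 3-D point. This is a minimal sketch using an idealized pinhole camera model; C3's actual data format and camera calibration are not public.

```python
import numpy as np

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into a 3-D point in
    the camera frame, assuming a pinhole camera with focal lengths
    (fx, fy) and principal point (cx, cy), all in pixels."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical numbers: a pixel 10 px right of the image center,
# at 150 m depth, with an 8000-px focal length.
p = pixel_to_3d(u=2010.0, v=1500.0, depth_m=150.0,
                fx=8000.0, fy=8000.0, cx=2000.0, cy=1500.0)
```

Combined with the camera's own position and orientation, such points can be placed in world coordinates, which is what lets a map pin a restaurant entrance or a specific window rather than just a street address.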

C3 has also developed a version of its camera package to gather ground-level 3-D imagery and data from a car, boat, or Segway. This could enable the models to compete with Google’s Street View, which captures only images. C3 is working on taking the technology indoors to map buildings’ interiors and connect them with its outdoor models.

Smith says that augmented-reality apps allowing a phone or tablet to blend the virtual and real worlds are another potential use. “We can help pin down real-world imagery very accurately to solve the positioning problem,” he says. However, the accuracy of cell phones’ positioning systems will first have to catch up with that of C3’s maps. Cell phones using GPS can typically locate themselves to within tens of meters, not tens of centimeters.
