Sometimes, there’s no such thing as too much visual information. An astronomer, for instance, parsing images of distant galaxies, will never complain about a picture that is too high-resolution. Neither will a microbiologist, who may need to zoom into the microscopic universe to learn more about what makes a cell work, or fail. We have the technology to take massively high-resolution images today (so-called gigapixel images, containing a billion or more pixels), but what we’ve lacked, until now, is a suitable and intuitive way to navigate them.
Samuel Cox, a master’s student in digital imaging at the University of Lincoln, offers what may be a solution, reports The Engineer. Cox isn’t an astronomer or a biomedical student, but the system he devised might someday apply to those fields. Cox, an artist first and foremost, decided he wanted a more interactive way to experience photography. He explored London with a 16-megapixel DSLR camera. Using a robotic tripod, he would take some 300 photographs per scene in a precise grid, overlapping each frame with its neighbors by about 30 percent. The whole process took about 45 minutes per scene. Cox then used stitching software to merge those smaller images into one massive composite.
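The grid-capture arithmetic described above is easy to sketch. The function below is an illustration only, not Cox’s actual rig software; the scene and frame dimensions are hypothetical, and only the roughly 30 percent overlap figure comes from the article. The idea is that each new shot advances by only 70 percent of a frame, so neighboring frames share enough content for stitching software to align them.

```python
import math

def grid_plan(scene_w, scene_h, frame_w, frame_h, overlap=0.30):
    """Estimate the shot grid needed to cover a scene with overlapping frames.

    Each shot contributes only (1 - overlap) of a frame's width/height in
    new coverage; the rest is shared with its neighbor for stitching.
    Returns (columns, rows, total_shots). All dimensions are in the same
    arbitrary units (e.g. degrees of field of view).
    """
    step_x = frame_w * (1 - overlap)  # useful new coverage per shot, horizontally
    step_y = frame_h * (1 - overlap)  # useful new coverage per shot, vertically
    cols = max(1, math.ceil((scene_w - frame_w) / step_x) + 1)
    rows = max(1, math.ceil((scene_h - frame_h) / step_y) + 1)
    return cols, rows, cols * rows
```

With a hypothetical 100x100 scene and 10x10 frames at 30 percent overlap, `grid_plan(100, 100, 10, 10)` yields a 14x14 grid of 196 shots, which shows how quickly overlap inflates the shot count toward the hundreds of photographs Cox captured per scene.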
Gigapixel panoramas like these are nothing new, of course; what is somewhat novel, however, is the means of experiencing them that Cox then created. He built a Microsoft Kinect hack that lets viewers swipe and zoom their way through the richly detailed images. “I used a Kinect for the depth tracking,” Cox told PCWorld. “No other webcam device really offers that technical ability.”
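To see why depth tracking matters for this kind of interface, consider one plausible mapping: hand distance from the sensor drives the zoom level, so pushing a hand toward the screen dives deeper into the image. The sketch below is a hypothetical illustration of that idea, not Cox’s code; the sensor range and zoom limits are assumptions chosen for the example.

```python
def depth_to_zoom(depth_mm, near=500.0, far=2000.0,
                  min_zoom=1.0, max_zoom=50.0):
    """Map a hand-depth reading (millimetres) to a zoom factor.

    A closer hand means a higher zoom. Readings outside the assumed
    working range [near, far] are clamped, so the mapping is always
    within [min_zoom, max_zoom].
    """
    depth_mm = max(near, min(far, depth_mm))
    t = (far - depth_mm) / (far - near)  # 0.0 at arm's length, 1.0 up close
    return min_zoom + t * (max_zoom - min_zoom)
```

A plain webcam gives only the hand’s x/y position, enough for swiping; the depth axis the Kinect adds is what makes a natural zoom gesture like this possible, which is the “technical ability” Cox alludes to.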