Holograms in Motion
The newest 3-D video displays herald an interactive future for imaging.
A half-meter-long protein floats in midair, several centimeters in front of a monitor. It looks like an oversize curled ribbon from a birthday package. As three molecular biologists maneuver around the image, studying the complex molecule from different angles, it begins to fold, slowly twisting and interlocking into a tangled knot. Its shape is a clue to the function it performs in the human body: some proteins produce chemical reactions or behave like a kind of scaffolding for cells, while others help with cell division. Creation of a drug that encourages or blocks a protein’s action-say, preventing cancerous cells from dividing-could lead to more effective treatments. One of the researchers uses a stylus to prod the protein at several points. As she does so, the protein refolds itself, revealing a location that could be targeted with a drug to inhibit the protein’s function.
This kind of interactive science is on the way, and it will be made possible by a new generation of 3-D video displays. The technology enlists the power of holograms-or reasonable facsimiles thereof-to dish up startlingly realistic images that appear to pop out of the screen. Imagine the 3-D scenes produced by the venerable View-Master toy cranked up to “11” on the reality dial. But the new 3-D video images won’t require special viewing devices. Users won’t have to don the headgear or eyewear that tends to be distracting and can cause eyestrain, as they do with current so-called 3-D displays.
Instead, three-dimensional holographic video images will be generated by a computer rather than being fixed in a static medium; they will be shown in full-motion color and, with input from a user, changed on the fly. What's more, viewers who move around a holographic video image will be able to see it from every side, a property important to realism and one that many conventional eyeglass-based systems cannot replicate.
The mainstream of doctors, scientists, researchers, and new-product developers who already rely on high-end computer displays to visualize their work will see dramatic differences in this new technology. Currently their work is constrained by the flat, two-dimensional images of conventional displays. No matter how cleverly the screens are dressed up, they can’t convey all the nuances, intricacies, and immediacy of real objects in the 3-D world. Because the new video holograms produce fully 3-D images that float in space near the viewing screen, they can be examined from different angles by multiple viewers. Geophysicists examining high-resolution images of rock formations will be able to predict the location of hidden oil deposits with greater accuracy. Industrial designers will be able to modify a sports car’s body using the tip of a stylus, instantly establishing the change’s effect on overall design. Military commanders will be able to visualize the best battlefield scenario. Surgeons will be better able to determine the safest approach for removing a brain tumor without ever wielding a knife. “Someday we’re going to wonder how we used to put up with 2-D images,” says Stephen Benton, who heads the Spatial Imaging Group at the MIT Media Lab.
The group is one of two pioneering research teams leading the charge to perfect and commercialize the new generation of 3-D displays. Benton, a renowned founding member of the lab, is the inventor of the rainbow holographic images that appear on many credit cards and magazine covers. The other team, at New York University's Media Research Lab, is working on a less expensive version called 3-D autostereo display, which could become a commercial product within the next few years. The NYU effort is being led by Ken Perlin, a multimedia legend who won a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences in 1996 for his development of a procedural-texture technique that is widely used in films today.
The two media labs lead the quest, but they are not alone in their pursuit. In December 2000 Ford Motor and London-based QinetiQ launched Holographic Imaging, an R&D company in Royal Oak, MI, to create interactive imaging workstations for car designers. And several Japanese groups also have entered the fray, including teams at Sony, NHK Laboratories, and Nihon University. “Twelve years ago everyone thought this was completely impossible,” says Benton. “Now there’s real competition.”
The first systems produced by these efforts will likely be specialized applications in fields such as surgical planning and automobile design. But versions cheap enough to serve as home entertainment applications should quickly follow-after all, millions of video game players would give their left control-pad thumbs to step into a fully 3-D version of Mario’s world-perhaps forever rendering obsolete the two-dimensional views to which most screens have been limited. In short, sums up NYU’s Ken Perlin, “All the reasons for putting up with the artifice of things being flat will go away.”
Crystal Clear Holographic Video
Many research teams are working on holographic video, but Benton's Spatial Imaging Group at MIT has long been at the field's forefront. Here, students and staff have been looking at the problem from every angle, so to speak, for 13 years. In recent years the main sponsors of the research have been the U.S. Navy, which believes its wartime decision-makers would benefit from looking at a 3-D representation of a battle landscape, and Honda, which hopes its car designers will be able to produce 3-D images of proposed new models rapidly. "When we first approached Honda, we were amazed to find out they had already been thinking of holography," says Benton.
The MIT effort has from the beginning focused on true holographic video, which not only holds out the promise of the highest-quality 3-D video images, but also presents the most daunting technical challenges. At its core are the basic steps of creating a standard hologram: A laser beam is split in two. One half is directed at an object (let's say, an apple). The presence of the apple distorts the pattern of light waves in the beam, modulating it. That beam is then made to intersect with its other half inside a light-sensitive material. When the two beams overlap, their differing patterns of light waves interfere with each other, etching a diffraction pattern of microscopic lines onto the material. The diffraction pattern works like a complicated lens: when a laser beam illuminates it, the microscopic lines reflect the light in a way that produces a 3-D image of the apple.
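The recording step described above can be sketched numerically. The minimal one-dimensional model below (the wavelength and beam angle are illustrative choices, not figures from the article) adds a plane reference wave to an object wave arriving at an angle and records the resulting intensity, whose alternating bright and dark bands are the microscopic fringe lines of the diffraction pattern:

```python
import cmath
import math

WAVELENGTH = 633e-9  # meters; a helium-neon laser line common in holography
K = 2 * math.pi / WAVELENGTH  # wavenumber

def fringe_intensity(x, object_angle_deg=5.0):
    """Intensity where a tilted object wave meets an on-axis reference wave."""
    theta = math.radians(object_angle_deg)
    ref = cmath.exp(1j * 0)                        # reference beam, normal incidence
    obj = cmath.exp(1j * K * math.sin(theta) * x)  # object beam arriving at an angle
    return abs(ref + obj) ** 2                     # what the recording medium sees

# Sample 10 micrometers of the recording plane in 10-nanometer steps:
# the intensity swings between roughly 0 and 4, tracing the microscopic
# fringe lines that later act as a diffraction grating.
samples = [fringe_intensity(i * 1e-8) for i in range(1000)]
print(min(samples), max(samples))
```

The fringe spacing here works out to wavelength divided by the sine of the beam angle, a few micrometers, which is why the lines are far too fine to see and so expensive to compute.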
Instead of light and mirrors, Benton and his team use specially developed computer algorithms. The algorithms calculate the kinds of microscopic lines necessary for a certain hologram, convert them into sound waves, and then send the waves into a stack of tellurium-oxide crystals that have the unique property of distorting temporarily when sound waves pass through them. That distortion forms the microscopic lines of the diffraction pattern that make up a hologram. A laser beam passing through that pattern conveys the image from the crystals to a view screen (see “MIT’s Mark II Holographic Video,” below).
MIT’s Mark II Holographic Video Display produces surprisingly pleasing and lifelike 3-D images. In one demo, a red prototype sports car designed by Honda instantly appears to hover brightly in miniature a half-meter or so in front of the observer, all of the car’s graceful lines perfectly discernible from different angles. Perhaps it’s partly because of the novelty of the experience, but the mild flicker and shimmering image bars hardly distract attention from the intense realism of the effect.
Benton’s group is continually making refinements in three core areas: hardware and software for the display, realism and image quality, and interactivity. Wendy Plesniak, a Media Lab researcher and consultant who as a student helped develop computing algorithms for the holographic video device, added a feature that could ultimately lead to an industrial designer’s dream machine: a haptic, or force feedback, interface that makes it possible to “sculpt” the projected image with a real-life, handheld tool. As the user pokes, prods, and carves with a stylus, the holographic image changes as if it were clay on a potter’s wheel, and the user senses resistance as if she were really working the clay.
Plesniak says the degree of sensation and control afforded by combining a haptic interface with holography “would provide a complete path in digital prototyping.” In one demonstration, she uses the stylus to carve a red drum-shaped object as if it were rotating on a lathe; in another, a sheetlike image becomes dimpled when prodded. In general, the image produced by the system is brilliant, seems lifelike, and looks for all the world as if it is floating in space right in front of the user. “With most 3-D systems it takes a while for the 3-D effect to come in, and you never get as much depth as the math says you should,” says Benton. “But you don’t have those problems with holograms.”
The system has some way to go, though, before it's likely to be commercialized. The biggest problem is that making a video hologram requires crunching enormous amounts of data. That may not be surprising, given that a hologram provides not just a single view of an image but all views from any number of angles. Still, the diffraction pattern for just one high-resolution hologram can easily consume more than a terabyte of data, enough to fill 1,600 compact discs. A moderately flicker-free holographic video would require at least 20 such holograms per second. Clearly, churning through 20 terabytes' worth of information every second would require extraterrestrial technology: today's fastest PCs operate at one-hundred-thousandth that rate. As a result, the Mark II accepts a number of compromises in image quality in order to bring the computing requirements down to a manageable 16 megabytes per second. The system uses a single color, makes only 10.16-by-12.7-centimeter images, and generates a flickering frame-update rate of about seven images per second. In addition, because the image is stripped of the information needed to accommodate an observer's view of the top or bottom, the image changes only as the observer moves from side to side. "It's amazing how few people notice that nothing changes when you look over or under it," says Benton.
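The arithmetic behind these figures is easy to check. A quick sketch, using the article's rounded numbers, decimal units, and a 650-megabyte compact disc (the disc capacity is an assumption; the article gives only the rounded CD count):

```python
# Back-of-the-envelope data rates for holographic video.
TB = 10 ** 12
MB = 10 ** 6
CD_CAPACITY = 650 * MB        # assumed capacity of a standard compact disc

frame_bytes = 1 * TB          # one high-resolution diffraction pattern
cds_per_frame = frame_bytes / CD_CAPACITY
full_rate = 20 * frame_bytes  # ~20 frames/s for flicker-free video

mark_ii_rate = 16 * MB        # the Mark II's compromise throughput
reduction = full_rate / mark_ii_rate

print(f"{cds_per_frame:.0f} CDs per frame")   # about 1,538; rounded up to 1,600
print(f"{reduction:,.0f}x less data than full holographic video")
```

The gap is stark: the Mark II's compromises cut the data stream by more than a factor of a million relative to uncompromised 20-frame-per-second holographic video.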
A hardware remake that is in the works should bring the system much closer to commercialization. The goals for the overhaul include switching to a parallel-microprocessor arrangement capable of churning out the high processing speeds needed to achieve larger image size, greater resolution, and a faster frame rate.
In addition, the group hopes to make the jump to an ultrahigh-resolution display screen based on microelectromechanical systems. That technology would employ thousands of tiny mirrors and laser beams-each one creating one pixel of a whole diffraction pattern. Such displays aren’t expected to exist for at least a few years, but Benton notes that his group doesn’t plan on seeing its work bear commercial fruit for at least another four years anyway. “Holography is hard,” he says with a sigh. “That’s why it’s one of the longest-range projects at the Media Lab.”
Meanwhile, at NYU's Center for Advanced Technology, the other early leader in the race to produce this new wave of 3-D, Perlin's group is enlisting a nonholographic technique capable of providing dynamic, angle-adjusted images that look like those produced by holographic systems. Furthermore, the images are not conjured up with complexly modified laser light. Instead they are displayed on a relatively ordinary monitor in an approach Perlin calls "a holographic interface." The group pulls this off by taking advantage of the fact that most of the vast and costly processing and display horsepower needed to produce holographic video ultimately goes to waste: a hologram provides more images than those that meet the viewers' eyes; it also provides dazzling, angle-adjusted images to the many thousands of locations at which there are no eyeballs to appreciate them. Each of these distinct, unperceived images has to be computed, transmitted, and displayed, because there is no practical way to limit holographic coverage to an observer's specific viewing angles. "It's like wielding an elephant gun to shoot a fly," says Perlin. His system, therefore, displays only the images tailored to an observer's precise position.
Though NYU’s NY3D technology doesn’t enlist holography, it provides an observer with much the same viewing experience as a holographic system: The mechanism is stereoscopic, providing the left and right eyes with different images, and the images change with viewing angle. And of course, no eyewear is needed.
Coaxing hologram-like images from a plain screen requires two tricks. The first comes in the form of a transparent liquid-crystal display (LCD) that alters the view of the image being shown on a monitor. The display sits a half-meter in front of the monitor. On it, black stripes about three centimeters wide flash on and off, blocking vertical swaths of the image (let's say, a ball) on the monitor behind it. The effect is not obvious to the viewer, because the stripes shift 180 times per second; that is too fast for the viewer's brain to register the location of each stripe and, at the same time, gives the monitor a chance to fill in the missing swaths for each eye. The result is that each eye sees a slightly different image through the gaps in the shutter stripes, which produces a stereoscopic sensation of depth ("NYU's NY3D System," this page). All this works fine as long as the viewer's eyeballs are located exactly where the system expects them to be, each eye lining up with the appropriate image swaths on the monitor. To ensure that this is the case, Perlin's system employs a second trick, actively tracking the observer's eyes with two small cameras mounted above the monitor. A set of infrared light-emitting diodes (LEDs) next to the cameras gives the viewer an unobtrusive case of red-eye, the back-of-the-eye glow that has long been the bane of amateur photographers. The cameras can easily isolate the viewer's bright pupils, enabling them to track the eyes and adjust the location of the shifting stripes so that they always block the image in a way that sustains the stereoscopic effect.
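The shutter geometry comes down to similar triangles. In the toy sketch below, the half-meter shutter-to-monitor distance comes from the article, while the eye positions and viewing distance are illustrative assumptions; it projects a sightline from each tracked eye through an open gap onto the monitor to show that the two eyes necessarily land on different columns:

```python
# Toy model of the shutter-stripe idea: project a sightline from a tracked
# eye through an open gap in the LCD shutter onto the monitor behind it.
BARRIER_GAP = 0.5  # meters between the LCD shutter and the monitor (per the article)

def visible_column(eye_x, eye_distance, gap_center_x):
    """Return the x position (meters) on the monitor seen through a gap.

    By similar triangles, the ray from the eye through the gap travels an
    extra BARRIER_GAP beyond the shutter before striking the monitor.
    """
    t = (eye_distance + BARRIER_GAP) / eye_distance
    return eye_x + (gap_center_x - eye_x) * t

# Assumed viewing setup: eyes ~6.3 cm apart, one meter from the shutter,
# both looking through a gap centered at x = 0.
left = visible_column(-0.0315, 1.0, 0.0)
right = visible_column(+0.0315, 1.0, 0.0)
# The two sightlines hit different monitor columns, so the monitor can paint
# a different view at each: that is the stereoscopic trick.
print(left, right)
```

This also shows why eye tracking is essential: the monitor columns that each eye sees depend directly on where the eyes are, so if the viewer moves and the stripes don't, the wrong eye sees the wrong view.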
Of course, a hologram’s realism doesn’t come merely from its stereoscopic properties; holographic images can be inspected from all angles as the viewer’s head moves around them. By virtue of its eye-locating capabilities, the NYU system can readily track head motion and almost immediately alter the images on the monitor as needed. And indeed, a system demo that displays a rotating skeletal foot confirms not only that it provides a clear, fully 3-D image, but also that it allows one person to appraise the image from different angles-including from above or below. (The group is also working on a system that would simultaneously provide 3-D views to multiple observers, such as a team of surgeons debating the best approach to a difficult procedure or a group of video game players competing on a shared monitor.) The result is so realistic, says Joel Kollin, a researcher at the Center for Advanced Technology, that eventual purchasers of the display may want simply to hang it on the wall, where it would present images-say, a Fiji beach or a Paris boulevard-that actually change with respect to the viewer’s angle. “It would be just like looking out a window,” he says. As an MIT Media Lab student in the late 1980s, Kollin was largely responsible for building that group’s first holographic video system.
With the recent rise of competition from groups at Sony, Ford, and other companies, such a system may well be affordable enough to allow for some elementary applications within the next few years (see “Companies Working in Three Dimensions,” below). Because that system needs to calculate and display only the views signaled by the viewer’s position at any given moment, it requires only the crunch power of an ordinary PC. The LCD screen, the eye-tracking LEDs, a high-quality monitor, and the software shouldn’t add much to the total price. Perlin predicts that early-production versions aimed at specialized markets such as surgical planning will be out within three years and will be priced in the vicinity of $5,000, while the first fully holographic systems are likely to command tens of thousands of dollars. Even better, says Perlin, a few years after the first systems appear, mass-market versions of the window display will probably sell for only a few hundred dollars more than an ordinary monitor, making it a reality for the average household. Perlin, who has spun off a company to commercialize the technology, says that the venture, NY3D, already is in discussions with several large companies, including Philips and IBM, that are interested in acquiring rights to produce the display.
But while Perlin’s pseudoholographic approach has a terrific cost edge and, at least for now, certain performance advantages over true holographic systems, it also has a few drawbacks. The system occasionally has trouble locking onto the viewer’s glowing eyes, and rapid head movements can confuse it, causing the user to experience a temporary loss of the 3-D effect. On top of that, its image, which is subject to a number of mildly distracting artifacts, including vertical bars, wavering, and ghosting, falls a bit short of the crisp realism of a real holographic image. Much of that gap will be narrowed as the system moves from raw prototype to a commercial version, but even Perlin admits that a true holographic system would be challenging to match for image quality. “We’ll certainly have commercial holographic displays, but it could take 20 or 30 years,” he says.
Fear that the holographic route could take a decade or more to reach perfection explains why even the MIT Media Lab is covering its bases: it is developing a nonholographic system that works much like the one at NYU. For his part, Benton concedes it’s possible that the real value of true holographic video, at least in the near future, may be in setting a “standard of realism” for pseudoholographic systems.
Until that standard is set, both teams will continue moving forward. For his part, Perlin has started researching what would widely be considered the ultimate in full-motion 3-D: a system that projects holograms into thin air-along the lines of R2-D2’s projection of Princess Leia in the opening minutes of the original Star Wars film. Perlin believes that ultrahigh-frequency sound waves could be employed to cause air to bend light enough to form such holograms. His students have already begun proof-of-concept experiments, but he acknowledges that a working system is likely decades away and could be “ridiculously expensive.”
In the meantime, there is reason to hope that pseudoholographic 3-D systems will become so cheap and effective that they could end up in many homes before the end of the decade. Then we’ll all have the luxury of fretting about whether there is anything worth watching on them. “The big problem with television isn’t that it’s flat,” Benton says. “It’s that they canceled Twin Peaks after two seasons.”