In a dark room down the hall from Michael Bove’s office at MIT’s Media Lab is an apparatus with a white screen the size of a CD jewel case. When Bove sits in a chair opposite the machine and flips a switch, an image of a human rib cage seems to leap out a few inches beyond the screen. The image is produced by the Mark II, a 14-year-old holographic-video system that takes up most of the room. But its vividness is one of the inspirations for Bove’s own project: to bring 3-D video displays to consumer and medical markets.
Bove’s new system, called Mark III, is scheduled to be completed by the end of the summer. It can run on a standard PC with a graphics card and will be small enough to fit on top of a desk. (In contrast, an earlier version of Mark II required whole racks of computers.) Although Bove doesn’t yet have any manufacturing partners, he predicts that a product based on Mark III’s design would cost just a couple of hundred dollars to manufacture and could become standard in doctors’ offices as a way to view magnetic resonance images and computed tomography scans in 3-D detail. It would also be within the price range of gamers and technology enthusiasts.
The development of holographic video at MIT dates back to the late 1980s, when researchers put together Mark I, a proof-of-concept system with a low-resolution display. But Mark I and Mark II were destined never to leave the lab. They were, Bove says, “loud, finicky, and a general pain in the neck to work with.” And while numerous researchers in the United States, Japan, Korea, and the United Kingdom have invested time and money in holographic video, no one has yet found a way to build a system that is compact, inexpensive, and easy to use.
In 2004, Bove, who is the head of the Consumer Electronics Lab at MIT, started exploring the possibility of making holographic video practical for consumers. Thanks to ever-more-powerful PCs, small, ultrabright lasers, and other compact optoelectronic devices, he says, a consumer-friendly system is now within reach. And, he says, “there’s more and more 3-D information that’s kicking around” and could easily be projected holographically. Many video games, for example, are now based on sophisticated 3-D models of the virtual world, models that have to be flattened for the 2-D screens of PCs or game consoles. Similarly, the 3-D data in hospitals’ large stores of magnetic resonance images and computed tomography scans has to be rendered as 2-D cross sections before doctors and patients can interpret it.
The Media Lab’s video holograms appear to float above a piece of frosted glass. An electronic device behind the glass, called a light modulator, reproduces interference patterns that encode information about the pictured object. Laser light striking the modulator scatters just as it would if it were reflecting off the object at different angles.
A holographic video begins with a computed 3-D model of some moving object or scene. This model “can be thought of as having a whole lot of points on its surface at different depths that change over time,” Bove says. To make that model holographic, a computer needs to figure out the intensity of the light that would be reflected from each point on the object to the point where the viewer’s eyes will be. “You need to create a diffraction pattern that reconstructs all the different intensities for all the different angles,” Bove says. He found that the graphics chips in today’s PCs are adept at this sort of work: rendering the 3-D model, computing the diffraction patterns, and combining them into a single video output.
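The idea of turning a cloud of surface points into a diffraction pattern can be sketched in a few lines of code. This is a minimal, illustrative one-dimensional version, not the Mark III’s actual algorithm: it superposes spherical wavelets from a handful of scene points onto a row of modulator pixels, interferes them with a tilted plane reference beam, and records the resulting intensity fringes. All of the parameters (wavelength, pixel count, reference angle) and the function name are assumptions chosen for the example.

```python
import numpy as np

# Illustrative 1-D hologram computation (not the Mark III's real pipeline):
# each scene point emits a spherical wavelet; the wavelets interfere with a
# plane reference wave, and the intensity of the sum is the fringe pattern
# that would be written onto the light modulator.

WAVELENGTH = 633e-9             # red laser light, in meters (assumed value)
K = 2 * np.pi / WAVELENGTH      # wavenumber

def diffraction_pattern(points, n_pixels=1024, width=0.05):
    """points: iterable of (x, z, amplitude) scene points, z = depth in meters.
    Returns the real-valued intensity pattern across the modulator line."""
    xs = np.linspace(-width / 2, width / 2, n_pixels)   # pixel positions
    field = np.zeros(n_pixels, dtype=complex)
    for px, pz, amp in points:
        r = np.sqrt((xs - px) ** 2 + pz ** 2)           # pixel-to-point distance
        field += amp * np.exp(1j * K * r) / r           # spherical wavelet
    # Tilted plane reference beam (5 degrees off-axis, an arbitrary choice)
    reference = np.exp(1j * K * xs * np.sin(np.deg2rad(5)))
    return np.abs(field + reference) ** 2               # recorded fringes

# Two points at different depths, as in Bove's description of the model
pattern = diffraction_pattern([(0.0, 0.3, 1.0), (0.01, 0.4, 0.5)])
```

The inner loop is embarrassingly parallel across pixels and points, which hints at why graphics chips handle this kind of computation well.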