MIT Technology Review

The surgeon studies the face of a teenage boy whose upper jaw and cheek were destroyed by cancer years ago. Lifting his gloved right hand, he points to an area just below one of the patient’s eyes. As if by magic, an incision appears in the boy’s cheek, revealing the area of tissue and bone to be rebuilt. Pointing again, the surgeon begins a complicated procedure for transplanting bone and tissue from the boy’s hip to his face.

In the past, plastic surgeons had to be in the operating room to try procedures like these. Now some are using an experimental computer visualization tool called the Immersive Workbench, developed by researchers from Stanford University and NASA Ames Research Center, to plan and practice difficult operations. The software program combines data from CT scans, magnetic resonance images, and ultrasound to create high-resolution pictures of individual patients and display them in a virtual environment.

Unlike other software tools developed to visualize the results of plastic surgery, which rely on standard physical models of men and women, the Immersive Workbench generates images that depict the specific deformities or injuries of particular patients. The latest prototype of the software goes further, letting doctors wearing tracked-shutter glasses and special gloves test specific surgical approaches in rapid succession to see which produces the best results.

“The whole idea is to be able to interact with the virtual environment in the same way as you interact with a patient in real life, in a way that requires almost no training for the user,” says project director Dr. Michael Stephanides of Stanford University’s Division of Plastic Surgery.

The project started in 1991, when Stanford researchers began developing two-dimensional graphic renderings of patients from imaging data. Three years ago, Stephanides asked NASA Ames to create sophisticated software for building three-dimensional patient portraits from data collected in CT scans. At that time, NASA Ames engineers were spending most of their time creating visualizations of biological systems for space-related applications, but the lab’s collaboration with Stanford has led to the creation of NASA Ames’s Biocomputation Center, a new national center for research in virtual environments for surgical planning.

Plastic surgery offers a particularly rigorous challenge for software engineers and medical researchers developing virtual reality (VR) tools, since computerized renderings of patients must look almost exactly as they do in the real world. It is no small task to display human body parts at the necessary high resolution, says Kevin Montgomery, the leader of the NASA Ames group participating in the project. According to Montgomery, a 3D rendering of a human face and head contains 8 million tiny image slices that must be updated at a speed of 10 frames per second, processing demands that approach the theoretical limit of current computers. As a result, the NASA Ames researchers had to find ingenious ways to discard much of the raw data from patient images. Nonetheless, Montgomery’s group has been able to generate highly resolved images detailing such subtle features as small ridges of tissue, the impression of a vein beneath the skin on a human scalp, and the fine detail of a patient’s inner ear.
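The scale of the rendering load Montgomery describes can be checked with quick back-of-the-envelope arithmetic. The sketch below simply multiplies the two figures quoted in the article; it makes no assumptions about how the Immersive Workbench itself is implemented.

```python
# Rendering throughput implied by the figures quoted above:
# 8 million image elements, each redrawn 10 times per second.
elements_per_frame = 8_000_000
frames_per_second = 10

elements_per_second = elements_per_frame * frames_per_second
print(f"{elements_per_second:,} elements updated per second")
# → 80,000,000 elements updated per second
```

Sustaining tens of millions of element updates per second on late-1990s hardware is what pushed the researchers toward aggressively discarding raw image data, as the article notes.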

Doctors have already used the Immersive Workbench to plan some 15 surgeries involving reconstruction of bony defects in the skeleton of the face and skull. But Montgomery and Stephanides caution that the tool is still in the experimental stage. They expect clinical deployment in three to five years, when the next generation of processors and graphics cards makes $10,000 desktop computers as fast and powerful as the $100,000 graphical workstations now needed to run the software. Between now and then, the researchers hope to improve the program by creating a more intuitive graphical user interface, depicting virtual surgical instruments more accurately, and developing the capacity to update patient images in near-real time as doctors practice their procedures.

When hardware costs are no longer a limiting factor, Stephanides believes VR technology will replace current surgical planning methods and become an important tool for educating doctors in medical schools.


Tagged: Communications
