Practice Makes Perfect

March 1, 1998

The surgeon studies the face of a teenage boy whose upper jaw and cheek were destroyed by cancer years ago. Lifting his gloved right hand, he points to an area just below one of the patient’s eyes. As if by magic, an incision appears in the boy’s cheek, revealing the area of tissue and bone to be rebuilt. Pointing again, the surgeon begins a complicated procedure for transplanting bone and tissue from the boy’s hip to his face.

In the past, plastic surgeons could attempt procedures like these only in the operating room. Now some are using an experimental computer visualization tool called the Immersive Workbench, developed by researchers from Stanford University and NASA Ames Research Center, to plan and practice difficult operations. The software program combines data from CT scans, magnetic resonance images, and ultrasound to create high-resolution pictures of individual patients and display them in a virtual environment.

Unlike other software tools developed to visualize the results of plastic surgery, which rely on standard physical models of men and women, the Immersive Workbench generates images that depict the specific deformities or injuries of particular patients. The latest prototype of the software goes further, letting doctors wearing tracked-shutter glasses and special gloves test specific surgical approaches in rapid succession to see which produces the best results.

“The whole idea is to be able to interact with the virtual environment in the same way as you interact with a patient in real life, in a way that requires almost no training for the user,” says project director Dr. Michael Stephanides of Stanford University’s Division of Plastic Surgery.

The project started in 1991, when Stanford researchers began developing two-dimensional graphic renderings of patients from imaging data. Three years ago, Stephanides asked NASA Ames to create sophisticated software for building three-dimensional patient portraits from data collected in CT scans. At that time, NASA Ames engineers were spending most of their time creating visualizations of biological systems for space-related applications, but the lab’s collaboration with Stanford has led to the creation of NASA Ames’s Biocomputation Center, a new national center for research in virtual environments for surgical planning.

Plastic surgery offers a particularly rigorous challenge for software engineers and medical researchers developing virtual reality (VR) tools, since computerized renderings of patients must look almost exactly as they do in the real world. It is no small task to display human body parts at the necessary high resolution, says Kevin Montgomery, the leader of the NASA Ames group participating in this project. According to Montgomery, a 3D rendering of a human face and head contains 8 million tiny image slices that must be updated at a speed of 10 frames per second, processing demands that approach the theoretical limit of current computers. As a result, the NASA Ames researchers had to find ingenious ways to discard much of the raw data from patient images. Nonetheless, Montgomery’s group has been able to generate highly resolved images detailing such subtle features as small ridges of tissue, the impression of a vein beneath the skin on a human scalp, and the fine detail of a patient’s inner ear.
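The article does not say how Montgomery’s group pared down the data, but a minimal sketch of one generic approach, uniform decimation of a CT volume to fit a per-frame budget, shows the kind of trade-off the numbers above imply. Only the 8-million-element and 10-frames-per-second figures come from the article; the volume dimensions, the downsample_volume helper, and the stride are hypothetical illustrations, not the team’s actual method.

```python
import numpy as np

# Figures quoted in the article: a full head model of roughly
# 8 million elements that must be redrawn 10 times per second.
ELEMENTS_IN_FULL_MODEL = 8_000_000
TARGET_FRAME_RATE = 10  # frames per second

# Raw throughput the renderer would need at full resolution.
updates_per_second = ELEMENTS_IN_FULL_MODEL * TARGET_FRAME_RATE  # 80 million/sec


def downsample_volume(volume: np.ndarray, stride: int) -> np.ndarray:
    """Keep every `stride`-th voxel along each axis.

    Uniform decimation is only a stand-in for whatever data-reduction
    scheme the NASA Ames group actually used; the article does not say.
    """
    return volume[::stride, ::stride, ::stride]


if __name__ == "__main__":
    # A made-up CT volume of about 8 million voxels (128 slices of 256 x 256).
    ct = np.random.rand(128, 256, 256).astype(np.float32)
    reduced = downsample_volume(ct, stride=2)  # discards roughly 7/8 of the voxels

    print(f"Updates needed per second at full resolution: {updates_per_second:,}")
    print(f"Voxels before reduction: {ct.size:,}")
    print(f"Voxels after 2x decimation: {reduced.size:,}")
```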

Doctors have already used the Immersive Workbench to plan some 15 surgeries involving reconstruction of bony defects in the skeleton of the face and skull. But Montgomery and Stephanides caution that the tool is still in the experimental stage. They expect clinical deployment in three to five years, when the next generation of processors and graphics cards makes $10,000 desktop computers as fast and powerful as the $100,000 graphical workstations now needed to run the software. Between now and then, the researchers hope to improve the program by creating a more intuitive graphical user interface, depicting virtual surgical instruments more accurately, and developing the capacity to update patient images in near-real time as doctors practice their procedures.

When hardware costs are no longer a limiting factor, Stephanides believes VR technology will replace current surgical planning methods and become an important tool for educating doctors in medical schools.
