
Practice Makes Perfect

March 1, 1998

The surgeon studies the face of a teenage boy whose upper jaw and cheek were destroyed by cancer years ago. Lifting his gloved right hand, he points to an area just below one of the patient’s eyes. As if by magic, an incision appears in the boy’s cheek, revealing the area of tissue and bone to be rebuilt. Pointing again, the surgeon begins a complicated procedure for transplanting bone and tissue from the boy’s hip to his face.

In the past, plastic surgeons had to be in the operating room to try procedures like these. Now some are using an experimental computer visualization tool called the Immersive Workbench, developed by researchers from Stanford University and NASA Ames Research Center, to plan and practice difficult operations. The software program combines data from CT scans, magnetic resonance images, and ultrasound to create high-resolution pictures of individual patients and display them in a virtual environment.

Unlike other software tools developed to visualize the results of plastic surgery, which rely on standard physical models of men and women, the Immersive Workbench generates images that depict the specific deformities or injuries of particular patients. The latest prototype of the software goes further, letting doctors wearing tracked-shutter glasses and special gloves test specific surgical approaches in rapid succession to see which produces the best results.

“The whole idea is to be able to interact with the virtual environment in the same way as you interact with a patient in real life, in a way that requires almost no training for the user,” says project director Dr. Michael Stephanides of Stanford University’s Division of Plastic Surgery.

The project started in 1991, when Stanford researchers began developing two-dimensional graphic renderings of patients from imaging data. Three years ago, Stephanides asked NASA Ames to create sophisticated software for building three-dimensional patient portraits from data collected in CT scans. At that time, NASA Ames engineers were spending most of their time creating visualizations of biological systems for space-related applications, but the lab’s collaboration with Stanford has led to the creation of NASA Ames’s Biocomputation Center, a new national center for research in virtual environments for surgical planning.

Plastic surgery offers a particularly rigorous challenge for software engineers and medical researchers developing virtual reality (VR) tools, since computerized renderings of patients must look almost exactly as they do in the real world. It is no small task to display human body parts at the necessary high resolution, says Kevin Montgomery, the leader of the NASA Ames group participating in this project. According to Montgomery, a 3D rendering of a human face and head contains 8 million tiny image slices that must be updated at a speed of 10 frames per second, processing demands that approach the theoretical limit of current computers. As a result, the NASA Ames researchers had to find ingenious ways to discard much of the raw data from patient images. Nonetheless, Montgomery’s group has been able to generate highly resolved images detailing such subtle features as small ridges of tissue, the impression of a vein beneath the skin on a human scalp, and the fine detail of a patient’s inner ear.
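To give a sense of the scale involved, here is a minimal sketch of the arithmetic and of one crude form of data reduction. The 8 million elements and 10 frames per second are the figures cited above; the uniform subsampling shown is a hypothetical stand-in, not the techniques the NASA Ames group actually used, which the article does not describe.

```python
import numpy as np

# Figures cited in the article: ~8 million surface elements, 10 frames per second.
ELEMENTS = 8_000_000
FPS = 10
updates_per_second = ELEMENTS * FPS  # roughly 80 million element updates each second

def decimate(points: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Return a uniformly subsampled copy of an (N, 3) array of vertices.

    Illustrative only: keeping every k-th vertex is the simplest possible way to
    discard raw data, far cruder than what a real surgical-planning renderer would do.
    """
    step = max(1, int(round(1.0 / keep_ratio)))
    return points[::step]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.random((ELEMENTS, 3), dtype=np.float32)  # stand-in for patient scan data
    reduced = decimate(cloud, keep_ratio=0.1)             # discard roughly 90% of points
    print(f"{updates_per_second:,} element updates per second at full resolution")
    print(f"{len(reduced):,} vertices kept after decimation")
```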

Doctors have already used the Immersive Workbench to plan some 15 surgeries involving reconstruction of bony defects in the skeleton of the face and skull. But Montgomery and Stephanides caution that the tool is still in the experimental stage. They expect clinical deployment in three to five years, when the next generation of processors and graphics cards makes $10,000 desktop computers as fast and powerful as the $100,000 graphical workstations now needed to run the software. Between now and then, the researchers hope to improve the program by creating a more intuitive graphical user interface, depicting virtual surgical instruments more accurately, and developing the capacity to update patient images in near-real time as doctors practice their procedures.

When hardware costs are no longer a limiting factor, Stephanides believes VR technology will replace current surgical planning methods and become an important tool for educating doctors in medical schools.
