
New Tools for Minimally Invasive Surgery

Combining imaging technologies with tracking systems for surgical instruments will allow clinicians to navigate the body.

Minimally invasive surgery can seem like a dream – or a nightmare. On one hand, such procedures require only a small incision, reducing trauma to the body, shortening recovery time, and costing hospitals less. On the other hand, without a direct view of the target inside the body, a clinician is only as good as the imaging technologies used to guide the procedure.

A workstation for minimally invasive surgeries brings together images from CT and ultrasound to give a better view of a patient’s anatomy, while tracking the position of surgical instruments in the body. (Photo courtesy of Philips Research)

Now Philips Research is developing an image-guidance workstation that would generate more information than current systems and help surgeons navigate better during minimally invasive procedures. The technology brings together images from computed tomography (CT) scans and ultrasound, and uses an electromagnetic tracking system to pinpoint the position of surgical instruments within the body.

“When we look at how [minimally invasive surgical] procedures are done today, we see somewhat of a primitive landscape,” says Guy Shechter, senior research scientist at Philips Research. He says most clinicians use only one type of imaging technology, even though each type has its flaws.

Shechter’s team has been working for two years to develop an image-guidance workstation, and partnered with Brad Wood, an interventional radiologist at the National Institutes of Health, to test the technology in patients.

They have focused on a minimally invasive procedure called radio-frequency ablation (RFA) as a treatment for tumors. During this procedure, cancerous tissue is heated with electricity until it dies. The procedure has become a popular alternative for treating certain cancers without full-blown surgery. But delivering the current to the right location is crucial.

A patient receiving radio-frequency ablation typically requires several CT scans: one to locate the tumor, then several more to confirm that the ablation probe has been inserted in the right place. CT scans provide the clearest picture of the body’s anatomy, but because of the radiation dose, they cannot be performed while the clinician is in the room.

The Philips workstation eliminates the need for multiple scans by bringing together CT and ultrasound imaging, along with tools that track the position of the surgical instruments.

The patient is first scanned using CT to create a three-dimensional image. Then an electromagnetic tracking system locates the position of the ablation needles in the body (much like a GPS navigation system locating an object in space). That positional information is then registered to the CT data. “You can now see where your needle is relative to your CT image,” Shechter says.
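In software terms, that step amounts to a coordinate-frame registration: the tracker reports the needle tip in its own frame, and a previously estimated rigid transform maps the point into the CT volume’s coordinates. The short sketch below illustrates the idea; the transform values, spacings, and function names are assumptions for illustration, not Philips’ implementation.

import numpy as np

# Minimal sketch of the registration step described above: the electromagnetic
# tracker reports the needle tip in its own coordinate frame, and a previously
# estimated rigid transform maps that point into the coordinates of the
# pre-acquired CT volume. Values and names are illustrative assumptions only.

def to_homogeneous(point_xyz):
    # Append a 1 so a 3-D point can be multiplied by a 4x4 transform.
    return np.append(np.asarray(point_xyz, dtype=float), 1.0)

def tracker_to_ct(point_tracker_mm, T_tracker_to_ct):
    # Map a tracked point (mm, tracker frame) into CT patient coordinates (mm).
    return (T_tracker_to_ct @ to_homogeneous(point_tracker_mm))[:3]

def ct_mm_to_voxel(point_ct_mm, ct_origin_mm, ct_spacing_mm):
    # Convert CT patient coordinates (mm) into voxel indices of the CT volume.
    return np.round((point_ct_mm - ct_origin_mm) / ct_spacing_mm).astype(int)

# Assumed example values: a rigid transform obtained by registering the tracker
# to the CT scan (for instance, from fiducial markers visible to both systems).
T_tracker_to_ct = np.array([
    [0.0, -1.0, 0.0,  120.0],
    [1.0,  0.0, 0.0,   80.0],
    [0.0,  0.0, 1.0, -300.0],
    [0.0,  0.0, 0.0,    1.0],
])

needle_tip_tracker = [45.2, -12.7, 230.5]   # mm, as reported by the tracker
needle_tip_ct = tracker_to_ct(needle_tip_tracker, T_tracker_to_ct)
needle_tip_voxel = ct_mm_to_voxel(
    needle_tip_ct,
    ct_origin_mm=np.array([-250.0, -250.0, -400.0]),
    ct_spacing_mm=np.array([0.98, 0.98, 1.25]),
)

print("Needle tip in CT coordinates (mm):", needle_tip_ct)
print("Needle tip voxel indices:", needle_tip_voxel)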

During the procedure, the patient’s progress is monitored in real time with ultrasound. The position of the ultrasound probe is also tracked electromagnetically and matched to the relevant slice of the pre-acquired CT image. Both images are brought together on one monitor and can be viewed side by side or overlaid. According to Ramin Shahidi, head of the Image Guidance Laboratories at the Stanford University School of Medicine, joining the two imaging technologies helps to overcome a major problem in most minimally invasive procedures: disruptive tissue movement.
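The slice matching works on the same principle. Given the tracked probe’s pose and the tracker-to-CT registration (combined below into a single assumed transform), each ultrasound pixel can be mapped into the CT volume and the CT resampled along the ultrasound plane, so the two views line up for side-by-side or overlay display. Again, this is only an illustrative sketch under assumed conventions, not the workstation’s actual code.

import numpy as np
from scipy.ndimage import map_coordinates

# Illustrative sketch only: resample the pre-acquired CT volume along the plane
# of the current (tracked) ultrasound image. T_us_image_to_ct is assumed to be
# the product of the probe calibration, the tracked probe pose, and the
# tracker-to-CT registration; it maps ultrasound image coordinates (x, y, 0, 1)
# in millimetres into CT patient coordinates in millimetres.

def resample_ct_along_us_plane(ct_volume, ct_origin_mm, ct_spacing_mm,
                               T_us_image_to_ct, us_shape_px, us_spacing_mm):
    # ct_volume is indexed [z, y, x]; ct_origin_mm and ct_spacing_mm are
    # numpy arrays in (x, y, z) order, in millimetres.
    rows, cols = us_shape_px
    c, r = np.meshgrid(np.arange(cols), np.arange(rows))
    pts_mm = np.stack([c * us_spacing_mm[0],            # x within the image plane
                       r * us_spacing_mm[1],            # y within the image plane
                       np.zeros_like(c, dtype=float),   # the image plane is z = 0
                       np.ones_like(c, dtype=float)],   # homogeneous coordinate
                      axis=0).reshape(4, -1)
    pts_ct_mm = (T_us_image_to_ct @ pts_mm)[:3]
    pts_vox = (pts_ct_mm - ct_origin_mm[:, None]) / ct_spacing_mm[:, None]
    # map_coordinates expects (z, y, x) index order for a volume stored [z, y, x].
    slice_vals = map_coordinates(ct_volume, pts_vox[::-1], order=1, mode="nearest")
    return slice_vals.reshape(rows, cols)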

In the past, Shahidi’s group and others have introduced techniques for building a model of the patient’s anatomy from CT scans taken before the surgery, which is then used to help guide the operation. This approach works well in surgeries of the head, neck, and knees, where the structures are rigid.

But Shahidi says that it fails when looking at soft tissue, such as the liver, intestines, breast, or prostate, where the anatomy can easily move or change. “The anatomical information that we use for guidance at eight in the morning doesn’t apply to a surgery at ten in the morning,” he says. “The dynamic nature of ultrasound, when married to CT, would address that huge limitation.”

Philips worked with the NIH team to complete a small pilot study of 20 patients, testing the technology for radio-frequency ablation and soft-tissue biopsies in the liver, kidney, lungs, and spine. They are now continuing to improve the system in preparation for larger trials. Helen Routh, vice president of Philips Research, says that the workstation is still a few years away from the market.

Stanford’s Shahidi says Philips’ technique is the only one he’s seen “that has a really critical potential for minimally invasive soft tissue visualization.” He adds that the tool could become extremely useful for radiologists, who are increasingly performing minimally invasive procedures in place of surgeons.

Radiologists, Shahidi notes, are much more comfortable than surgeons with using and interpreting imaging technologies, yet they lack the surgeon’s knowledge of navigating the body. This kind of technology, he says, would increase their comfort level and “give the radiologists something that they’ve been missing: how to get from point A to point B.”
