Surgical Robots Get a Sense of Touch

Johns Hopkins researchers have developed a feedback system for surgical robots, which otherwise cannot convey subtle sensations of touch to the surgeon.
December 18, 2006

Robotic surgical systems have become a staple in operating rooms, advancing the field of minimally invasive surgery. These computer-assisted tools help surgeons conduct more-precise in-depth procedures. The robots are often praised for their dexterity, advanced visualization technologies, and mechanical stamina. But there is one important aspect the robots are missing: a sense of touch, also known as haptics.

“It always helps to be able to feel what you are doing, to feel the tissue tension and to feel the force when manipulating a suture,” says Domenico Savatta, chief of minimally invasive and robotic urology surgery at Newark Beth Israel Medical Center. “Haptics would make it easier to learn robotic surgery, operate on things that are very delicate, and be an overall advantage to have in the system.”

Currently, such systems primarily rely on visual cues: magnified 3-D views showing surgeons how the tissue is stretched so they can estimate the amount of force being applied by the robot. The only thing that can be truly felt is resistance in the instruments if they hit an immovable object like bone, or if they are pulling so hard on tissue or a suture that they don’t move naturally.

Researchers at Johns Hopkins are working to overcome this limitation with computer-based haptic feedback. Their goal is to determine the forces arising as the robot interacts with the patient and to use motors on the master robot (the one controlled by the surgeon) to create forces equal to those being applied to the patient, explains Allison Okamura, the head researcher on the project and an associate professor of mechanical engineering at Johns Hopkins.

“We want the surgeon to feel as though they are directly operating on the patient rather than having a robot mediate the interaction,” says Okamura. “We call this idea ‘transparency,’ to make it feel as though the robot isn’t even there.”
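
To make the transparency idea concrete, here is a minimal sketch of one cycle of force reflection in a bilateral teleoperator. Everything in it is a simplifying assumption for illustration (a single axis, a unit force gain, the function name itself); it is not the team’s actual controller.

```python
# Minimal one-axis sketch of force reflection in bilateral teleoperation.
# All names, units, and the trivial control law are illustrative
# assumptions, not the Johns Hopkins or da Vinci control code.

def force_reflection_step(master_pos_m, tool_tissue_force_n, force_gain=1.0):
    """One control cycle: forward the surgeon's motion to the patient-side
    tool, and reflect the tool-tissue force back as motor force on the
    master."""
    slave_target_m = master_pos_m                      # tool tracks the master
    master_force_n = force_gain * tool_tissue_force_n  # master pushes back
    return slave_target_m, master_force_n

# Perfect "transparency" would mean the surgeon feels exactly what the
# tool feels: with force_gain == 1.0, master_force_n == tool_tissue_force_n.
target, feel = force_reflection_step(master_pos_m=0.02, tool_tissue_force_n=0.5)
print(target, feel)  # 0.02 0.5
```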

To develop such technology, Okamura and her team are working with the da Vinci surgical system made by Intuitive Surgical; it’s the only robot approved by the FDA for conducting surgical procedures. The da Vinci is particularly useful in laparoscopic surgical procedures, such as the removal of the gallbladder or prostate. It also makes it possible to perform minimally invasive procedures for general noncardiac surgical procedures inside the chest.

The da Vinci system is teleoperated, meaning the master robot and the patient robot are separate machines, although often they are located in the same room. The surgeon sits at a computer console (which works much like a fancy joystick) to manipulate the master robot while looking at a magnified 3-D view of the surgical site. The surgeon’s movements of the master robot direct the motions of the robotic tools that actually perform the operation on the patient.
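
As a rough illustration of that master-to-tool mapping, here is a sketch assuming a simple motion-scaling factor and a clutch that lets the surgeon reposition the controls without moving the tool; the scale value and function name are hypothetical, not actual da Vinci parameters.

```python
# Sketch of mapping the surgeon's hand motion to patient-side tool motion.
# The 5:1 scale and the clutch flag are illustrative assumptions.

def map_master_to_tool(master_delta_mm, motion_scale=0.2, clutched=False):
    """Translate an increment of hand motion at the console into an
    increment of tool motion at the patient."""
    if clutched:          # clutch engaged: reposition hands, tool stays put
        return 0.0
    return motion_scale * master_delta_mm

print(map_master_to_tool(10.0))                 # 2.0 mm of tool motion
print(map_master_to_tool(10.0, clutched=True))  # 0.0, tool does not move
```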

Okamura’s team uses two different techniques to incorporate haptic feedback: a physical force sensor on the robotic tools determines how much force is being applied, while a mathematical computer model estimates the forces between the patient and the robot. All this information is relayed to the operator as torque applied to the master robot’s joystick.

Okamura is exploring both techniques to determine which is the more reliable and accurate at producing fine force feedback on the operator’s fingers so that he or she can actually feel suture tension and pressure on tissue.
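
A toy comparison of the two feedback paths, assuming a one-dimensional tool and a linear (Hooke’s-law) tissue model; the stiffness value and the function names are illustrative, not the team’s actual sensor or model.

```python
# Toy comparison of the two force-feedback paths described above.
# The constant tissue stiffness is an illustrative assumption; real
# tissue is nonlinear, and the team's model is not published here.

TISSUE_STIFFNESS_N_PER_M = 300.0  # assumed spring constant of the tissue

def force_from_sensor(sensor_reading_n):
    """Path 1: read a physical force sensor mounted on the robotic tool."""
    return sensor_reading_n

def force_from_model(tool_displacement_m):
    """Path 2: estimate the tool-tissue force from motion alone,
    here with Hooke's law F = k * x instead of a sensor."""
    return TISSUE_STIFFNESS_N_PER_M * tool_displacement_m

# Either estimate would then be relayed to the surgeon as torque on the
# master robot's joystick.
print(force_from_sensor(0.45))    # 0.45 N, measured directly
print(force_from_model(0.0015))   # 0.45 N, inferred from 1.5 mm of travel
```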


Ideally, Okamura says, she’d like to figure out the interaction forces without using a force sensor at all, because the sensors have to be sterile and biocompatible. They also have to be small enough and cheap enough for medical applications.

If Okamura gets her way, a complex computer model would handle all the feedback by estimating the forces made by the movement of the robotic tools. “Developing some sort of generic or even surgery-specific interface that gives you many different kinds of tactile or force feedback (like textures on your fingers, fine force feedback on your fingertips, and gross force feedback on your arms) is a really huge challenge,” says Mark Ottensmeyer, lead investigator in tissue-property measurement and modeling and a developer of electromechanical systems for medical/surgical training at the Simulation Group. “You have to have some separate actuator, like a motor, electromechanical, hydraulic, or pneumatic device. Then you have to get a computer to control it all. So it really is a systems type of thing.”

Engineers at the Center for Advanced Surgical and Interventional Technology (CASIT) are developing a haptic feedback system for robot-assisted surgeries as well. Their system works by mounting arrays of pneumatic balloons onto the master robot and attaching force sensors to the patient-side robotic tools. The sensors measure the force being applied to the tissue, and the system translates it into inflation pressure in the balloon actuators on the joystick, where they press against the surgeon’s hand, explains Martin Culjat, the engineering research director at CASIT.
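
A minimal sketch of that sensor-to-balloon mapping, assuming a linear gain and a clamped safe pressure range; all of the numbers and names are illustrative, not CASIT’s actual calibration.

```python
# Sketch of converting a measured tool-tissue force into a balloon
# inflation pressure. Gain and pressure limit are illustrative assumptions.

MAX_PRESSURE_KPA = 20.0  # assumed safe inflation limit for one balloon

def balloon_pressure_kpa(tissue_force_n, gain_kpa_per_n=8.0):
    """Map the force measured at the patient-side tool to the pressure
    of the balloon pressing on the surgeon's fingertip."""
    pressure = gain_kpa_per_n * tissue_force_n
    return min(max(pressure, 0.0), MAX_PRESSURE_KPA)  # clamp to safe range

print(balloon_pressure_kpa(0.5))  # 4.0 kPa for a gentle 0.5 N contact
print(balloon_pressure_kpa(5.0))  # 20.0 kPa, capped at the safety limit
```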

“The addition of tactile information will enable surgeons to ‘feel’ tissue characteristics, appropriately tension sutures, identify pathologic conditions, and will enable expansion of MIS [minimally invasive surgery] to other surgical procedures and simulations,” says Culjat.

However, these technologies still need many years of research before they will be ready for commercial systems. In the interim, Okamura’s team has developed a system that substitutes visual cues for touch, a technique called sensory substitution. As the surgeon ties a suture, dots overlaid on the surgical instruments change color according to the applied force. Because it requires no changes to the control system, this approach is easier to implement and will probably reach the operating room faster than devices that send haptic feedback to the surgeon’s hand, says Okamura.
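
As a rough illustration of that color cue, here is a sketch assuming arbitrary tension thresholds; the cutoffs and the specific colors are hypothetical, not the values used in Okamura’s system.

```python
# Sketch of "sensory substitution": turn suture tension into a dot color
# overlaid on the video. Thresholds and colors are illustrative assumptions.

def suture_dot_color(force_n, low_n=1.0, high_n=2.0):
    """Pick an overlay color for the on-screen dot from the applied force."""
    if force_n < low_n:
        return "green"   # below the assumed ideal tension range
    if force_n <= high_n:
        return "yellow"  # within the assumed ideal range
    return "red"         # excessive tension: risk of damaging the suture

for f in (0.5, 1.5, 3.0):
    print(f, suture_dot_color(f))  # green, yellow, red respectively
```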
