Tactile Sensor as Sensitive as Human Skin
Researchers have created a thin-film tactile sensor that, in some ways, is as sensitive as the human finger. When pressed against a textured object, the film creates a topographical map of the surface by producing both an electrical signal and an optical signal that can be read with a small camera. The spatial resolution of these “maps” is as good as that achieved by human touch.
The sensor was built by Ravi F. Saraf, professor of chemical engineering at the University of Nebraska, who hopes it will be used to improve minimally invasive surgeries in which physicians rely on endoscopes; it could also help robots grip objects by allowing them to “feel” with great sensitivity. Saraf likes to demonstrate the sensor by creating “stress images” of a penny. In the images, Lincoln’s portrait – large ears, heavy brow, and even the folds in his jacket – is clearly visible.
Indeed, a tactile sensor comparable to human skin is the holy grail of robotics, haptics, and sensing research, says Mandayam A. Srinivasan, senior research scientist in MIT’s mechanical engineering department and founder of the Touch Lab. The thin-film sensor does not have the same robustness, flexibility, or ability to sense temperature as the human finger. But it is a big step forward in spatial resolution. “We have all been trying to get high-resolution tactile arrays,” says Srinivasan. “This one is an order of magnitude better.”
Saraf says the sensor has a high enough resolution (40 micrometers horizontally and about 5 micrometers vertically) to “feel” single cells, and therefore could help surgeons find the perimeter of a tumor during surgery. Cancer cells – in particular, breast cancer cells – have mechanical properties that differ from those of normal cells, and should feel “harder” to Saraf’s sensor.
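As a rough, back-of-the-envelope illustration (the one-centimeter patch size is an assumed example, not a figure from the article), the reported 40-micrometer lateral resolution implies tens of thousands of independent sensing points per square centimeter:

```python
# Back-of-the-envelope arithmetic: sensing points implied by the reported
# 40-micrometer lateral resolution. The 1 cm patch size is an assumed example.

PATCH_SIDE_UM = 10_000   # 1 cm, in micrometers (assumed patch size)
LATERAL_RES_UM = 40      # lateral resolution reported for the sensor

points_per_side = PATCH_SIDE_UM // LATERAL_RES_UM
print(f"{points_per_side} points per side, "
      f"{points_per_side ** 2} sensing points per square centimeter")
# -> 250 points per side, 62500 sensing points per square centimeter
```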
The 100-nanometer-thick film is built on an electrode-coated glass backing. On top of the glass is the heart of the sensor: five alternating layers of gold and cadmium sulfide nanoparticles, separated from each other by polymer sheets. The device is topped off with an electrode-coated, flexible plastic sheet. Because the nanoparticles self-assemble, it should be relatively cheap to make large swaths of the film. “It’s just dip and dry,” Saraf says.
When the plastic covering the sensor is pressed, the nanoparticle layers move closer to one another, allowing a measurable electrical current to flow. The sensor also signals with light: when electrons hop between the nanoparticle layers, the cadmium sulfide nanoparticles glow. This light is picked up by a small camera on the other side of the glass. Both the electrical current and the light are proportional to the pressure on the sensor. When recording with the camera, the nanofilm can take about 5 to 10 readings per second; when recording the electrical current, it can take about 20 to 50, says Saraf.
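Here is a minimal sketch of how that optical readout could be turned into a pressure map, assuming the linear light-to-pressure response the article describes. The calibration constant, dark offset, and synthetic frame are hypothetical illustrations, not values from Saraf’s device:

```python
import numpy as np

# Since emitted light is proportional to local pressure, a calibrated camera
# frame maps directly to a pressure map. Both constants are assumed values.
K_PA_PER_COUNT = 0.8   # hypothetical calibration: pascals per intensity count
DARK_LEVEL = 12.0      # hypothetical camera dark offset, in counts

def intensity_to_pressure(frame: np.ndarray) -> np.ndarray:
    """Convert a raw camera frame (intensity counts) to a pressure map (Pa)."""
    corrected = np.clip(frame - DARK_LEVEL, 0.0, None)  # subtract dark offset
    return K_PA_PER_COUNT * corrected                   # assumed linear response

# Example: a synthetic 64x64 frame with a bright (pressed) patch in the center.
frame = np.full((64, 64), DARK_LEVEL)
frame[28:36, 28:36] += 100.0
print(f"peak pressure: {intensity_to_pressure(frame).max():.1f} Pa")  # 80.0 Pa
```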
Saraf has demonstrated the sensor on a glass backing, but he says the film could also be made on flexible polymer sheets. Such flexibility would be necessary to wrap the sensor around a robot’s “finger” or the tube of an endoscope (a camera inserted through a small cut that allows surgeons to operate inside the body). Saraf says such an endoscope could take concurrent visual and tactile images, helping surgeons “feel” which tissues the scope is up against.
Srinivasan says an ideal sensor “essentially has to be what skin is: flexible, with the ability to sense dynamically and with high spatial resolution, and physically robust – it shouldn’t break, it shouldn’t wear out.” Human skin achieves its sensitivity by combining receptors with different strengths. Some cells are good at sensing vibration or movement over time (essential for feeling something slipping from your grip); others accurately sense a point of pressure smaller than a micrometer.
High touch sensitivity “is extremely important” for robotics, says Robert Platt, a robotics engineer at the NASA-Johnson Space Center who works on hands for Robonaut, a humanoid robot. To perform the most basic human tasks – dexterous grasping, walking on two legs, climbing, even crawling – robots “need to be cognizant of and controlling the forces they’re applying,” he says. To pick up a glass of water, for example, a robot needs to dynamically sense the forces exerted by its “hand.” Such a task requires high sensitivity: the robot must feel not only where on its fingers a force is applied, but also the direction in which it acts. That information tells the robot, for example, whether an object is slipping.
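One textbook way a robot can use that kind of directional force data is a Coulomb-friction check: a grasp starts to slip when the tangential force at a contact exceeds the friction cone. The sketch below is a generic illustration of that idea, not Robonaut’s actual controller, and the friction coefficient and forces are assumed values:

```python
import math

MU = 0.5  # assumed coefficient of friction between fingertip and object

def is_slipping(normal_n: float, tangential_x_n: float,
                tangential_y_n: float) -> bool:
    """Coulomb-friction check: True if the contact force lies outside the
    friction cone, i.e. the grasp is starting to slip."""
    tangential_n = math.hypot(tangential_x_n, tangential_y_n)
    return tangential_n > MU * normal_n

# Example: a 4 N grip resisting 2.5 N of tangential load exceeds the cone
# (2.5 > 0.5 * 4.0), so the controller should squeeze harder.
print(is_slipping(normal_n=4.0, tangential_x_n=2.5, tangential_y_n=0.0))  # True
```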
The high spatial resolution of Saraf’s sensor would not be enough to help a robot hold a glass of water, though, because the sensor can’t tell the direction of pressure. Further research will reveal whether nanoparticle layers can sense this kind of tactile information, Saraf says. For now, though, it is “a promising approach,” says NASA’s Platt.