
Robots That Sense Before They Touch

Intel researchers are using electric-field sensors to build pre-touch technology into robots to help them size up objects and people they encounter.
September 17, 2007

At Intel’s research labs, in Seattle, a robotic arm approaches three plastic bottles, two of which are filled with water, one of which is empty. Without touching the bottles, the sensors at the end of the arm scan them, collecting information about their conductive properties. After each bottle has been sensed, the arm returns to the empty bottle and, as programmed, knocks it off the table.

Gentle grabber: The top image is a computer-aided design illustration of a robotic hand enabled with electric-field sensing. The bottom image is the actual gripper. When a conductive object comes near the sensors, the hand detects it, and an algorithm estimates the object’s shape and position.

The demonstration showcases technology, called pre-touch, that is currently under development at Intel. The researchers have incorporated the sensors into a robotic hand as well, allowing mechanical fingers to adjust to the size and shape of an object that they encounter (see video). The goal, explains Josh Smith, senior research scientist at Intel Research Seattle, is to “improve the ability of robots to grasp objects in unstructured human environments.”

Currently, robotic arms and hands routinely grab and hold objects on factory floors, where the uncertainty has been engineered away, Smith says. With pre-touch, a robot can sense the shape and size of unfamiliar objects at close range and react accordingly. Smith hopes that by improving this close-range interaction, robots will become more useful in homes, able to bring an elderly person a glass of water, for example, or pick objects up off the floor before the Roomba vacuums.

The way Smith’s pre-touch sensors work is fairly straightforward. Each sensor consists of simple electrodes that can be made of copper and aluminum foil; in the case of a robotic hand, there is an electrode at the tip of the thumb and of each finger. When the researchers apply an oscillating voltage to the electrode in, say, the thumb, it creates an electric field that in turn induces a current in the finger electrodes. When a conducting object (metal, or anything containing water, such as an apple or a person) comes close to the sensors, it reduces the induced current in the fingers’ electrodes, and the sensors detect this change. Specialized algorithms then process the data and instruct the robotic fingers to move around the object appropriately.
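To make that loop concrete, here is a minimal Python sketch of transmit-and-receive proximity detection. The 1/d² falloff model, the baseline current, the threshold, and every function name are illustrative assumptions for this article, not Intel’s actual code.

```python
# Hypothetical sketch of electric-field pre-touch sensing: an oscillating
# field from a transmit electrode induces a current in receive electrodes,
# and a nearby conductive object shunts field lines away, reducing it.

def induced_current(baseline_amps, distance_m, object_conductive):
    """Model the current a finger electrode picks up from the thumb's
    field. A rough 1/d^2 coupling loss is assumed for illustration."""
    if not object_conductive:
        return baseline_amps  # insulators barely perturb the field
    coupling_loss = min(1.0, 1e-4 / (distance_m ** 2))
    return baseline_amps * (1.0 - coupling_loss)

def object_nearby(readings_amps, baseline_amps, threshold=0.05):
    """Flag a pre-touch event when any electrode's induced current
    drops more than `threshold` (fractionally) below baseline."""
    return any((baseline_amps - r) / baseline_amps > threshold
               for r in readings_amps)

# Simulate a conductive bottle approaching the gripper.
baseline = 1e-6  # amps induced with nothing nearby (assumed value)
for d in (0.10, 0.05, 0.02, 0.01):
    readings = [induced_current(baseline, d, object_conductive=True)]
    print(f"{d*100:4.0f} cm: nearby={object_nearby(readings, baseline)}")
```

Run as-is, the simulated readings stay below threshold until the object is within a couple of centimeters, at which point the detector trips, mirroring the close-range behavior described above.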


The sensors used in the Intel robotic hands are known as electric-field (EF) proximity sensors. While a student at MIT, Smith developed similar EF sensors to determine the position of a person sitting in a car, information critical to making airbags deploy more safely. EF sensors are now built into every Honda car equipped with side airbags.

Much of Smith’s EF sensing research now involves developing algorithms that can make sense of the data, as EF signals tend to be complex, especially when an object or robot is in motion. A single measurement from a stationary sensor and object isn’t hard to interpret, Smith says, but decoding the signals of a moving object or sensor is challenging.

Part of the decoding process includes having a robot sweep over an object and collect EF information from it, Smith explains. The algorithm then compares, in near real time, this data to a series of prerecorded signals that describe various shapes, sizes, and orientations of the object. When the algorithm finds a reasonably certain match, it adjusts the robotic fingers so that they can grasp the object.
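That comparison step can be sketched as a nearest-template search. The Python fragment below is a simplified stand-in under invented data: the signature table, the root-mean-square distance metric, and the certainty threshold are all assumptions, and Intel’s published description says only that prerecorded signals are matched in near real time.

```python
# Illustrative template matching: compare an EF sweep against prerecorded
# signatures and report a grasp target only when the match is reasonably
# certain. All signatures and thresholds here are invented for the example.
import math

TEMPLATES = {
    # (shape, orientation) -> prerecorded EF sweep (normalized units)
    ("bottle", "upright"): [0.9, 0.7, 0.4, 0.7, 0.9],
    ("bottle", "on-side"): [0.5, 0.5, 0.5, 0.5, 0.5],
    ("apple",  "any"):     [0.8, 0.3, 0.1, 0.3, 0.8],
}

def match_sweep(sweep, max_distance=0.15):
    """Return the closest template by root-mean-square distance,
    or None when no template is a reasonably certain match."""
    best_key, best_dist = None, float("inf")
    for key, template in TEMPLATES.items():
        dist = math.sqrt(sum((s - t) ** 2 for s, t in zip(sweep, template))
                         / len(template))
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key if best_dist <= max_distance else None

sweep = [0.85, 0.65, 0.45, 0.7, 0.9]   # simulated pass over an object
print(match_sweep(sweep))               # -> ('bottle', 'upright')
```

A real system would sweep continuously and re-match as the hand moves, but the structure, measure, compare, and commit only above a confidence bar, is the same.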

EF sensing isn’t the only form of sensing that robots use. Often, a machine will use a video camera to detect objects at a long range. And robotic cars, such as those built for the Urban and Grand Challenges, sponsored by the Defense Advanced Research Projects Agency, use laser range finders that shine an infrared beam onto objects and use the reflected light to build maps of their environment. Both options are relatively expensive, and video, in particular, becomes limited at close range as a robot’s hand covers an object.

“One of the major problems in robotics has to do with the ability of a robot to interact and touch and feel and manipulate an object,” says Oussama Khatib, a professor of computer science at Stanford University, in Palo Alto, CA. Khatib says that while Intel’s research looks like a promising approach to close-proximity sensing, it still needs to be integrated more completely into robots. “This is something that is important and significant if we can prove its robustness and its ability to be integrated with robotic systems and human environment in an effective way,” he says. Khatib adds that future proximity-sensing robots will most likely carry a number of sensors measuring different aspects of their environment, which will require algorithms that can integrate all the disparate signals.

Smith agrees that proximity sensing will ultimately rely on numerous sensors. EF sensing has its limits: it can’t see insulating objects such as thin plastic, thin pieces of wood, and paper. (As insulating objects become thicker, they become more perceptible.) Smith and his team are exploring other sensors, such as those that measure the reflection of light. But in many instances, he says, EF sensors have advantages over optical ones: they are less affected by differences in texture, and their data usually contains less random fluctuation, or noise.
