At Intel’s research labs, in Seattle, a robotic arm approaches three plastic bottles, two of which are filled with water, one of which is empty. Without touching the bottles, the sensors at the end of the arm scan them, collecting information about their conductive properties. After each bottle has been sensed, the arm returns to the empty bottle and, as programmed, knocks it off the table.
The demonstration showcases technology, called pre-touch, that is currently under development at Intel. The researchers have incorporated the sensors into a robotic hand as well, allowing mechanical fingers to adjust to the size and shape of an object that they encounter. The goal, explains Josh Smith, senior research scientist at Intel Research Seattle, is to “improve the ability of robots to grasp objects in unstructured human environments.”
Currently, robotic arms and hands routinely grab and hold objects on factory floors, where the uncertainty has been engineered away, Smith says. A robot equipped with pre-touch, however, can sense the shape and size of unfamiliar objects at close range and react accordingly. Smith hopes that by improving this close-range interaction, robots will become more useful in homes, able to bring an elderly person a glass of water, for example, or pick up objects from the floor before the Roomba vacuums.
The way that Smith’s pre-touch sensors work is fairly straightforward. Each sensor consists of simple electrodes that can be made of copper and aluminum foil; in the case of a robotic hand, an electrode sits at the tip of the thumb and of each finger. When the researchers apply an oscillating voltage to the electrode in, say, the thumb, it creates an electric field that in turn induces a current in the electrodes of the fingers. When a conducting object (metal, or anything with water in it, such as an apple or a person) comes close to the sensors, it reduces the induced current in the fingers’ electrodes. The sensors detect this change in the electric field, and specialized algorithms process the data and instruct the robotic fingers to move around the object appropriately.
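The transmit-and-receive idea can be sketched in a few lines of code. This is an illustrative model only, not Intel's implementation: the falloff formula, the coupling constant, and the 15 percent detection threshold are all assumptions chosen to show how a drop in induced current signals a nearby conductor.

```python
# Illustrative model of EF pre-touch sensing: a transmitting electrode
# induces a current in a receiving electrode, and a nearby conductor
# reduces that current, more so the closer it gets.

def induced_current(baseline, distance_mm, coupling=5.0):
    """Current induced at a finger electrode (arbitrary units).

    A nearby conductor shunts part of the transmitted field away,
    reducing the induced current as the distance shrinks.
    The 1/(coupling + distance) falloff is a made-up stand-in.
    """
    return baseline * (1.0 - coupling / (coupling + distance_mm))

def object_nearby(reading, baseline, threshold=0.15):
    """Flag an object when the current drops by more than `threshold` (15%)."""
    return (baseline - reading) / baseline > threshold

baseline = 100.0  # induced current with nothing near the hand

far = induced_current(baseline, distance_mm=200.0)   # ~97.6: small dip
near = induced_current(baseline, distance_mm=5.0)    # 50.0: large dip

print(object_nearby(far, baseline))   # no detection at long range
print(object_nearby(near, baseline))  # detection at close range
```

With this toy model, a distant object barely perturbs the current (a ~2 percent drop), while an object a few millimeters away cuts it in half, which is the kind of unambiguous close-range signal that cameras struggle to provide.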
The sensors used in the Intel robotic hands are known as electric-field (EF) proximity sensors. While Smith was a student at MIT, he developed EF sensors similar to those in his robots to determine the position of a person sitting in a car, a piece of information critical to making airbags deploy more safely. EF sensors have since been incorporated into all Honda cars equipped with side airbags.
Much of Smith’s EF sensing research now involves developing algorithms that can make sense of the data, as EF signals tend to be complex, especially when an object or robot is in motion. A single measurement taken while both the sensor and the object are stationary isn’t very difficult to understand, says Smith, but decoding the signals of a moving object or sensor is far more challenging.
Part of the decoding process includes having a robot sweep over an object and collect EF information from it, Smith explains. The algorithm then compares, in near real time, this data to a series of prerecorded signals that describe various shapes, sizes, and orientations of the object. When the algorithm finds a reasonably certain match, it adjusts the robotic fingers so that they can grasp the object.
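The comparison step described above amounts to template matching: pick the prerecorded profile closest to the swept signal, and only commit to a grasp when the match is good enough. The sketch below is a hypothetical illustration of that idea; the profile labels, readings, and mean-squared-error tolerance are invented for the example, not taken from Intel's system.

```python
# Hedged sketch of matching a swept EF signal against prerecorded
# profiles for different object shapes and orientations.

def match_profile(signal, profiles, max_error=0.01):
    """Return the label of the closest prerecorded profile, or None
    when no profile matches within `max_error` (mean squared error),
    so the robot doesn't grasp on an uncertain match."""
    best_label, best_err = None, float("inf")
    for label, template in profiles.items():
        err = sum((s - t) ** 2 for s, t in zip(signal, template)) / len(signal)
        if err < best_err:
            best_label, best_err = label, err
    return best_label if best_err <= max_error else None

# Invented reference sweeps for two orientations of the same bottle.
profiles = {
    "bottle_upright": [0.9, 0.7, 0.7, 0.9],
    "bottle_on_side": [0.6, 0.9, 0.9, 0.6],
}

sweep = [0.85, 0.72, 0.68, 0.92]       # readings from one pass over the object
print(match_profile(sweep, profiles))  # → bottle_upright
```

The `max_error` cutoff captures the "reasonably certain match" requirement: if the sweep resembles none of the stored profiles closely enough, the function returns `None` rather than guessing a grasp configuration.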
EF sensing isn’t the only form of sensing that robots use. Often, a machine will use a video camera to detect objects at long range. And robotic cars, such as those built for the Urban and Grand Challenges sponsored by the Defense Advanced Research Projects Agency, use laser range finders that shine an infrared beam onto objects and use the reflected light to build maps of their environment. Both options are relatively expensive, and video in particular becomes limited at close range, as a robot’s hand blocks the view of the object it is grasping.
“One of the major problems in robotics has to do with the ability of a robot to interact and touch and feel and manipulate an object,” says Oussama Khatib, a professor of computer science at Stanford, in Palo Alto, CA. Khatib says that while Intel’s research looks like a promising approach to close-proximity sensing, it still needs to be integrated more completely in robots. “This is something that is important and significant if we can prove its robustness and its ability to be integrated with robotic systems and human environment in an effective way,” he says. Khatib adds that future proximity-sensing robots will most likely have a number of sensors that measure different aspects of their environment, which will require algorithms that can integrate all the disparate signals.
Smith agrees that ultimately, proximity sensing will rely on numerous sensors. EF sensing has its limits: it can’t see insulating objects such as thin plastic, thin pieces of wood, and paper. (As insulating objects become thicker, they become more perceivable.) Smith and his team are exploring other sensors, such as those that measure the reflection of light. But in many instances, he says, EF sensors have advantages over optical sensors: they are less affected by differences in texture, and their data usually contains less random fluctuation, or noise.