Industrial robots can speed up many manufacturing tasks, but typically they’ve been isolated from people for safety reasons. Making robots safer and capable of understanding basic linguistic and behavioral cues has been a big challenge. Here are some projects that address these issues.
Cornell researcher Ashutosh Saxena wants people to be able to use casual language to give robots instructions. At the university’s Robot Learning Lab, Saxena is programming robots to perceive their environment via three-dimensional scanning technology and to understand basic commands. The researchers give the example of telling a robot to cook noodles. Normally, that would require a rigid set of instructions covering everything from where the stove is to how to turn it on; if one detail is missing, the robot is unable to carry out the task. With Saxena’s technology, the robot can understand slight variations of the same command, like “take the pot” or “carry the pot,” and use visual cues from its surroundings to trace a path to the stove or sink. Details of the research are outlined in two papers: “Tell Me Dave: Context-Sensitive Grounding of Natural Language to Mobile Manipulation Instructions” and “Synthesizing Manipulation Sequences for Under-Specified Tasks Using Unrolled Markov Random Fields.”
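To make the idea of handling command variations concrete, here is a deliberately simplified sketch, not the Tell Me Dave system itself, of grounding paraphrased commands to one canonical action. The synonym table and function names are hypothetical; the actual research learns such mappings from data rather than hard-coding them.

```python
# Toy illustration of command grounding: map paraphrases like
# "take the pot" and "carry the pot" to one canonical action.
# The synonym table below is a hand-written stand-in for what a
# real system would learn from training data.

VERB_SYNONYMS = {
    "grasp": {"take", "carry", "grab", "pick up", "hold"},
    "place": {"put", "set", "place"},
}

def ground_command(command):
    """Map a free-form command to a (canonical_action, object) pair."""
    words = command.lower().split()
    for canonical, synonyms in VERB_SYNONYMS.items():
        for verb in synonyms:
            verb_words = verb.split()
            if words[:len(verb_words)] == verb_words:
                obj = " ".join(words[len(verb_words):])
                obj = obj.removeprefix("the ").strip()
                return canonical, obj
    return None  # command not understood

print(ground_command("take the pot"))   # ('grasp', 'pot')
print(ground_command("carry the pot"))  # ('grasp', 'pot')
```

Both paraphrases resolve to the same canonical action, which is the property that lets a planner accept "slight variations of the same command" without a separate rule for each wording.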
One task that’s ideal for robots at work or at home is passing objects to their human counterparts. In some situations, a robot might also need to tell a person where to put an object after this exchange. But how could the machine provide this information in a noisy room, or when the person is already having another conversation? One way is for the robot to indicate where an object should go by turning its eyes and gazing in the proper direction. To account for the fact that people receiving an object would likely be looking down at the robot’s hand rather than directly at the machine, researchers at Yale and Carnegie Mellon programmed a delay into the handover process, so a person looks at the robot’s face for the cue about placement before receiving the object. The researchers explain this concept in a paper presented in March at the ACM/IEEE International Conference on Human-Robot Interaction in Germany.
As robots and humans start working together in more and more situations, some researchers are focusing on the psychological effects of increased automation. MIT researchers hypothesized that humans would be happiest in these interactions when they retained partial control over scheduling the work, even though robots can schedule it much more quickly using algorithms. As it turned out, the researchers were wrong: the people on the team were happier relinquishing control of the scheduling to the robots, as long as that made the team more efficient. The research was presented in July at the Robotics: Science and Systems conference in Berkeley, California.
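To give a sense of why algorithmic scheduling is so much faster than hand-assigning work, here is a minimal sketch of a classic greedy makespan heuristic. This is a generic illustration under assumed task names and durations, not the MIT team’s actual scheduler.

```python
import heapq

def greedy_schedule(task_durations, workers):
    """Assign each task to whichever worker will be free soonest
    (longest-task-first greedy heuristic; illustrative only)."""
    # Min-heap of (time_when_free, worker_name)
    free_at = [(0.0, w) for w in workers]
    heapq.heapify(free_at)
    assignment = {}
    # Placing longer tasks first tends to balance the load better.
    for task, duration in sorted(task_durations.items(),
                                 key=lambda kv: -kv[1]):
        t, worker = heapq.heappop(free_at)
        assignment[task] = worker
        heapq.heappush(free_at, (t + duration, worker))
    makespan = max(t for t, _ in free_at)  # total time until all work done
    return assignment, makespan

# Hypothetical shop-floor tasks with durations in minutes.
tasks = {"drill": 4.0, "fasten": 3.0, "inspect": 2.0, "fetch": 1.0}
assignment, makespan = greedy_schedule(tasks, ["robot", "human"])
print(makespan)  # 5.0
```

A computer evaluates assignments like this in microseconds, which is the efficiency gain that, according to the study, made people content to hand over scheduling control.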
Even if robots can complete some tasks more quickly than humans, there are still many improvements to be made in how they process language and sense movement. Much work remains before people and robots can efficiently collaborate, with each doing what they are best at.
Do you have a big question? Send suggestions to email@example.com.