Industrial robots can speed up many manufacturing tasks, but typically they’ve been isolated from people for safety reasons. Making robots safer and capable of understanding basic linguistic and behavioral cues has been a big challenge. Here are some projects that address these issues.
Cornell researcher Ashutosh Saxena wants people to be able to use casual language to give robots instructions. At the university’s Robot Learning Lab, Saxena is programming robots to perceive their environment via three-dimensional scanning technology and understand basic commands. Researchers give the example of telling a robot to cook noodles. Normally, that would require a rigid set of instructions covering everything from where the stove is to how to turn it on. If one detail is missing, the robot would be unable to carry out the task. With Saxena’s technology, the robot could understand slight variations of the same command, like “take the pot” or “carry the pot,” and use visual cues from its surroundings to trace a path to the stove or sink. Details of the research are outlined in two papers: “Tell Me Dave: Context-Sensitive Grounding of Natural Language to Mobile Manipulation Instructions” and “Synthesizing Manipulation Sequences for Under-Specified Tasks Using Unrolled Markov Random Fields.”
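The idea of mapping varied phrasings of a command onto one canonical robot action can be sketched in a few lines. This toy example is an assumption for illustration, not the Cornell team’s model: it uses a hand-written synonym map and a fixed set of objects that a 3-D scan might have found in the scene.

```python
# Toy sketch (NOT the Tell Me Dave system): ground casual phrasings of a
# command to a (canonical_action, object) pair.

# Hypothetical synonym groups: each canonical action covers several verbs.
ACTION_SYNONYMS = {
    "grasp": {"take", "carry", "grab", "get"},
    "place": {"put", "place", "set", "leave"},
    "heat": {"heat", "boil", "cook", "warm"},
}

# Objects the robot's 3-D scan found in the scene (assumed input).
SCENE_OBJECTS = {"pot", "stove", "sink", "noodles"}


def ground_command(utterance: str):
    """Map a casual utterance to (canonical_action, object), or None."""
    words = utterance.lower().split()
    action = next(
        (canon for canon, syns in ACTION_SYNONYMS.items()
         if any(w in syns for w in words)),
        None,
    )
    obj = next((w for w in words if w in SCENE_OBJECTS), None)
    if action and obj:
        return action, obj
    return None


print(ground_command("take the pot"))   # ('grasp', 'pot')
print(ground_command("carry the pot"))  # ('grasp', 'pot')
```

Both phrasings resolve to the same canonical action, which is the behavior the article describes; the actual research grounds language probabilistically against the robot’s perceived environment rather than with a lookup table.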
One task that’s ideal for robots at work or at home is passing objects to their human counterparts. In some situations, a robot might also need to tell a person where to put an object after this exchange. But how could the machine provide this information in a noisy room or when the person is already having another conversation? One way is for a robot to indicate where an object should go by turning its eyes and gazing in the proper direction. To account for the fact that people would likely be looking down at the robot’s hand when receiving an object rather than directly at the machine, researchers at Yale and Carnegie Mellon programmed a delay into the handover process, so a person looks at the robot’s face for the cue about placement before receiving the object. The researchers explain this concept in a paper presented in March at the ACM/IEEE International Conference on Human-Robot Interaction in Germany.
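The sequencing the researchers describe, cue first, release second, can be sketched as a simple handover routine. This is a minimal illustration, not the Yale/Carnegie Mellon implementation; the function name, the `gaze_delay` parameter, and the injectable `log` callback are all assumptions.

```python
# Minimal sketch (an assumption, not the researchers' code) of a handover
# where the robot gazes at the target location, waits so the person can
# read the cue, and only then releases the object.
import time


def handover(target_location: str, gaze_delay: float = 1.0, log=print):
    """Gaze toward the target, pause, then release the object."""
    log(f"gaze: looking toward the {target_location}")
    # The programmed delay from the paper: the person sees the robot's
    # face (and its gaze cue) before the object changes hands.
    time.sleep(gaze_delay)
    log("hand: releasing object")


handover("shelf", gaze_delay=0.0)
```

The key design point is simply the ordering: the nonverbal cue precedes the release, so the placement information arrives even in a noisy room.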
As robots and humans start working together in more and more situations, some researchers are focused on the psychological effects of increased automation. MIT researchers hypothesized that humans would be happiest when they retained partial control over scheduling the work during these interactions, even though robots can schedule work much more quickly using algorithms. As it turned out, the researchers were wrong: the people on the team were happier when they relinquished control of the scheduling tasks to the robots, as long as it meant that the team could be efficient. The research was presented in July at the Robotics: Science and Systems conference in Berkeley, California.
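The kind of algorithmic scheduling the robots handled can be illustrated with a toy greedy scheduler. This is an assumption for illustration only, not the MIT team’s algorithm: it assigns each task to whichever agent frees up first, taking longer tasks first to keep the overall finish time down.

```python
# Toy greedy scheduler (NOT the MIT team's method): assign each task to
# whichever agent is free earliest.
import heapq


def schedule(tasks, agents):
    """tasks: {name: duration}; agents: list of names.
    Returns a list of (agent, task, start, end) assignments."""
    # Min-heap of (time_free, agent): the earliest-free agent is popped first.
    free_at = [(0.0, a) for a in agents]
    heapq.heapify(free_at)
    plan = []
    # Longest-processing-time-first heuristic.
    for name, duration in sorted(tasks.items(), key=lambda t: -t[1]):
        t, agent = heapq.heappop(free_at)
        plan.append((agent, name, t, t + duration))
        heapq.heappush(free_at, (t + duration, agent))
    return plan


plan = schedule({"fetch": 3, "weld": 5, "inspect": 2}, ["robot", "human"])
for assignment in plan:
    print(assignment)
```

Even a heuristic this simple produces a balanced plan instantly, which hints at why the human team members were content to hand the job over when efficiency was the goal.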
Even if robots can complete some tasks more quickly than humans, there are still many improvements to be made in how they process language and sense movement. Much work remains before people and robots can efficiently collaborate, with each doing what they are best at.
Do you have a big question? Send suggestions to firstname.lastname@example.org.