Robots aren’t generally meant to get confused, but modeling confusion might help make them more useful workmates.
As part of an effort to explore ways for humans and robots to work together more naturally and effectively, a team of researchers at Brown University has developed a robot that measures its own confusion and then asks for help if it feels it needs it.
The work is important because confusion arises so easily in everyday interactions, so making exchanges with a robot feel natural means finding ways to cope with that ambiguity. The robot takes a command, gauges how certain it is of what's being asked, and requests help when that certainty is too low.
Previous work by the Brown University team allowed a robot to read both speech and hand gesture cues to infer what’s being asked of it.
The researchers have shown that this is more effective than voice commands alone. If, however, a human asks for a wrench but there are two wrenches near each other, the robot will now decide if the situation is too uncertain and ask for further information, pointing to one and saying, “This one?”
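The decision described above — act when one interpretation clearly wins, ask "This one?" when two candidates are too close to call — can be sketched as a simple margin test. This is a minimal illustration, not the Brown team's actual system; the scores, the fusion of speech and gesture cues, and the threshold are all assumptions for the example.

```python
# Hypothetical sketch of confidence-gated clarification: the robot fuses
# speech + gesture evidence into a score per candidate object, then asks
# for confirmation when the top two candidates are nearly tied.

def interpret(scores, margin=0.2):
    """scores: dict mapping object id -> fused evidence score (any positive
    scale). Returns ('act', obj) when the best candidate clearly wins, or
    ('ask', obj) to point at the best guess and ask "This one?".
    `margin` is an assumed tunable confidence gap, not a published value."""
    total = sum(scores.values())
    # Normalize scores into probabilities over candidates.
    probs = {obj: s / total for obj, s in scores.items()}
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    best = ranked[0]
    second = ranked[1] if len(ranked) > 1 else (None, 0.0)
    # Act only if the winner beats the runner-up by a clear margin.
    if best[1] - second[1] >= margin:
        return ("act", best[0])
    return ("ask", best[0])
```

With two wrenches side by side the fused scores come out nearly equal, so the margin test fails and the robot asks; with a single unambiguous referent, it acts directly.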
This is the latest step toward mimicking the way two people hold a conversation, says Stefanie Tellex, an assistant professor at Brown University and the lead researcher on the project.
“That interactive, collaborative process is what allows humans to be so effective when they’re talking to each other and making plans,” Tellex says.
Indeed, Tellex says clarifying misunderstandings may be especially important for human-robot interactions. “We realized that robots were kind of limited because they can’t see as well as a person; they can’t hear as well as a person; they can’t understand as well as a person,” she says. “[But] despite encountering many more errors in understanding, they were losing out on this opportunity to try to make things better using this feedback process.”
The researchers tested the robot with volunteers who were asked to get it to perform simple tasks, such as picking up a wrench, but were given no specific instructions on how to operate it.
This worked so well that the testers often assumed the robot was more capable than it really was, believing perhaps that it was tracking their gaze or had more sophisticated language skills.
Jim Boerkoel, an assistant professor at Harvey Mudd College in Claremont, California, who specializes in human-robot interaction, says misunderstanding can often lead to frustration.
“Not only is asking for help critical for the short-term efficiency of human-robot tasks, as we've seen in this application, but it can also have long-term benefits by engendering trust and transparency in the robotic system,” Boerkoel says. “For instance, asking for help communicates to the human the robot’s intent and an understanding [of] its own limitations.”