
Can Humans Benefit from Robots in the Workplace?

Researchers are working to make robots better at understanding commands and collaborating.
October 15, 2014

Industrial robots can speed up many manufacturing tasks, but typically they’ve been isolated from people for safety reasons. Making robots safer and capable of understanding basic linguistic and behavioral cues has been a big challenge. Here are some projects that address these issues.

Now that collaborative robots like ABB's YuMi have been designed to work side by side with humans, researchers are focused on making these interactions more natural and productive.

Human Language

Cornell researcher Ashutosh Saxena wants people to be able to give robots instructions in casual language. At the university’s Robot Learning Lab, Saxena is programming robots to perceive their environment through three-dimensional scanning and to understand basic commands. The researchers give the example of telling a robot to cook noodles. Normally, that would require a rigid set of instructions covering everything from where the stove is to how to turn it on; if one detail were missing, the robot could not carry out the task. With Saxena’s technology, the robot can understand slight variations of the same command, such as “take the pot” or “carry the pot,” and use visual cues from its surroundings to trace a path to the stove or sink. Details of the research are outlined in two papers: “Tell Me Dave: Context-Sensitive Grounding of Natural Language to Mobile Manipulation Instructions” and “Synthesizing Manipulation Sequences for Under-Specified Tasks Using Unrolled Markov Random Fields.”
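The core idea — several casual phrasings mapping to one underlying manipulation action — can be sketched roughly as follows. The action names, verb lists, and parsing rules here are illustrative assumptions, not the method from the papers, which uses learned probabilistic models rather than a hand-written lookup:

```python
# Hypothetical sketch: map casual verb phrasings to canonical actions.
# (The real system learns these groundings; this table is made up.)
CANONICAL_ACTIONS = {
    "grasp": {"take", "grab", "pick up", "carry", "get"},
    "place": {"put", "place", "set down", "leave"},
    "turn_on": {"turn on", "switch on", "start"},
}

def ground(command):
    """Map a command like 'carry the pot' to an (action, object) pair."""
    text = command.lower()
    for action, phrasings in CANONICAL_ACTIONS.items():
        # Try longer phrasings first so "pick up" wins over "pick".
        for phrase in sorted(phrasings, key=len, reverse=True):
            if text.startswith(phrase + " "):
                obj = text[len(phrase):].strip()
                if obj.startswith("the "):
                    obj = obj[len("the "):]
                return action, obj
    return None  # under-specified or unknown command

print(ground("take the pot"))   # ('grasp', 'pot')
print(ground("carry the pot"))  # ('grasp', 'pot')
```

Both phrasings land on the same `grasp` action — the toy version of treating “take the pot” and “carry the pot” as the same instruction.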

Controlling Collaboration

One task that’s ideal for robots at work or at home is passing objects to their human counterparts. In some situations, a robot might also need to tell a person where to put an object after this exchange. But how could the machine provide this information in a noisy room, or when the person is already having another conversation? One way is for a robot to indicate where an object should go by turning its eyes and gazing in the proper direction. To account for the fact that people would likely be looking down at the robot’s hand when receiving an object rather than directly at the machine, researchers at Yale and Carnegie Mellon programmed a delay into the handover process, so a person looks at the robot’s face for the cue about placement before receiving the object. The researchers explain this concept in a paper presented in March at the ACM/IEEE International Conference on Human-Robot Interaction in Germany.
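The delayed handover amounts to a small sequence of steps with a deliberate pause before release. The steps, state names, and timing below are assumptions for illustration, not details from the paper:

```python
import time

def handover(gaze_direction, delay_s=1.0):
    """Return the ordered steps of a gaze-cued handover.

    Hypothetical sketch: the robot signals placement with its eyes,
    then pauses before releasing so the person has time to look up
    from the robot's hand to its face and catch the gaze cue.
    """
    steps = []
    steps.append("extend arm with object")
    steps.append(f"turn eyes toward {gaze_direction}")  # placement cue
    time.sleep(delay_s)  # the programmed delay from the study's idea
    steps.append("release object")
    return steps
```

The point of the sketch is simply the ordering: the gaze cue and the pause come before the object changes hands.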

Tailoring Teamwork

As robots and humans start working together in more and more situations, some researchers are studying the psychological effects of increased automation. MIT researchers hypothesized that humans would be happiest when they retained partial control over scheduling work during these interactions, even though robots can schedule the work much more quickly using algorithms. As it turned out, the researchers were wrong: people on the team were happier when they relinquished control of the scheduling to the robots, as long as it meant the team could be efficient. The research was presented in July at the Robotics: Science and Systems conference in Berkeley, California.
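The kind of algorithmic scheduling the study let robots handle can be sketched with a simple greedy allocator: assign each task to whichever teammate, human or robot, becomes free first. The tasks, durations, and greedy rule here are hypothetical; the study’s actual algorithm is not described in this article:

```python
import heapq

def schedule(tasks, workers):
    """Greedily assign tasks (name -> duration) to the earliest-free worker.

    Illustrative sketch only: longest tasks are placed first, and a
    min-heap tracks when each worker next becomes available.
    """
    free_at = [(0.0, w) for w in workers]
    heapq.heapify(free_at)
    assignment = {}
    for task, duration in sorted(tasks.items(), key=lambda kv: -kv[1]):
        t, worker = heapq.heappop(free_at)        # earliest-free worker
        assignment[task] = worker
        heapq.heappush(free_at, (t + duration, worker))
    return assignment

plan = schedule({"drill": 3.0, "fetch": 2.0, "inspect": 1.0},
                ["human", "robot"])
print(plan)
```

Even this toy version hints at why people ceded control: the machine can weigh all the durations at once and keep both teammates busy, which is exactly the efficiency the study’s participants valued.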

The takeaway:

Even if robots can complete some tasks more quickly than humans, there are still many improvements to be made in how they process language and sense movement. Much work remains before people and robots can efficiently collaborate, with each doing what they are best at.

Do you have a big question? Send suggestions to questionoftheweek@technologyreview.com.
