
How You Could Help Your Future Robot Coworker

Humans and robots will probably find some interesting symbiotic working relationships.
September 26, 2012

It’s easy to assume that super-smart workplace robots will simply roll up and steal jobs from unfortunate human workers in the coming decades (just see the comments in my story from last week, “This Robot Could Revolutionize Manufacturing”). The reality is likely to be more complicated, and more interesting. Certainly in the near term, humans and robots will probably find some curious new collaborative working relationships.

Baxter, from Rethink Robotics, and similar manufacturing robots in the works are just the start of this. While conventional industrial robots follow preprogrammed commands closely and operate behind safety barriers, Baxter needs constant feedback from both its environment and nearby workers. And while there are some things that Baxter can do much better than a person, like grabbing items from a conveyor belt for days on end, there are plenty of things it can’t do without some gentle guidance, like figure out what to do when a production run changes.

A paper (PDF) published recently by the robotics startup Willow Garage Labs hints at an even more collaborative future. It describes experiments involving a tele-operated robot arm that show how a balance of autonomy and human control could be the best way to carry out certain jobs. This is an area of research known as “human-in-the-loop control.”

The researchers tried four different modes of robot teleoperation for some simple grasping tasks (shown above), each with a slightly different degree of robot autonomy. The first mode gave the human operator complete remote control of the arm; the second asked the controller to specify waypoints for the task; the third saw the human controller specify just the final grasping position; and the fourth and final mode involved simply indicating the general area for grasping and letting the robot do the rest.
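To make the trade-off between those four modes concrete, here is a minimal, purely hypothetical sketch of how a human-in-the-loop controller might accept more or less detail from an operator and let the robot fill in the rest. None of the names, types, or helper functions below come from the Willow Garage paper; the stubs simply print what a real arm would do.

    # Purely illustrative sketch of human-in-the-loop control modes.
    # All names here (ControlMode, Command, the stub motion functions)
    # are hypothetical, not from the Willow Garage paper.

    from dataclasses import dataclass
    from enum import Enum, auto

    class ControlMode(Enum):
        DIRECT = auto()       # operator drives the arm step by step
        WAYPOINTS = auto()    # operator supplies intermediate waypoints
        FINAL_GRASP = auto()  # operator specifies only the final grasp pose
        AREA_ONLY = auto()    # operator indicates a region; robot does the rest

    @dataclass
    class Command:
        mode: ControlMode
        payload: list         # joint targets, waypoints, a pose, or a region

    # Stub motion/planning primitives so the sketch runs on its own.
    def move_to(pose):
        print(f"moving to {pose}")

    def plan_path_to(pose):
        return [f"approach-{pose}", pose]

    def pick_object_in(region):
        return f"object-in-{region}"

    def close_gripper():
        print("closing gripper")

    def execute(cmd: Command):
        """Carry out an operator command; the robot fills in whatever
        the human left unspecified."""
        if cmd.mode is ControlMode.DIRECT:
            for pose in cmd.payload:              # human supplies every step
                move_to(pose)
        elif cmd.mode is ControlMode.WAYPOINTS:
            for waypoint in cmd.payload:          # robot links human waypoints
                for pose in plan_path_to(waypoint):
                    move_to(pose)
        elif cmd.mode is ControlMode.FINAL_GRASP:
            for pose in plan_path_to(cmd.payload[0]):  # robot plans the approach
                move_to(pose)
            close_gripper()
        else:  # ControlMode.AREA_ONLY
            target = pick_object_in(cmd.payload[0])    # robot chooses the object too
            for pose in plan_path_to(target):
                move_to(pose)
            close_gripper()

    # Example: the most autonomous mode -- just point at a shelf.
    execute(Command(ControlMode.AREA_ONLY, ["left-shelf"]))

The design point is simply that each successive mode shifts more of the burden, from steering every joint to merely gesturing at a region, onto the robot’s own planning.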

The Willow Garage researchers, who included Leila Takayama, whom we named a TR35 winner this year for her work on human-robot interaction, found the last two modes to be the most efficient and the least likely to result in mistakes. This is because it’s difficult to precisely control an arm that has more joints and degrees of freedom than your own.

A completely autonomous robot could, of course, pick up an object, but it would require a lot of intelligence to determine, in response to commands or some general set of objectives, which object it should look for in the first place. You can imagine how this scenario might work in a manufacturing setting, for example. A remote operator could help guide various robots toward particular goals but then let them take care of the details.

Willow Garage is also exploring home-help robots that might work the same way. And, today, a Willow Garage spinout, Suitable Technologies, launched a very simple workplace telepresence system (see “Beam Yourself to Work in a Remote-controlled Body”).

The video below shows a remarkable teleoperated humanoid robot developed by researchers in Japan and demonstrated at this year’s SIGGRAPH conference in Los Angeles. It might look cool, but if the Willow Garage research is any indication, future human-robot relationships are likely to be a fair bit more complicated.
