MIT Technology Review

A Robot With Your Face

A telepresence robot due on sale next year hopes to do a better job of being you than any previous bot.
With the 21st century well under way and no signs of teleportation becoming possible, telepresence robots are our best chance at instantly being somewhere else. The pitch goes that you can jump onto (into?) your desktop computer and instantly move around, see and speak in a distant location.
Two such robots are already on the market–from Anybots and VGo–and now a third is set to join them. Robot research lab Willow Garage has spun off an independent company, Suitable Technologies, to develop its prototype telepresence robot, Texai, into a product.
It’s slated to go on sale next year and sets out to solve a major problem with the two robots already on the market: while a person inhabiting an Anybot or VGo robot gets a good(ish) view of their prosthetic body’s surroundings and the people around it, those people don’t get a good view of the operator’s face.
Anybots’ robot displays only a still photo of the current user, while VGo’s machines have a very small, low-resolution screen about four feet off the ground. “Those are really spy bots,” Willow Garage CEO Steve Cousins told me when I visited the lab yesterday, pointing out that the people you’re interacting with can’t see you very well. Texai’s big selling point over the competition will be that a user’s face is clearly visible to the people their robot double interacts with, enabling true two-way communication, said Cousins.
It certainly seems plausible that this would make interacting via a robot a smoother experience. In my experience using a VGo to work in Technology Review’s Massachusetts HQ from California, these machines struggle to meet the high expectations placed on something trying to fill the role of a person, as I noted in this review:

“My robot body could do some of the basic things I would do in person: move around the office to talk and listen, see and be seen. But it couldn’t do enough. In a group conversation, I would clumsily spin around attempting to take in the voices and body language outside my narrow range of vision. When I walked alongside people, I sometimes blundered into furniture, or neglected to turn when they did. Coworkers were tolerant at first, but they got frustrated with my mistakes.”

Perhaps if my distant colleagues had been able to see my facial expressions clearly, the experience would have been easier for all. But filling a big screen requires a big picture, which means more bandwidth. Anybots’ founder told me the company decided not to add operator video to its robots because Internet connections just aren’t reliable enough to flawlessly send high-quality video in two directions as well as a robot’s commands.
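To give a rough sense of why picture size matters, here is a back-of-envelope sketch of how raw (uncompressed) video bitrate grows with resolution. The figures are my own illustrative assumptions, not numbers from Suitable Technologies, Anybots, or VGo, and real codecs compress this raw rate dramatically:

```python
def raw_bitrate_mbps(width, height, fps=30, bits_per_pixel=24):
    """Raw, pre-compression video bitrate in megabits per second.

    Illustrative only: assumes 24-bit color at 30 frames per second;
    codecs like H.264 typically shrink this by two orders of magnitude.
    """
    return width * height * fps * bits_per_pixel / 1e6

# A small low-resolution screen vs. a large near-life-size face display
small = raw_bitrate_mbps(320, 240)    # ~55 Mbps raw
large = raw_bitrate_mbps(1280, 720)   # ~664 Mbps raw
print(f"small screen: {small:.1f} Mbps, large screen: {large:.1f} Mbps")
print(f"the larger picture needs {large / small:.0f}x the data")
```

Even after aggressive compression, that twelvefold gap persists, and a telepresence robot has to move that video in both directions while also carrying its control commands.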
Willow Garage has tested Texai extensively; one of its engineers has been commuting to the office via the robot for nearly a year now. But I’m guessing the lab’s broadband connection is better than what most homes and businesses have. As I found out, connection woes are much more painful when they afflict your (robot) body, not just your Skype call.