
With the 21st century well under way and no signs of teleportation becoming possible, telepresence robots are our best chance at instantly being somewhere else. The pitch goes that you can jump onto (into?) your desktop computer and instantly move around, see and speak in a distant location.

Two such robots, from Anybots and VGo, are already on the market, and now a third is set to join them. Robot research lab Willow Garage has spun off an independent company, Suitable Technologies, to develop its prototype telepresence robot, Texai, into a product.

It’s slated to go on sale next year, and it sets out to solve a major problem with the two robots already on the market: while a person inhabiting an Anybot or VGo robot gets a good(ish) view of their prosthetic body’s surroundings and the people around it, those people don’t get a good view of the operator’s face.

Anybots’ robot displays only a still photo of the current user, while VGo’s machines have a very small, low-resolution screen about four feet off the ground. “Those are really spy bots,” Willow Garage CEO Steve Cousins told me when I visited the lab yesterday, pointing out that the people you’re interacting with can’t see you very well. Texai’s big selling point over the competition will be that a user’s face is clearly visible to the people their robot double interacts with, enabling true two-way communication, said Cousins.

It certainly seems plausible that this would make interacting via a robot a smoother experience. In my experience using a VGo to work at Technology Review’s Massachusetts headquarters from California, these machines struggle to meet the high expectations placed on something trying to fill the role of a person, as I noted in my review:

“My robot body could do some of the basic things I would do in person: move around the office to talk and listen, see and be seen. But it couldn’t do enough. In a group conversation, I would clumsily spin around attempting to take in the voices and body language outside my narrow range of vision. When I walked alongside people, I sometimes blundered into furniture, or neglected to turn when they did. Coworkers were tolerant at first, but they got frustrated with my mistakes.”

Perhaps if my distant colleagues had been able to see my facial expressions clearly, the experience would have been easier for all. (Read about my experience.) But filling a big screen requires a big picture, which means more bandwidth. Anybots’ founder told me the company decided not to add operator video to its robots because Internet connections just aren’t reliable enough to flawlessly send high-quality video in two directions along with a robot’s commands.

Willow Garage has tested Texai extensively; one of its engineers has been commuting to the office via the robot for nearly a year now. But I’m guessing the lab’s broadband connection is better than what most homes and businesses have. As I found out, connection woes are much more painful when they afflict your (robot) body, not just your Skype call.


