A telepresence robot due to go on sale next year aims to do a better job of being you than any previous bot.
June 23, 2011
With the 21st century well under way and no signs of teleportation becoming possible, telepresence robots are our best chance at instantly being somewhere else. The pitch goes that you can jump onto (into?) your desktop computer and instantly move around, see and speak in a distant location.
Two such robots are already on the market, from Anybots and VGo, and now a third is set to join them. Robot research lab Willow Garage has spun off an independent company, Suitable Technologies, to develop its prototype telepresence robot, Texai, into a product.
It’s slated to go on sale next year and aims to solve a major problem with the two robots already on the market: while a person inhabiting an Anybot or VGo gets a good(ish) view of their prosthetic body’s surroundings and the people around it, those people don’t get a good view of the operator’s face.
Anybots’ robot displays only a still photo of the current user, while VGo’s machines have a very small, low-resolution screen about four feet off the ground. “Those are really spy bots,” Willow Garage CEO Steve Cousins told me when I visited the company yesterday, pointing out that the people you’re interacting with can’t see you very well. Texai’s big selling point over the competition will be that a user’s face is clearly visible to the people their robot double interacts with, enabling true two-way communication, said Cousins.
It certainly seems plausible that this would make interacting via a robot a smoother experience. In my experience using a VGo to work in Technology Review’s Massachusetts HQ from California, these machines struggle to meet the high expectations placed on something trying to fill the role of a person, as I noted in this review:
“My robot body could do some of the basic things I would do in person: move around the office to talk and listen, see and be seen. But it couldn’t do enough. In a group conversation, I would clumsily spin around attempting to take in the voices and body language outside my narrow range of vision. When I walked alongside people, I sometimes blundered into furniture, or neglected to turn when they did. Coworkers were tolerant at first, but they got frustrated with my mistakes.”
Perhaps if my distant colleagues had been able to see my facial expressions clearly, the experience would have been easier for all. But filling a big screen requires a high-resolution video stream, which means more bandwidth. Anybots’ founder told me they decided not to add operator video to their robots because Internet connections just aren’t reliable enough to flawlessly send high-quality video in two directions along with a robot’s commands.
Willow Garage has tested Texai extensively; one of its engineers has been commuting to the office using one for nearly a year now. But I’m guessing its broadband connection is of higher quality than in most homes and businesses. As I found out, connection woes are much more painful when they afflict your (robot) body, not just your Skype call.