Thanks to iRobot, the idea of having a robot vacuum your floors no longer seems futuristic—the company has now sold more than 10 million of its Roombas around the world. But most housework is still far beyond the capabilities of any robot on the market. Engineers in iRobot’s R&D labs are hoping to change that by developing technology to enable robots to understand and interact with their environment. The company’s chief technology officer, Paolo Pirjanian, met with MIT Technology Review’s Tom Simonite last week to explain.
You say you want robots to take on more tasks around the home. What technologies must you develop to do that?
The missing link in robotics is low-cost manipulation. Manipulation is most successful in industry, where robots use very high-precision motors, rigid links between everything, and grippers that wouldn’t be safe in the home. Low-cost means tens of thousands of dollars [in that world]. We’re working on making manipulation much cheaper, for example by using plastic parts rather than steel, which can tolerate less precision (see “Cheaper Joints and Digits Bring the Robot Revolution Closer”).
Navigation is also a key area, because it allows robots to move around freely and intelligently. In the consumer space the state of the art is Northstar, used by our Braava robot. It projects infrared spots onto the ceiling that act as guidance markers. The next generation that we’re working on uses a camera combined with inertial sensors like those in a cell phone. It uses photos as landmarks for navigation, and that can extend to larger areas, even outdoors.
We’re also being helped by the availability of low-cost 3-D sensors. If you combine photos with a 3-D map of a room you get something like a CAD model or a video game environment. That can enable more autonomy for a robot because it can understand things like where a door or chair leg is; it could allow robots to understand the environment all the way down to the level of individual objects. That kind of map also provides a common language for the robot and human to talk through. I can say: “Stay out of this room,” “mop the kitchen on Tuesdays,” or even “find this book.”
Can you really make robots smart enough to do that?
A high-fidelity map will require a lot of storage, and it’s not feasible to build a standalone system that lets a robot understand hundreds of thousands of objects. But the cloud can hold all that knowledge, and a robot can use it to start learning things about its environment. For example: this object is a cup, so I have to grab it like this; it looks like glass, so I need to grip it tightly enough that it doesn’t slip but not so hard that it breaks.
What might robots built with this technology do in our homes?
They are most valuable for what you might call chores—things that we have to do over and over again. Consumer research tells us that laundry is the number one household task that people spend their time on, so a laundry robot would be at the top of the list. But that is a ways off. Before that we might look at moving from the Roomba to other surfaces and things we have to clean—windows, for example, or the bath and the shower. Through our government and defense business we have a lot of experience with things that work in rugged outdoor environments, so you can imagine us going into the backyard.
If you look at our enterprise teleconference robot Ava, which can navigate on its own, you can also imagine a robot that lets you stay in touch with, assist, or monitor people confined to their homes. If my grandmother lived in Florida and I hadn’t heard from her, I could ask the robot to find her and call me so I can help. Or perhaps a health-care service does that.