Housekeeping robots are still the stuff of science fiction, but not for want of hardware: there’s almost no task too precise or delicate for a robot that knows in advance what it’s supposed to do. The problem lies in teaching robots to deal with the unknown. That’s precisely what Andrew Ng, an assistant professor of computer science, set out to do when he founded the Stanford Artificial Intelligence Robot (STAIR) project a few years ago.
Previous robots have had some ability to improvise: many could locate familiar objects in unfamiliar environments, for example. But Ng has gone a step further: STAIR can deduce how to pick up an object it’s never seen before. Using traditional machine-learning techniques, Ng trained STAIR on a database of pictures of objects such as wine glasses, coffee mugs, and pencils, as seen from different perspectives. Each object was correlated with information about the best place to grasp it: the stem of the wine glass, the middle of the pencil. After its training, STAIR could generalize those associations to adapt to new situations: it lifted, among other things, a lunch box by its handle and a piece of intricate lab equipment by its metal stem. It was even able to remove dishes from a dishwasher and place them on a drying rack.
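The general idea described above, learning from labeled examples which part of an object makes a good grasp point, then scoring patches of a novel object, can be sketched with a simple classifier. Everything below is invented for illustration: the patch features, the training data, and the object names are assumptions, not details from the STAIR project.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=200, lr=0.5):
    """Logistic regression over patch features.

    Each example is (features, label), where label is 1 if the patch
    is a good grasp point and 0 otherwise.
    """
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def score(w, b, x):
    """Probability that patch x is a good grasp point."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy features per patch: (edge density, local width, distance from rim).
# A wine glass's stem is edge-dense, narrow, and far from the rim.
training = [
    ((0.9, 0.1, 0.8), 1),  # stem-like patch: good grasp
    ((0.8, 0.2, 0.7), 1),
    ((0.2, 0.9, 0.1), 0),  # bowl/rim patch: bad grasp
    ((0.3, 0.8, 0.2), 0),
]
w, b = train(training)

# For a never-before-seen object, grasp the highest-scoring patch.
novel_patches = {"handle": (0.85, 0.15, 0.75), "body": (0.25, 0.85, 0.15)}
best = max(novel_patches, key=lambda k: score(w, b, novel_patches[k]))
```

On this toy data the classifier picks the handle-like patch of the novel object, mirroring how STAIR generalized from wine-glass stems to lunch-box handles.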
The STAIR team has made other advances; its innovative system for robotic depth perception even spawned a side project, software that converts static 2-D photographs into 3-D images. But despite this progress, Ng knows that building a general-purpose household robot is beyond the means of any one lab. So he’s developing an open-source robotics operating system that will let researchers integrate a robot’s sensor systems and functional components in new ways, without having to write code from scratch. –Larry Hardesty
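The integration idea behind such a robotics operating system can be sketched as a message bus: components publish and subscribe to named topics, so a planner can reuse an existing sensor driver without touching its code. This is a minimal, hypothetical sketch; the class, topic, and message names are invented, not part of any real system.

```python
from collections import defaultdict

class Bus:
    """A toy publish/subscribe bus connecting independent components."""

    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self.subs[topic]:
            cb(msg)

bus = Bus()
log = []

# A "depth sensor" component publishes readings; a "grasp planner"
# consumes them. Neither knows the other's internals, so either can
# be swapped out without rewriting the rest of the robot's software.
bus.subscribe("depth", lambda msg: log.append(("planner saw", msg)))
bus.publish("depth", {"object": "mug", "distance_m": 0.42})
```

The decoupling shown here is the point: a lab contributes one component and inherits everyone else's, rather than building a whole robot stack from scratch.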