A Robot Takes Stock
Image-processing and machine-learning algorithms could help stores to manage inventory.
The short figure recently seen creeping around the Carnegie Mellon University campus store in a hooded sweatshirt isn’t a shoplifter, but a robot taking inventory. Andyvision, as it’s called, scans the shelves to generate a real-time interactive map of the store, which customers can browse via an in-store screen. At the same time, the robot performs a detailed inventory check, identifying each item on the shelves and alerting employees if stock is low or an item has been misplaced.
The prototype has been rolling around the floors of the store since mid-May. This Tuesday, Priya Narasimhan, a professor at CMU who heads the Intel Science and Technology Center in Embedded Computing, demonstrated the system to attendees at an Intel Research Labs event in San Francisco.
While making its rounds, the robot uses a combination of image-processing and machine-learning algorithms; a database of 3-D and 2-D images showing the store’s stock; and a basic map of the store’s layout—for example, where the T-shirts are stacked, and where the mugs live. The robot has proximity sensors so that it doesn’t run into anything.
None of the technologies it uses are new in themselves, says Narasimhan; it’s the combination of different types of algorithms running on a low-power platform that makes the system unique. The map generated by the robot is sent to a large touch-screen display in the store, and a real-time inventory list is sent to iPad-carrying staff.
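The inventory check described above can be sketched as a simple comparison between what the robot observes on a shelf and the store’s expected layout. This is a hypothetical illustration, not the CMU system’s actual code; all names, thresholds, and data structures here are invented.

```python
# Hypothetical sketch of the robot's inventory check: compare observed
# shelf contents against the store's expected layout, flagging low
# stock and misplaced items. All names and values are invented.

LOW_STOCK_THRESHOLD = 3  # assumed cutoff for a "stock is low" alert

# Expected layout: shelf -> set of products that belong there
EXPECTED = {
    "aisle-1": {"T-shirts"},
    "aisle-2": {"mugs"},
}

def check_shelf(shelf, observed_counts):
    """observed_counts maps product -> count seen on this shelf.
    Returns human-readable alerts for staff."""
    alerts = []
    expected_here = EXPECTED.get(shelf, set())
    for product, count in observed_counts.items():
        if product not in expected_here:
            alerts.append(f"{product} misplaced on {shelf}")
        elif count < LOW_STOCK_THRESHOLD:
            alerts.append(f"{product} low on {shelf}: {count} left")
    return alerts

print(check_shelf("aisle-2", {"mugs": 2, "salsa": 1}))
# → ['mugs low on aisle-2: 2 left', 'salsa misplaced on aisle-2']
```

In practice the alerts would be pushed to the staff’s iPads rather than printed, but the diff-against-expected-layout logic is the same idea.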
The robot uses a few different tricks to identify items. It looks for barcodes and text, and uses information about the shape, size, and color of an object to determine its identity. These are all pretty conventional computer-vision tasks, says Narasimhan. But the robot also identifies objects based on information about the structure of the store and which items belong next to each other. “If an unidentified bright orange box is near Clorox bleach, it will infer that the box is Tide detergent,” she says.
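That contextual fallback can be sketched in a few lines: try to identify an item from its visual features first, and if that fails, prefer a product that normally sits next to an already-identified neighbor. This is a minimal illustration assuming invented catalogs and product names, not the actual Andyvision implementation.

```python
# Hypothetical sketch of context-based identification, as described in
# the article. Visual features are tried first; shelf context breaks
# ties. All catalogs and product names here are invented.

# Visual "fingerprints": (dominant color, shape) -> product
VISUAL_CATALOG = {
    ("white", "jug"): "Clorox bleach",
    ("orange", "box"): "Tide detergent",
    ("red", "jar"): "salsa",
}

# Planogram-style context: which products normally share a shelf
SHELF_NEIGHBORS = {
    "Clorox bleach": ["Tide detergent"],
    "Tide detergent": ["Clorox bleach"],
}

def identify(color, shape, identified_neighbors):
    """Guess a product from visual features, falling back to shelf
    context when the features alone are ambiguous."""
    match = VISUAL_CATALOG.get((color, shape))
    if match:
        return match
    # Ambiguous: look for a catalog product that matches the observed
    # color and belongs next to an already-identified neighbor.
    for known in identified_neighbors:
        for candidate in SHELF_NEIGHBORS.get(known, []):
            for (c, _s), product in VISUAL_CATALOG.items():
                if product == candidate and c == color:
                    return product
    return "unknown"

# An unidentified bright orange carton sitting next to Clorox bleach
print(identify("orange", "carton", ["Clorox bleach"]))  # → Tide detergent
```

A production system would weight these cues probabilistically rather than falling through rules in order, but the inference Narasimhan describes is essentially this kind of context prior.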
Narasimhan’s group developed the system after interviewing retailers about their needs. Stores lose money when they run low on a popular item, when a customer puts down a jar of salsa in the detergent aisle where it won’t be found by someone who wants to buy it, or when customers ask where something is and clerks don’t know. So far, the robotic inventory system seems to have improved the staff’s knowledge of where everything is. By the fall, Narasimhan expects to learn whether it has also saved the store money.
Narasimhan thinks computer-vision inventory systems will be easier to implement than wireless RFID tags, which don’t work well in stores with metal shelves and need to be affixed to every single item, often by hand. A computer-vision system doesn’t need to be carried on a robot; the same job could be done by cameras mounted in each aisle of a store.
Ruzena Bajcsy, a professor at the University of California, Berkeley, who researches computer vision and robotics, says others are working on similar automated inventory systems. The biggest challenge for such a system, she says, is whether it “can deal with different illuminations and adapt to different environments.”
After its initial test at the campus store, Narasimhan says, the Carnegie Mellon system will be put to this test in several local stores sometime next year.