
Machines for Living

Holly Yanco, SM ’94, PhD ’00, develops robots to help people in the home and in the field.

In Holly Yanco’s office, robots of every sort (windup, stuffed, and model, ranging from C-3PO to Bender) vie for space with the hundreds of brightly colored Pez dispensers that line the three crowded shelves, the counters, and the top of the whiteboard.

Nestled among the toys is a small, worn wheel, slightly larger than a doughnut. Yanco, who directs the robotics lab at the University of Massachusetts, Lowell, holds it up. “This is the end of my thesis,” she says with a chuckle. As a doctoral candidate at MIT in the late 1990s, Yanco developed a robotic wheelchair she called “Wheelesley.” (She’d started working on it as a visiting lecturer at Wellesley, her alma mater.) Wheelesley’s stereo vision system and distance sensors allow it to automatically identify and navigate around obstacles such as poles, as well as steep drop-offs like curbs or stairs. “I was testing the robot over by Building 34, and the caster just cracked,” she recalls. “This was about a week before my defense.” A fellow grad student helped her push the wheelchair back across campus, and she found a vendor to overnight a replacement part. She keeps the cracked wheel on the shelf to remind her students that technical difficulties encountered by doctoral candidates, like the obstacles faced by people with mobility issues, are part of life. But it’s also a reminder that such problems can be overcome.
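In sensing terms, the chair’s core job reduces to reading depth ahead of the wheels and sorting what it sees into safe and unsafe. Here is a minimal sketch of that classification logic in Python; the sensor readings, thresholds, and function names are hypothetical, not Yanco’s actual code.

    # A minimal sketch of the kind of logic a stereo-vision wheelchair might
    # use to flag obstacles and drop-offs; thresholds are invented.

    OBSTACLE_MAX_RANGE_M = 1.0   # anything closer than this blocks the path
    DROPOFF_MIN_DEPTH_M = 0.10   # a step down bigger than this (a curb) is unsafe

    def classify_path(depth_ahead_m, floor_drop_m):
        """Classify what the distance sensors see directly ahead.

        depth_ahead_m: range to the nearest object in the travel direction
        floor_drop_m: how far the ground plane falls away ahead of the wheels
        """
        if floor_drop_m > DROPOFF_MIN_DEPTH_M:
            return "dropoff"      # curb or stairs: stop, never drive over
        if depth_ahead_m < OBSTACLE_MAX_RANGE_M:
            return "obstacle"     # pole, wall, person: steer around
        return "clear"

    print(classify_path(depth_ahead_m=0.6, floor_drop_m=0.0))  # -> "obstacle"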

Yanco’s no-nonsense approach has served her well in her effort to develop both robotic aids for people with disabilities and remote-controlled robots that can help with tasks such as searching for disaster survivors. Both kinds of robots, Yanco says, require intuitive user controls, sophisticated sensors, and intelligent mapping capabilities. And to increase the chances that such robots will be truly useful outside the lab, she thinks carefully about finding the right balance between human control and robot autonomy.

“When you’re driving down the street, there are a lot of things you do with small corrections that you don’t even think about,” she explains. “You’re thinking about the higher level: do I need to turn at the next intersection? But when you have someone in a wheelchair who has trouble with fine motor control, all of the really tiny controls are just as much effort as the large controls. If you can … just have them think about the larger stuff, it can make it easier for them to drive the chair.” For example, Yanco says, a wheelchair user who wants to go to a room on the third floor of a hotel should need only to say the room number, and the robot should be able to manage each smaller step: going from the lobby to the elevator, pushing the elevator button, entering the elevator, pushing the floor button, finding the room, opening the door, and entering.
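The idea is hierarchical task decomposition: one spoken goal expands into a sequence of primitive steps. A toy sketch of that expansion, using the article’s hotel example; the planner and step names are illustrative, not Yanco’s software.

    # A sketch of "say the room number, let the chair handle the rest":
    # a high-level goal expands into the primitive steps the article lists.

    def plan_route_to_room(room_number):
        floor = room_number // 100          # e.g., room 312 is on floor 3
        return [
            "drive from lobby to elevator",
            "press elevator call button",
            "enter elevator",
            f"press button for floor {floor}",
            f"find room {room_number}",
            "open door",
            "enter room",
        ]

    for step in plan_route_to_room(312):
        print(step)   # the user issued one command; the chair executes each step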

Yanco soldered her first robot at a computer camp the summer after eighth grade. As kids, she and her brother, now a math teacher, played whatever video games they could get their hands on; when Pac-Man got too easy, they played it with their feet. At Wellesley, she double-majored in philosophy and computer science and took advantage of the opportunity to cross-register at MIT, where she took classes in electrical engineering and computer vision; she ultimately completed her undergraduate thesis under an MIT advisor.

As a master’s candidate at MIT, Yanco started thinking about wheelchairs after meeting David Miller, a visiting research scientist who organized the first robotic wheelchair exhibit at the 1995 International Joint Conference on Artificial Intelligence. That’s where Yanco introduced an early version of Wheelesley. Although she intended to work on robotic vision for her PhD, she gravitated back to wheelchairs when her thesis advisor, robotics guru Rodney Brooks, observed that she seemed more excited when she talked about Wheelesley. After earning her PhD, she taught at Boston College for two semesters before UMass Lowell recruited her to start its robotics lab. Wheelesley, of course, went along for the ride.

Before Wheelesley, most robotic wheelchairs were usable only indoors, and many relied on known maps. But Yanco wanted to create a chair that could understand where a user wanted to go and avoid outdoor obstacles to get there. She’s developing a new version of Wheelesley (known as Wheeley) that will be able to learn the most direct route to any location by updating an internal map, just as humans do. Two cameras serve as Wheeley’s eyes, collecting the information it uses to build its map. (Positioned about four inches apart, the cameras work in stereo to give the robot depth perception.) Software designed by one of Yanco’s collaborators lets it interpret the optical characters that appear in signs, including numbers, letters, and punctuation; the ability to interpret arrows and other symbols is in the works. Soon Wheeley will be able to recognize door handles and elevator buttons as well.
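Stereo depth perception follows from simple triangulation: the farther away a point is, the less it shifts (its disparity) between the two camera images. Below is a sketch of the standard pinhole-stereo formula, using the article’s roughly four-inch baseline; the focal length and disparity values are made up.

    # A sketch of stereo depth estimation, the principle behind Wheeley's
    # paired cameras: horizontal shift between the two images gives distance.

    BASELINE_M = 0.10           # ~4 inches between the two cameras
    FOCAL_LENGTH_PX = 700.0     # hypothetical focal length in pixels

    def depth_from_disparity(disparity_px):
        """Distance to a point seen in both cameras (pinhole stereo model)."""
        if disparity_px <= 0:
            return float("inf")  # zero disparity means effectively infinite range
        return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

    print(depth_from_disparity(35.0))  # a 35-pixel shift -> about 2 meters away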

In addition to enhancing Wheeley’s intelligent mapping, Yanco’s group is working on improving robotic arm attachments that can recognize and grasp objects. Though such devices already exist, they are not only expensive (the Assistive Robotic Manipulator, or ARM, made by the Dutch company Exact Dynamics costs $15,000 and up) but also hard to manage. Those who want to use the ARM must undergo several training sessions and study a thick manual detailing a complex series of joystick movements. Yanco envisions a robot that can figure out how to retrieve any object the user indicates.

At Lowell, Yanco and doctoral candidate Kate Tsui have developed an intuitive system for controlling the ARM. A user sitting in a wheelchair outfitted with the attachment and a touch screen can tap the image of an object on the screen to tell the robotic arm to retrieve it. Yanco has also added two color cameras, one on the ARM’s shoulder and a smaller one between the two fingers of its gripper. Images from the cameras, displayed on a separate color touch screen, show what’s in front of the attachment. In the lab, the ARM is mounted on a tripod about waist high in front of three wooden shelves holding nine objects, such as a coffee mug, a bottle of Advil, and several cups of different colors. When Tsui taps the image of a blue plastic cup on-screen, the 32-inch arm slowly unfurls, rotating at its shoulder and wrist, and extends so that its gripper hovers near the correct cup, which it identifies by color. Software developed by Yanco’s colleagues at the University of Central Florida enables the ARM to grasp and retrieve the object. Yanco’s group has started testing the touch screen and arm at the Crotched Mountain Rehabilitation Center in New Hampshire, where wheelchair users report that the system is easy to use.
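Conceptually, the tap selects a target and the software matches it against known objects by color, as in the blue-cup demo. A toy sketch of that matching step; the object list and RGB values are invented for illustration.

    # A sketch of "tap the object, identify it by color."

    KNOWN_OBJECTS = {
        "blue cup":   (30, 60, 200),
        "red cup":    (200, 40, 40),
        "coffee mug": (120, 90, 60),
    }

    def identify_by_color(tapped_rgb):
        """Return the known object whose color is nearest the tapped pixel."""
        def dist(color):
            return sum((a - b) ** 2 for a, b in zip(color, tapped_rgb))
        return min(KNOWN_OBJECTS, key=lambda name: dist(KNOWN_OBJECTS[name]))

    print(identify_by_color((25, 70, 190)))   # -> "blue cup"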

Yanco’s group is also developing a low-cost arm attachment that can open doors. A prototype of the new arm’s gripper, which can open, close, and turn in either direction, is powered by only one motor, to keep the arm within the $1,000-to-$2,000 range. The prototype opens a door when the user designates it with a joystick-controlled laser pointer. But ultimately, Yanco hopes to make it smart enough to reach for doorknobs automatically.
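Finding a laser dot in a camera frame can be as simple as locating the brightest pixel above a threshold. A minimal sketch of that idea follows; the toy frame and threshold are illustrative, and a real system would also check the spot’s color and size.

    # A sketch of laser-spot designation: find the brightest pixel in a frame
    # and treat it as the user's target. The frame is a toy grayscale grid.

    frame = [
        [12, 14, 11, 13],
        [13, 15, 250, 14],   # the 250 is the laser dot
        [11, 12, 13, 12],
    ]

    def find_laser_spot(gray_frame, threshold=200):
        """Return (row, col) of the brightest pixel above threshold, else None."""
        best, best_val = None, threshold
        for r, row in enumerate(gray_frame):
            for c, val in enumerate(row):
                if val > best_val:
                    best, best_val = (r, c), val
        return best

    print(find_laser_spot(frame))   # -> (1, 2): aim the gripper here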

Yanco’s work on intuitive human controls and environmental mapping is also applicable to robots designed to look for survivors of disasters in dangerous terrain. Her lab has created a sophisticated yet easy-to-use system to manage the rugged ATRV-Jr (known as Junior) developed by iRobot, a company that Brooks cofounded and that’s behind the robotic vacuum cleaner Roomba. About knee-high and equipped with four bulky wheels and two cameras, Junior is controlled with a joystick and a keyboard. Users who don’t understand the controls may damage the robot or the cameras, or fail to use them effectively, a danger that’s amplified when a collision could cause rubble to collapse. So Junior needed an intuitive navigation interface.

The urban search and rescue (USAR) interface that Yanco’s group developed looks like a video-game display, with five boxes on the operator’s computer screen. Two show the views from the robot’s front and rear cameras, which the user can tilt and pan with a joystick. A third box displays a feed from a thermal camera, which allows the user to “see” in dark or dust-filled environments. The robot also uses a laser measurement sensor (accurate up to 80 meters) and a sonar ring (accurate up to 2 meters) to build a map of the immediate area and any obstacles it contains. The map shows up in a gray box, with a red blob marking Junior’s path, so the user can drive the robot even without visual feedback from the front and rear cameras. Another, zoomed-out version of this map in the corner of the screen shows the big picture. In tests at a simulation arena in Gaithersburg, Maryland, Yanco found that when the robot’s operators used the new interface, the robot bumped into things far less often than it did when they used the existing interface.
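The gray map box is, in essence, an occupancy grid: each range reading from the laser or sonar marks the cell where the beam hit something. Here is a sketch of a single-reading update; the grid size and resolution are arbitrary.

    # A sketch of how range readings become a map: an occupancy grid
    # marks cells where a sensor return came back.

    import math

    GRID = [[0] * 40 for _ in range(40)]    # 0 = unknown/free, 1 = obstacle
    CELL_SIZE_M = 0.5                       # each cell covers half a meter

    def mark_reading(robot_x, robot_y, bearing_rad, range_m):
        """Mark the grid cell where a single range reading hit something."""
        hit_x = robot_x + range_m * math.cos(bearing_rad)
        hit_y = robot_y + range_m * math.sin(bearing_rad)
        col, row = int(hit_x / CELL_SIZE_M), int(hit_y / CELL_SIZE_M)
        if 0 <= row < len(GRID) and 0 <= col < len(GRID[0]):
            GRID[row][col] = 1

    mark_reading(10.0, 10.0, bearing_rad=0.0, range_m=3.0)  # obstacle 3 m ahead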

Yanco also works with a device that resembles those deployed at the World Trade Center after 9/11, the first disaster site where search-and-rescue robots were used. This smaller, resilient searcher (called VGTV, for “variable-geometry tracked vehicle”) is a folding bot with three wheels on each side. The wheels, which are connected by a tread, can form a triangle or flatten in order to climb over most kinds of objects and get through tight spaces. A rotating camera allows the bot to see forward and backward, an advantage in tight corridors where it can’t turn around. A long tether provides power and transmits video data collected by the robot. Yanco’s group turned the interface, which originally displayed lists of numbers indicating the tilt of the camera and the shape of the robot, into a display that shows the robot’s shape and path graphically. Yanco’s lab is also coupling a tabletop touch screen to both the VGTVs and Junior so that users can steer the bots by moving their fingers along the video display.
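The shift from number lists to graphics can be illustrated with something as simple as rendering a tilt reading as a gauge. A toy sketch; the rendering is invented, not the lab’s actual display.

    # A sketch of the interface change described above: a raw camera-tilt
    # number becomes a one-line graphic that can be read at a glance.

    def tilt_gauge(tilt_deg, width=21):
        """Render a camera-tilt angle (-90..90 degrees) as a one-line gauge."""
        pos = round((tilt_deg + 90) / 180 * (width - 1))
        bar = "-" * pos + "|" + "-" * (width - 1 - pos)
        return "[" + bar + f"] {tilt_deg:+.0f} deg"

    print(tilt_gauge(-30))   # -> [-------|-------------] -30 deg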

“I think we’re at this place where we’re really going to see these robotics grow even faster than we already have,” says Yanco, who has a good vantage on the field’s future as chair of the annual New England “Botball” tournament for middle- and high-school students. She also cofounded the Artbotics project, which encourages high-school and college students to pursue computing through the design and construction of interactive art projects for a local museum.

When she started working with robots in the 1990s, Yanco says, researchers didn’t think about actually deploying their inventions in the real world; they just hoped their robots would work the next day. But advances in computer technology and cameras have changed all that. “Now we have robots out in the field. There are robots in Iraq, Afghanistan, in houses vacuuming,” she says. “I think it’s been a really exciting time for robotics.”
