
A Robotic Helping Hand

Georgia Tech’s prototype robot responds to instructions given with an ordinary laser pointer.

A new robot from Georgia Tech understands commands given using a simple tool: an off-the-shelf laser pointer. In a demonstration video, a person reclining in a chair flicks on a green laser and trains it on a cordless phone on the floor a few feet away. A thin, five-foot-seven-inch robot called Elevated Engagement, or El-E for short, fixes on the phone, wheels over, grips it, and brings it back to the user in a robotic version of fetch.

Ready to fetch: Georgia Tech’s new home-assistance robot stands at about human height, with two camera eyes that can home in on the spot projected by a laser pointer. Its roughly two-foot-long sensor-equipped arm can extend down to the ground or up to tables to pick up lightweight items.

While companion robots have been making their way into our homes for a while, from Furbys and Tamagotchi digital pets to the therapeutic Paro baby seal, El-E is a step closer to an automated robot that can, say, clean up an entire house or do the dishes. Many obstacles remain, however, in areas like navigation, grasping, and communication.

El-E, built by the Healthcare Robotics Lab at Georgia Tech, is the first robot to be guided by laser pointing, a method more precise than the gesture- and speech-based interfaces robotics researchers have tried in the past. According to the project’s principal investigator, Charles Kemp, the approach was partially inspired by quadriplegics who communicate with helper monkeys via lasers. “It’s a point-and-click interface,” says Kemp. Users point the laser at what they want and then at where they want it to go: to themselves, to another person, or onto another surface.

What’s more, El-E is also the first robot to autonomously retrieve objects from surfaces of varying heights in an unmapped environment. While robots have reached for objects on tables and shelves before, they have always needed to know the layout of a static setting. El-E, by contrast, can work in a new room with no map and interact with new or moved tables by using its own built-in laser to detect surfaces.


Kemp’s lab is developing the robot in collaboration with Julie Jacko, director of the Institute for Health Informatics and a professor at the University of Minnesota, and Jonathan Glass, who directs a center at Emory that researches amyotrophic lateral sclerosis, or ALS (Lou Gehrig’s disease). The team used several off-the-shelf components to build most of the bot and added the novel laser-pointer interface at its “head.” The first part of the interface is a camera coupled to a hyperbolic mirror, which gives it an omnidirectional view so that it can see any object illuminated by the pointer. The robot then swivels its two “eyes,” high-resolution cameras, until they are facing the spot made by the laser pointer, and triangulates information from the two cameras to estimate the object’s position in three-dimensional space. Once El-E locates an object, it declares its success by saying the word “ding” and wheeling over to it.
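The triangulation step is standard stereo geometry. Here is a minimal sketch, not the lab’s software: given each camera’s optical center and a unit ray toward the detected laser spot, it estimates the spot’s 3-D position as the midpoint of the rays’ closest approach. All names and numbers are hypothetical.

```python
import numpy as np

def triangulate_spot(c1, d1, c2, d2):
    """Estimate the 3-D point nearest to both camera sight rays.

    c1, c2 -- camera optical centers (3-vectors)
    d1, d2 -- unit direction rays toward the laser spot (3-vectors)
    """
    # Find ray parameters (t1, t2) minimizing the distance between
    # points c1 + t1*d1 and c2 + t2*d2 (closest approach of two lines).
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2  # closest point on each ray
    return (p1 + p2) / 2                 # midpoint = triangulated spot

# Hypothetical example: two "eyes" 20 cm apart, both sighting a spot
# about two meters ahead and below camera level.
c1, c2 = np.zeros(3), np.array([0.2, 0.0, 0.0])
d1 = np.array([0.05, -0.3, 1.0]); d1 /= np.linalg.norm(d1)
d2 = np.array([-0.05, -0.3, 1.0]); d2 /= np.linalg.norm(d2)
print(triangulate_spot(c1, d1, c2, d2))  # ~ [0.1, -0.6, 2.0]
```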

“The use of a laser pointer on El-E opens up a brand new way for people to interact with robots,” says Andrew Ng, a professor of computer science at Stanford University, who has followed Kemp’s work closely. “I think this is a way of interacting with robots that will prove useful on many more applications.”

To begin picking up an object, El-E uses its laser range finder to determine whether the object is on the floor or on an elevated surface. If it’s on the floor, El-E moves toward the object and lowers its laser range finder to scan across the floor. If the object is elevated, El-E uses the range finder to identify the edge of the table or desk where the object is resting. Once it docks with the table, El-E scans the surface and uses a camera on its hand to look down and visually segment the object, assuming the table has a uniform visual texture. So far, El-E can correctly pick out an object among others as long as they’re spaced out; the team hasn’t yet tested objects that are clustered or overlapping.
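As a rough illustration of that decision flow (all names and the height cutoff here are hypothetical, not the Healthcare Robotics Lab’s actual software), the retrieval routine branches on the height of the laser-designated target:

```python
from dataclasses import dataclass

FLOOR_THRESHOLD_M = 0.10  # assumed cutoff: spots below ~10 cm count as "floor"

@dataclass
class Target:
    x: float       # position in the robot's frame, meters
    y: float
    height: float  # estimated height of the laser spot above the floor

def plan_pickup(target: Target) -> list:
    """Return the ordered steps for retrieving the designated object."""
    if target.height < FLOOR_THRESHOLD_M:
        # Floor case: drive over, then sweep the range finder across
        # the floor to localize the object precisely.
        steps = ["drive_to_target",
                 "lower_range_finder",
                 "scan_floor_for_object"]
    else:
        # Elevated case: find the table edge with the range finder,
        # dock against it, then look down with the hand camera and
        # segment the object against the table's uniform texture.
        steps = ["find_surface_edge",
                 "dock_with_surface",
                 "scan_surface",
                 "segment_with_hand_camera"]
    return steps + ["grasp_object"]

print(plan_pickup(Target(x=1.5, y=0.2, height=0.75)))
```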

Fast learner: El-E is able to retrieve objects that a human indicates with a handheld laser. Here, El-E hands its creator Charles Kemp a cloth towel, traditionally a difficult object for robots to pick up because of its malleability. Using visual and tactile sensors on its arm and hand, the robot can grasp items that it has not encountered before.

The hand gripper descends and orients itself in the best way to grip the object, while sensors in the fingertips prevent the hand from either crushing the object or letting it slip out of its grasp. If El-E can’t grab the object on the first try, its feedback system kicks in: the robot reevaluates the object’s position with the laser range finder and tries other orientations with the gripper. “Having the robot know that it’s failed and trying different strategies improves the performance significantly,” says Kemp. While robots can do complex tasks in rigidly controlled environments like a car factory, unpredictable situations and objects are, right now, a robot’s bane. In the past, researchers have had robots memorize the shapes of objects; currently, many are teaching robots to identify new objects in other ways. Ng’s Stair robot, for example, relies on machine-learning techniques: it is shown how to pick up types of objects until it eventually devises its own strategies. (See “Your Robotic Personal Assistant.”)
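A sketch of that failure-recovery loop might look like the following; the helper callables and the attempt limit are assumptions for illustration, and the real controller also draws on the fingertip sensors.

```python
def fetch_with_retries(estimate_position, attempt_grasp, max_attempts=3):
    """Retry grasps, re-sensing the object's position after each failure.

    estimate_position -- callable that re-scans with the laser range
                         finder and returns an updated object pose
    attempt_grasp     -- callable(pose, attempt) -> True on success,
                         trying a different gripper orientation each time
    """
    for attempt in range(max_attempts):
        pose = estimate_position()        # reevaluate before every try
        if attempt_grasp(pose, attempt):  # new orientation this attempt
            return True
    return False                          # report failure after the limit

# Toy usage: a stand-in grasp that only works on the second orientation.
grasped = fetch_with_retries(lambda: (0.5, 0.1, 0.0),
                             lambda pose, attempt: attempt == 1)
print("grasped" if grasped else "failed")
```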

So far, El-E can pick up cups, bottles, phones, and dish towels (tricky for robots because of their shapelessness). When El-E has grasped an object, it utters a whimsical “Bob’s your uncle” and follows the laser back to the user or to a designated surface. Using standard face-detection software to find the person, El-E proclaims “Life-form detected” as it offers the object.

El-E is a “very compelling demonstration of what is possible today,” says Josh Smith, a senior research scientist at Intel Research Seattle, who works on robotic gripping. He adds that El-E is just the beginning: researchers still have to figure out how robots can grasp heavier or more complexly shaped objects, as well as objects in cluttered environments or stored among many identical items (forks in a drawer, for example). The continued development of robot fetching is essential, Smith says, because it is a “building block from which you can build many other personal robot applications” and potentially even military ones.

“I think that someday in the future, home robots will be as common in our houses as cars are in our garage today, and as indispensable,” says Ng, who is in the process of teaching his Stair robot to microwave a frozen burrito. While the hardware and machinery for making fully functional robot assistants exist, the software needs to be improved before people have robots tidying up their houses, Ng says.

Aside from making everyday life easier, robot helpers could let the elderly and those with motor impairments live more independently. For example, El-E could retrieve a fallen prescription bottle or phone, something that might be impossible for a person with severe disabilities. The next step for El-E is to work with people with ALS, a devastating neurodegenerative disorder that severely impairs mobility. “There’s an enormous opportunity to improve people’s quality of life and make an impact on health care,” says Kemp. Ongoing studies with the Emory ALS center this summer will test El-E with ALS patients to see whether it can successfully meet their needs. Eventually, the team hopes to have El-E flick light switches and open doors.
