
Seeing the mind of a robot in augmented reality

Roboticist Stefanie Tellex seeks new ways for robots and humans to work together.

When the movers came to the Brown University robotics lab of Stefanie Tellex last week, her students watched with interest: look how deftly they teamed up to pick up a couch using body language, eye contact, and just a few commands, like “1-2-3 … lift.”

Can robots and humans work together just as smoothly? That’s the goal of research in the Tellex lab, which is trying to give both robots and humans the tools to understand each other a little better and work together more fluidly in real environments.

Some robots, like the Roomba vacuum cleaner, really need only one command—clean or stop. “That is the right interface for a Roomba, but we are seeing robots move beyond a single function. We’d like to be able to tell them anything that is within the robot’s physical capabilities,” says Tellex. “I am working on a system where you talk to the robot like a person. You say ‘Put the crate there’ and the robot figures it out.”

That’s a hard problem, not least because there are a lot of ways to describe what you want done. (I only have to think of what happens when my wife and I—no expert movers—try to reposition our own couch.)

In work presented last year, Tellex’s team used a voice interface to see if a person and a grasping robot could work together to pick from a group of similar objects on a table—including bowls, markers, and spoons. A command like “Can I have that bowl?” could leave the robot in doubt. So they programmed the robot to ask some clarifying questions, like “This one?”
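To make the idea concrete, here is a minimal sketch of that kind of clarification loop: when a spoken request matches more than one object on the table, the robot asks a short confirming question for each candidate instead of guessing. The names here (TableObject, candidates_for, resolve, the scripted answers) are illustrative assumptions, not the Brown team’s actual code.

```python
# Minimal sketch (not the Brown lab's system) of a clarification dialogue:
# when a request could refer to several objects, ask "This one?" rather than guess.
from dataclasses import dataclass

@dataclass
class TableObject:
    name: str          # e.g. "bowl", "marker", "spoon"
    position: tuple    # hypothetical (x, y) location on the table

def candidates_for(request: str, objects: list[TableObject]) -> list[TableObject]:
    """Return every object whose name appears in the spoken request."""
    return [obj for obj in objects if obj.name in request.lower()]

def resolve(request, objects, confirm):
    """Pick an object, asking a confirming question when the request is ambiguous."""
    matches = candidates_for(request, objects)
    if len(matches) == 1:
        return matches[0]                      # unambiguous: just grasp it
    for obj in matches:                        # ambiguous: confirm one at a time
        if confirm(f"This one? (the {obj.name} at {obj.position})"):
            return obj
    return None                                # nothing confirmed: do not grasp

if __name__ == "__main__":
    table = [TableObject("bowl", (0.2, 0.1)), TableObject("bowl", (0.5, 0.3)),
             TableObject("spoon", (0.4, 0.2))]
    # Stand-in for the speech interface: the user says no to the first bowl, yes to the second.
    answers = iter([False, True])
    choice = resolve("Can I have that bowl?", table, lambda question: next(answers))
    print("Grasping:", choice)
```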

The Brown group invited 16 volunteers into the lab and found that with such a mini-dialogue, the robots got the job done about 25 percent faster and with better accuracy. People also thought the robot was a lot smarter than it actually is. “It was so good people thought the system could understand phrases like ‘to the left of,’ even when it didn’t. But it would ask a question, so it seemed like it understood,” says Tellex.

The next step of the project, led by PhD student David Whitney, is to combine verbal commands with the augmented-reality headset HoloLens.

During MIT Technology Review’s EmTech Next conference, I tried the setup, which shows the user ghostly, purple-colored versions of the robot depicting the actions it plans to take, so you can fix or fine-tune them if needed.

Here, the goal is to get inside the robot’s mind, says Whitney. “Good movers use a lot of body language, but robots don’t look like us,” he says. “So this is a way to visualize information about the robot—what is it thinking?”
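As a rough illustration of that idea, the sketch below shows one way a robot’s plan could be surfaced for approval before it moves: each planned step is rendered as a “ghost” preview and the user accepts or rejects it. PlannedAction, render_ghost, and preview_and_confirm are hypothetical names chosen for this example; the actual Brown/HoloLens pipeline is not shown here.

```python
# Hedged sketch of the "ghost preview" concept: publish the planned steps,
# render each as a translucent preview, and execute only what the user approves.
from dataclasses import dataclass

@dataclass
class PlannedAction:
    verb: str                 # e.g. "pick" or "place"
    target: str               # object the action applies to
    pose: tuple               # hypothetical (x, y, z) the gripper will move to

def render_ghost(action: PlannedAction) -> str:
    """Stand-in for drawing a translucent purple robot at the planned pose."""
    return f"[ghost] {action.verb} {action.target} at {action.pose}"

def preview_and_confirm(plan, approve):
    """Show each planned step as a ghost; keep only the steps the user approves."""
    approved = []
    for step in plan:
        print(render_ghost(step))            # what the AR headset would display
        if approve(step):
            approved.append(step)            # user accepted this step as-is
        else:
            print(f"  user rejected: {step.verb} {step.target}")
    return approved

if __name__ == "__main__":
    plan = [PlannedAction("pick", "crate", (0.4, 0.1, 0.2)),
            PlannedAction("place", "crate", (0.9, 0.5, 0.2))]
    # Stand-in for the user's voice or gesture confirmation: approve every step.
    to_execute = preview_and_confirm(plan, lambda step: True)
    print("Executing", len(to_execute), "approved steps")
```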

