Humans and technology

Seeing the mind of a robot in augmented reality

Roboticist Stefanie Tellex seeks new ways for robots and humans to work together.
Justin Saglio


When the movers came to the Brown University robotics lab of Stefanie Tellex last week, her students watched with interest. Look at how deftly they teamed up to pick up a couch using body language, eye contact, and just a few commands, like “1-2-3 … lift.”

Can robots and humans work together just as smoothly? That’s the goal of research in the Tellex lab, which is trying to give both robots and humans the tools to understand each other a little better and work together more fluidly in real environments.

Some robots, like the Roomba vacuum cleaner, really need only one command—clean or stop. “That is the right interface for a Roomba, but we are seeing robots move beyond a single function. We’d like to be able to tell them anything that is within the robot’s physical capabilities,” says Tellex. “I am working on a system where you talk to the robot like a person. You say ‘Put the crate there’ and the robot figures it out.”

That’s a hard problem, not least because there are a lot of ways to describe what you want done. (I only have to think of what happens when my wife and I—no expert movers—try to reposition our own couch.)

In work presented last year, Tellex’s team used a voice interface to see if a person and a grasping robot could work together to pick from a group of similar objects on a table—including bowls, markers, and spoons. A command like “Can I have that bowl?” could leave the robot in doubt. So they programmed the robot to ask some clarifying questions, like “This one?”
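The logic of that mini-dialogue can be illustrated with a short sketch. This is a hypothetical simplification written for this article, not the Brown lab's actual system: when a request matches several objects on the table, the robot asks "This one?" about each candidate until the user confirms one.

```python
# Hypothetical sketch of a clarifying-question loop for an ambiguous
# pick request. The function and object names are illustrative only.

def disambiguate(command, objects, confirm):
    """Return the object the user means, asking questions when needed.

    command: a noun from the user's request, e.g. "bowl"
    objects: labels of the objects visible on the table
    confirm: callback standing in for the user's yes/no answer
    """
    candidates = [obj for obj in objects if command in obj]
    if len(candidates) == 1:
        return candidates[0]  # unambiguous: no question needed
    for obj in candidates:
        # Ambiguous: hold a mini-dialogue, one candidate at a time.
        if confirm(f"This one? ({obj})"):
            return obj
    return None  # the user rejected every candidate

# Example: two bowls on the table; the user wants the blue one,
# so they answer "no" to the first question and "yes" to the second.
table = ["red bowl", "blue bowl", "marker", "spoon"]
answers = iter([False, True])
picked = disambiguate("bowl", table, lambda q: next(answers))
# picked == "blue bowl"
```

The design point matches the study's finding: the robot only pays the cost of a question when the request is genuinely ambiguous, which is why the dialogue sped things up rather than slowing them down.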

The Brown group invited 16 volunteers into the lab and found that with such a mini-dialogue, the robots got the job done about 25 percent faster and with better accuracy. People also thought the robot was a lot smarter than it actually is. “It was so good people thought the system could understand phrases like ‘to the left of,’ even when it didn’t. But it would ask a question, so it seemed like it understood,” says Tellex.

The next step of the project, led by PhD student David Whitney, is to combine verbal commands with the augmented-reality headset HoloLens.

During MIT Technology Review’s EmTech Next conference I tried the setup, which shows the user ghostly purple-colored versions of the robot depicting what actions it plans to take—so you can fix or fine-tune them if needed.

Here, the goal is to get inside the robot’s mind, says Whitney. “Good movers use a lot of body language, but robots don’t look like us,” he says. “So this is a way to visualize information about the robot—what is it thinking?”
