
If robots are to become a common sight in homes and public spaces, they will need to respond more intuitively to human actions and behave in ways that are easier for humans to understand. This week, at the 2009 IEEE Human-Robot Interaction (HRI) conference, in La Jolla, CA, researchers will present recent progress toward these twin goals.

Several research teams are exploring ways for robots to both recognize and mimic the subtle, nonverbal side of human communication: eye movements, physical contact, and gestures. Mastering these social subtleties could help machines convey meanings to supplement speech and better respond to human needs and commands. This could be crucial if robots are ever to fulfill their potential as personal assistants, teaching aides, and health-care helpers, say those involved.

Scientists from Carnegie Mellon University will present details of experiments involving a robot that uses eye movement to help guide the flow of a conversation with more than one person. Developed in collaboration with researchers from Japan’s Osaka University and from ATR Intelligent Robotics and Communication Laboratory, this trick could prove particularly useful for robots that act as receptionists in buildings or malls, or as guides for museums or parks, the scientists say.

“The goal is [to] use human communication mechanisms in robots so that humans interpret behaviors correctly and respond to them in an appropriate way,” says Bilge Mutlu, a member of the team from Carnegie Mellon. After all, Mutlu notes, “we don’t want to create an antisocial, shy robot.”

The robot used for the experiments, called Robovie, was developed previously at ATR. To give Robovie the ability to combine gaze with speech, the researchers first developed a model of the way that people use their eyes during a conversation or a discussion. They studied the social-cognition literature to develop predictive models, and then refined those models by collecting data from laboratory observations. Finally, the group incorporated the resulting models into the software that controls Robovie in different conversational settings.
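To give a flavor of how such a model might be turned into robot behavior, the sketch below shows a minimal gaze controller in Python. It is purely illustrative: the role-based gaze proportions, participant names, and control loop are assumptions for the sake of the example, not the Carnegie Mellon/ATR implementation or its measured values.

```python
import random

# Illustrative gaze-allocation proportions. These are placeholder values,
# not the figures measured in the Carnegie Mellon / ATR studies.
GAZE_SHARE = {
    "addressee": 0.7,   # person the robot is currently speaking to
    "bystander": 0.2,   # other participants in the conversation
    "away":      0.1,   # brief glances away, as people do while thinking
}

class GazeController:
    """Chooses where the robot should look during each short time slice."""

    def __init__(self, participants):
        self.participants = participants   # list of participant IDs
        self.addressee = participants[0]   # whoever is currently addressed

    def set_addressee(self, participant):
        """Shift conversational attention, e.g. when handing over the turn."""
        self.addressee = participant

    def next_gaze_target(self):
        """Sample a gaze target according to the role-based proportions."""
        roll = random.random()
        if roll < GAZE_SHARE["addressee"]:
            return self.addressee
        if roll < GAZE_SHARE["addressee"] + GAZE_SHARE["bystander"]:
            others = [p for p in self.participants if p != self.addressee]
            return random.choice(others) if others else self.addressee
        return "away"

# Usage: drive a (hypothetical) robot head once per time slice.
if __name__ == "__main__":
    controller = GazeController(["visitor_1", "visitor_2"])
    for _ in range(5):
        print("look at:", controller.next_gaze_target())
```

Sampling a target every short time slice, rather than fixing the gaze on one person, is one simple way a controller could distribute attention across several conversational partners in the manner the researchers describe.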




