
Eye Robot Aims to Crack Secret of Nonverbal Communication

Japanese robot communicates using eye movements alone.

Social referencing is the ability to communicate with nonverbal signals. Children, in particular, learn much from the expressions of adults in new situations: whether to be frightened, happy, sad, and so on. Nonverbal communication is important for everybody, but in its purest form, perfected by many a primary school teacher, it is possible to control young children with eyebrow movements alone (a skill sadly lacking in many workplaces).

Now nonverbal communication is being roboticised by Yoichi Yamazaki and his pals at the Tokyo Institute of Technology.

The team has built an “eye robot” consisting of nothing more than a pair of eyeballs capable of conveying a wide range of nonverbal signals. “The proposed system provides a user friendly interface so that humans and robots communicate in natural fashion,” say the team.

It’s not hard to create expressions with synthetic eyes. The difficulty for a computer is knowing what kind of message each expression conveys and when to use it. The team has worked this out by setting up the device to produce expressions at random and then asking viewers to evaluate each one.

Using the results of these questionnaires, Yamazaki and co have created a “mentality space” for expressions. Users talk to the eye robot which evaluates the conversation using a speech recognition program and then selects an appropriate eye expression from this space.
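The article doesn’t spell out how the system maps a conversation onto an expression, but the rough idea is a lookup in a rated expression space. Here is a minimal sketch in Python, assuming a hypothetical two-axis (valence/arousal) mentality space, made-up expression names and coordinates, and a toy keyword-based stand-in for the speech-analysis step; the paper’s actual space and inference method are not described here.

```python
# Illustrative sketch only: the axes, expression set, and selection rule below
# are hypothetical stand-ins, not the authors' actual method.
from math import dist

# Hypothetical 2D "mentality space": (valence, arousal) coordinates for a few
# canned eye expressions, as if rated beforehand by questionnaire viewers.
EXPRESSIONS = {
    "wide_open_bright": (0.8, 0.7),    # happy / excited
    "half_closed_soft": (0.4, -0.5),   # calm / content
    "downcast_slow":    (-0.7, -0.4),  # sad
    "narrowed_darting": (-0.6, 0.6),   # anxious / alarmed
}

def estimate_mentality(utterance: str) -> tuple[float, float]:
    """Toy stand-in for the speech-analysis step: map keywords in the
    recognised utterance to a point in the mentality space."""
    text = utterance.lower()
    if any(w in text for w in ("great", "happy", "thanks")):
        valence = 0.6
    elif any(w in text for w in ("bad", "sad", "sorry")):
        valence = -0.6
    else:
        valence = 0.0
    arousal = 0.5 if "!" in utterance else -0.2
    return (valence, arousal)

def select_expression(utterance: str) -> str:
    """Pick the stored eye expression whose rated position lies closest
    to the estimated mentality of the conversation."""
    target = estimate_mentality(utterance)
    return min(EXPRESSIONS, key=lambda name: dist(EXPRESSIONS[name], target))

if __name__ == "__main__":
    print(select_expression("That's great news, thanks!"))  # -> wide_open_bright
```

The nearest-neighbour selection is just one plausible way to turn questionnaire ratings into behaviour; the real system could equally use fuzzy rules or a learned classifier.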

Clearly, eye expression is an important part of the nonverbal communication that goes on between humans. Crack this code and the team could have a winner on its hands. But while it is relatively straightforward to make eyes that look happy or sad, it will be much harder to create synthetic eyeballs that can hold their own in a nonverbal conversation.

Anyone feel a new kind of Turing test coming on?

Ref: arxiv.org/abs/0904.1631: Intent Expression Using Eye Robot for Mascot Robot System
