How Babies Know What Robots Are Thinking

New research tells us something about infants’ theory of mind, as well as how to build robots humans instinctively recognize as sentient
February 2, 2011

Computer scientists don’t usually see their labs filling up with dozens of mothers and their infants, but that’s exactly what happened to Rajesh Rao as he embarked on one of his most recent experiments. In order to discover what it takes to make an infant engage with a robot as if it were a sentient being, he had to get his hands on the real thing.

At one year of age, infants typically begin to follow the gaze of the adults in their line of sight. It's a useful way to recognize what's important and to discern which words attach to which objects. Indeed, one theory in evolutionary biology holds that the highly visible whites of humans' eyes may have evolved to facilitate gaze following as an important mechanism of social interaction.

To test whether gaze following is important for discerning the sentience of an artificial being, Rao let babies watch adults interact with HOAP-2, a humanoid robot from Fujitsu Laboratories. Across four conditions, he contrasted "normal interaction", in which the adult followed the robot's gaze, with "passive" conditions, in which the robot did nothing or the adult's and the robot's gazes did not sync.

When adults interacted with the robot by following its gaze as if it were another adult, babies subsequently followed the robot's gaze. The robot did not talk and had a limited range of gestures (some of which it used in other experimental conditions), which suggests that gaze following is a key cue for babies – and humans generally – that an otherwise "inanimate" being has a mind.

The research both provides a uniquely controlled method for picking apart which features of a humanoid tell an infant that it has some level of awareness, and suggests that for social robots to interact with humans in a natural manner, gaze following must be part of their repertoire.

Cited:

“Social” robots are psychological agents for infants: A test of gaze following (pdf)

Neural Networks, October/November 2010

Follow Mims on Twitter or contact him via email.
