What Happens When Robots Become Role Models

Children can find their behavior shaped by robotic companions—so let’s not screw them up.
February 22, 2017

When you spend a lot of time with someone, their characteristics can rub off on you. But what happens when that someone is a robot?

As artificial intelligence systems become increasingly human, their ability to influence people also grows. New Scientist reports that children who spend time with a robotic companion appear to pick up elements of its behavior. New experiments suggest that when kids play with a robot that's a real go-getter, for instance, they acquire some of its unremitting can-do attitude.

Other researchers are seeking to take advantage of similar effects in adults. A group at the Queensland University of Technology is deploying a small team of pint-sized humanoid Nao robots to coach people to eat more healthily. It hopes that chatting through diet choices with a robot, rather than logging calorie consumption on a smartphone, will be more effective in changing habits. It could work: as our own Will Knight has found out in the past, some conversational AI interfaces can be particularly compelling.

So as personal robots increasingly enter the home, they may not just do our bidding; they might become role models, too. And that means we must tread carefully, because while the stories above hint at the possibilities of positive reinforcement from automatons, others point to potential negative effects.

Some parents, for instance, have complained that Amazon's Alexa personal assistant is training their children to be rude. Alexa doesn't need people to say please and thank you, will tolerate answering the same question over and over, and remains calm in the face of tantrums. In short: it doesn't prepare kids for interacting with real people.

The process can flow both ways, of course. Researchers at Stanford University recently developed a robot that was designed to roam sidewalks, monitor humans, and learn how to behave with them naturally and appropriately. But as we’ve seen in the case of Microsoft’s AI chatbot, Tay—which swiftly became rude and anti-Semitic when it learned from Twitter users—taking cues from the crowd doesn’t always play out well.

In reality, there isn’t yet a fast track to creating robots that are socially intelligent—it remains one of the large unsolved problems of AI. That means that roboticists must instead carefully choose the traits they wish to be present in their machines, or else risk delivering armies of bad influence into our homes.

(Read more: New Scientist, Brisbane Times, “Personal Robots: Artificial Friends with Limited Benefits,” “Chatbots with Social Skills Will Convince You to Buy Something,” “Can This Man Make AI More Human?”)

Illustration by Rose Wong