
An algorithm that mimics our tribal instincts could help AI learn to socialize

January 22, 2019

Humans are instinctively tribal creatures. When we observe the interactions of people around us, we can intuitively infer whom we should get along with and whom we shouldn’t. This might sound like a negative instinct, but it’s actually what makes teamwork possible. Researchers at MIT believe this skill may be an important prerequisite for creating sociable AI systems that can cooperate with us in our day-to-day lives.

The idea of imbuing machines with social knowledge isn’t totally new. Game-playing AI agents also require an understanding of the relationship landscape to know whom to cooperate and compete with. But they’re given these relationship structures explicitly within the rules of the game, while humans can quickly pick them up in ambiguous situations.

Inspired by this ability, the researchers developed a new machine-learning algorithm to figure out the relationships among multiple agents from a limited number of observations. They then ran two experiments to test the algorithm's performance. In the first, it had to infer the alliances of players in a video game by watching several sequences of gameplay. In the second, it had to predict the players' actions in the same video game, to see whether it truly understood each player's motivations. It wasn't trained for either task.
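The article doesn't describe the algorithm's internals, but the core task — inferring who is allied with whom from a handful of observed interactions — can be illustrated with a deliberately simplified sketch. Everything below is an assumption for illustration: the agent names, the "help"/"hinder" interaction labels, and the frequency-based scoring rule are invented here, not taken from the MIT work.

```python
from collections import defaultdict

# Hypothetical observation log: (agent_a, agent_b, interaction).
# All names and labels are illustrative, not from the paper.
observations = [
    ("red", "blue", "help"),
    ("red", "blue", "help"),
    ("red", "green", "hinder"),
    ("blue", "green", "hinder"),
    ("red", "blue", "help"),
]

def infer_alliances(observations, threshold=0.5):
    """Score each agent pair by the fraction of cooperative
    interactions observed between them; pairs whose score
    exceeds the threshold are inferred to be allies."""
    counts = defaultdict(lambda: [0, 0])  # pair -> [helps, total]
    for a, b, interaction in observations:
        pair = tuple(sorted((a, b)))      # order-independent pair key
        counts[pair][1] += 1
        if interaction == "help":
            counts[pair][0] += 1
    return {pair: helps / total > threshold
            for pair, (helps, total) in counts.items()}

alliances = infer_alliances(observations)
# ("blue", "red") is inferred as an alliance; the pairs
# involving "green" are not.
```

A real system in this vein would replace the frequency count with probabilistic inference over latent relationship structures and far richer behavioral evidence, but the input/output shape — sparse observations in, a relationship graph out — is the same.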

In both experiments, the algorithm’s inferences and predictions closely corresponded to the judgments of humans, demonstrating its ability to rapidly grasp social structures from very little data.

This story originally appeared in our AI newsletter, The Algorithm.

