Humans are instinctively tribal creatures. When we observe the interactions of people around us, we can intuitively infer whom we should get along with and whom we shouldn’t. This might sound like a negative instinct, but it’s actually what makes teamwork possible. Researchers at MIT believe this skill may be an important prerequisite for creating sociable AI systems that can cooperate with us in our day-to-day lives.
The idea of imbuing machines with social knowledge isn’t totally new. Game-playing AI agents also require an understanding of the relationship landscape to know with whom to cooperate and compete. But they’re given these relationship structures explicitly within the rules of the game, whereas humans can quickly pick them up in ambiguous situations.
Inspired by this ability, the researchers developed a new machine-learning algorithm that infers the relationships among multiple agents from a limited number of observations. They then ran two experiments to test its performance. In the first, the algorithm had to infer the alliances of players in a video game by watching several sequences of gameplay. In the second, it had to predict the players’ actions in the same video game, to test whether it truly understood each player’s motivations. It wasn’t trained for either task.
In both experiments, the algorithm’s inferences and predictions closely corresponded to the judgments of humans, demonstrating its ability to rapidly grasp social structures from very little data.
This story originally appeared in our AI newsletter, The Algorithm.