What Am I Thinking About You?

Knowing how the brain deals with other people could lead to smarter computers.

The ability to discern what other people are thinking and feeling is critical to social interaction and a key part of the human experience. So it’s not surprising that the human brain devotes a lot of resources to so-called social cognition. But only recently has neuroscience begun to tease apart which brain regions and processes are devoted to thinking about other people.

Understanding how the brain perceives, interprets, and makes decisions about other people could help advance treatments and interventions for autism and other disorders in which social interaction is impaired. It could also help us build more socially intelligent computers. So far, artificial-intelligence researchers have struggled to program computers to make the kinds of social judgments that come easily to us, such as interpreting ambiguous facial expressions or deciding whether a person’s words convey anger or sadness.

More than a decade ago, neuroscientist Rebecca Saxe discovered a brain region that develops a “theory of mind”—a sense of what other people are thinking and feeling. More recently, she became an investigator in MIT’s Center for Brains, Minds, and Machines, and she’s been studying autism and social cognition in children and adults. Saxe spoke with MIT Technology Review contributing editor Courtney Humphries about the implications of this research on the social brain.

Is social cognition something seen only in humans?

There’s every reason to believe that at least in some ways, we are uniquely good at this kind of thing. Humans are by far the most social species, other than insects. Even the interaction you and I are having, that two strangers could meet and for no particular reason act coöperatively for an hour—that’s unheard of outside of humans. If two ants did it, they would be sisters. Our extraordinary social lives and our hugely complex cognitive capacities combine to make human social cognition distinctive.

How do you study it in the brain?

Nothing invasive—no genetic engineering, no optogenetics, none of these things. We are limited to what are called noninvasive neuroimaging technologies, and the most well known of these is functional MRI, which uses blood flow as an index of neural activity.
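
For readers who want to see what “blood flow as an index of neural activity” means in practice, here is a minimal sketch of the standard modeling assumption behind fMRI analysis: event times are convolved with a canonical hemodynamic response function to predict the blood-oxygen (BOLD) signal the scanner records. This is not from Saxe’s lab; the scan rate and event onsets are invented for illustration.

```python
# Minimal sketch: predict a BOLD time course from hypothetical event times.
# Assumes the standard linear model used in fMRI analysis; all numbers here
# (scan rate, onsets) are invented for illustration.
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 100                  # seconds per scan, scans in the run
onsets_s = [10, 60, 110]                # hypothetical "social thinking" events (seconds)

stimulus = np.zeros(n_scans)
for onset in onsets_s:
    stimulus[int(onset / tr)] = 1.0     # mark the scan during which each event occurs

# A common double-gamma hemodynamic response: a peak a few seconds after
# the event, followed by a small, slow undershoot.
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Convolving events with the response gives the predicted blood-flow signal,
# which analyses like these compare against what the scanner measured.
predicted_bold = np.convolve(stimulus, hrf)[:n_scans]
print(predicted_bold.round(3))
```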

So you can see which areas of the brain are active when people are thinking about other people. Was it a surprise to find brain regions devoted to social cognition?

In a sense, that had been predicted about 15 or 20 years earlier, when people noticed that kids with autism seemed disproportionately bad at that kind of thing. But otherwise this was completely unknown. In some ways, I think it was the most important, most surprising new discovery of human cognitive neuroscience. All the visual regions, all the sensory regions, all the motor control regions—we predicted they would be there. But the social brain was not predicted at all. It just emerged. That was wild.

In the last 10 years [we’ve been] trying to refine our interpretation of what information is in those brain regions, how they interact with one another, how they develop, and whether those brain regions have anything to do with autism.

And do these regions indeed not function well in people with autism?

That was the original hypothesis that we went after. Maybe [people with autism] are trying to solve social problems with the machinery we would use for other problems, instead of having the dedicated machinery. There is no evidence that that is right. Too bad, because I like that idea. Autism has turned out to be a much, much harder problem at every level of analysis than I think anyone expected. Ten years ago people thought that cognitively, neurally, genetically, autism would be crackable. Now it looks like maybe there are thousands of genetic variations of autism.

How might your work help lead to more socially capable computers?

To me, the signature of human social cognition is the same thing that makes good old-fashioned AI hard, which is its generativity. We can recognize and think about and reason through a literally infinite set of situations and goals and human minds. And yet we have a very particular and finite machinery to do that. So what are the right ingredients? If we know what those are, then we can try to understand how the combinations of those ingredients generate this massively productive, infinitely generalizable human capacity.

What do you mean by “ingredients”?

Let’s say you hear about a friend of yours. She was told she was being called to her boss’s office, and she thought she was finally getting the promotion she’d been waiting for. But it turned out she actually got fired. Let’s say the next day you see her coming down the street and she has a huge smile on her face. Probably not what you had expected, right?

You take that and you build a whole interior world. Maybe it’s a fake smile and she’s putting on a brave face. Maybe she’s relieved because now she can move to the other side of the continent and live with her boyfriend. You need to figure out: What were her goals? What did she want? What changed her mind? There are all kinds of features of that story that you were able to extract in the moment. If a computer could extract [such] features, we could [improve its ability to do] sentiment analysis. There’s a huge focus in AI right now on trying to take the natural language people use and figure out: Did they like or not like that thing? Did they like that restaurant or not like that restaurant? Now take it up to the level of distinguishing between language when you feel disappointed, lonely, or terrified. That’s the kind of problem that we want to solve.
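
To make the gap concrete, here is a minimal sketch, with invented toy data, of conventional sentiment analysis pushed one step toward the problem Saxe describes: a bag-of-words classifier asked to separate disappointment, loneliness, and terror rather than like from dislike.

```python
# Minimal sketch: fine-grained emotion classification from surface features.
# The six training sentences and their labels are invented for illustration;
# real systems would need far more data and richer representations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I really thought the promotion was mine this time.",
    "Nobody even noticed I had left the party.",
    "I heard footsteps behind me in the empty parking garage.",
    "The review came back and it was worse than I feared.",
    "Weekends are the hardest; the apartment is so quiet.",
    "My heart was pounding and I could not move.",
]
labels = ["disappointed", "lonely", "terrified",
          "disappointed", "lonely", "terrified"]

# Bag-of-words plus a linear model: these emotions share vocabulary and
# polarity, which is exactly why surface features struggle here.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I waited all year for this and it fell through."]))
```

With this little signal in the words themselves, a model like this mostly memorizes its training set; telling nearby negative emotions apart reliably seems to require something closer to the reasoning about goals and plans that Saxe describes next.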

How can computers learn to do that?

You need to translate those words into more abstract things—goals, desires, plans. My colleague Josh Tenenbaum and I have been working for years just to build a kind of mathematical representation of what it means to think of somebody as having a plan or a goal, such that this model can predict human judgments about the person’s goal in a really simple context. What do you need to know about a goal? We’re trying to build models that describe that knowledge.
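
The published models are more elaborate, but the core move can be sketched in a few lines: assume the person chooses actions approximately rationally given a goal, then invert that assumption with Bayes’ rule to infer the goal from the actions. Everything below (the grid, the goal locations, the rationality parameter) is a hypothetical toy, not the actual Saxe–Tenenbaum model.

```python
# Minimal sketch of Bayesian goal inference ("inverse planning").
# The world, goals, and rationality parameter are invented for illustration.
import numpy as np

goals = {"A": np.array([4, 0]), "B": np.array([0, 4])}    # two candidate goals
moves = {"right": np.array([1, 0]), "up": np.array([0, 1])}

def action_likelihood(pos, goal, beta=2.0):
    """P(action | goal): softmax over how much each move cuts the distance."""
    gains = np.array([np.linalg.norm(pos - goal) - np.linalg.norm(pos + step - goal)
                      for step in moves.values()])
    weights = np.exp(beta * gains)
    return dict(zip(moves, weights / weights.sum()))

pos = np.array([0, 0])
belief = {"A": 0.5, "B": 0.5}            # uniform prior over the agent's goal

for action in ["right", "right"]:        # the behavior we observe
    # Bayes' rule: reweight each goal by how well it explains this action.
    belief = {g: belief[g] * action_likelihood(pos, goals[g])[action] for g in goals}
    total = sum(belief.values())
    belief = {g: p / total for g, p in belief.items()}
    pos = pos + moves[action]

print(belief)   # the posterior now strongly favors goal A
```

After two rightward steps the observer’s belief concentrates on goal A: a goal has been inferred from behavior alone, which is the kind of human judgment such models aim to predict.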

That’s very different from having a computer look at millions of examples to find patterns.

Exactly. This is not big data; it’s trying to describe the structure of the knowledge. That’s always been viewed as an opposition: the people who want bigger data sets and the people who want the right knowledge structures. My impression right now is that there’s a lot more intermediate ground. What used to be viewed as opposite traditions in AI should now be viewed as complementary, where you try to figure out probabilistic representations that learn from data.
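
One way to picture that intermediate ground: keep a hand-built knowledge structure, here a softmax-rational choice rule, and let data determine its free parameter. The choice trials below are invented for illustration.

```python
# Minimal sketch: a fixed knowledge structure whose parameter is learned
# from data. Utilities and observed choices are invented for illustration.
import numpy as np

# Hand-built structure: P(choose option i) is proportional to exp(beta * utility_i).
def choice_probs(utilities, beta):
    weights = np.exp(beta * np.asarray(utilities))
    return weights / weights.sum()

# Hypothetical trials: (utilities the person faced, index of the option chosen).
# The last trial is a "mistake", which keeps the inferred rationality finite.
trials = [([1.0, 0.2], 0), ([0.5, 0.4], 0), ([0.1, 0.9], 1), ([0.6, 0.5], 1)]

# Maximum likelihood over a simple grid: the structure is assumed, the
# degree of rationality (beta) is learned from the observed choices.
betas = np.linspace(0.1, 10.0, 100)
loglik = [sum(np.log(choice_probs(u, beta)[c]) for u, c in trials) for beta in betas]
best = betas[int(np.argmax(loglik))]
print(f"best-fit rationality parameter: beta = {best:.2f}")
```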

But the prospect of replicating social cognition in a computer seems far off, right? We don’t yet understand how the brain does it.

It feels pretty plausible that a full understanding isn’t within my grasp in my lifetime, and that’s good, because it means I have a lot of work to do. So in the meantime, I do whatever seems likely to produce a little bit of instrumental progress toward that bigger goal.
