Artificial intelligence

AI researchers want to study AI the same way social scientists study humans

Maybe we don’t need to look inside the black box after all. We just need to watch how machines behave instead.
April 29, 2019
A conceptual illustration of a researcher studying AI. Illustration: Tsjisse Talsma

Much ink has been spilled on the black-box nature of AI systems, and on how it makes us uncomfortable that we often can’t understand why they reach the decisions they do. As algorithms have come to mediate everything from our social and cultural interactions to our economic and political ones, computer scientists have tried to meet rising demands for explainability by developing technical methods to understand these systems’ behaviors.

But a group of researchers from academia and industry is now arguing that we don’t need to penetrate these black boxes in order to understand, and thus control, their effect on our lives. After all, these are not the first inscrutable black boxes we’ve come across.

“We've developed scientific methods to study black boxes for hundreds of years now, but these methods have primarily been applied to [living beings] up to this point,” says Nick Obradovich, an MIT Media Lab researcher and co-author of a new paper published last week in Nature. “We can leverage many of the same tools to study the new black box AI systems.”

The paper’s authors propose creating a new academic discipline called “machine behavior.” It would approach the study of AI systems the same way we’ve always studied animals and humans: through empirical observation and experimentation.

In this way, a machine behaviorist is to a computer scientist what a social scientist is to a neuroscientist. The former looks to understand how an agent, whether artificial or biological, behaves in its habitat, when coexisting in groups, and when interacting with other intelligent agents. The latter seeks to dissect the decision-making mechanics behind those behaviors.

“We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously,” Iyad Rahwan, another Media Lab researcher and lead author on the paper, said in a blog post accompanying the publication. Thus they need to be studied “as a new class of actors with their own behavioral patterns and ecology.”

This isn’t to suggest that AI systems have developed some kind of free will. (They certainly have not; they’re only glorified math models.) Rather, it’s a move away from viewing AI systems as passive tools that can be assessed purely through their technical architecture, performance, and capabilities. They should instead be considered actors that change and influence their environments and the people and machines around them.

So, what would this look like in practice? A machine behaviorist might interrogate, for example, the impact of voice assistants on a child’s personality development. Or they might examine how online dating algorithms have changed the way people meet and fall in love. Ultimately, they would study the emergent properties that arise from many humans and machines coexisting and collaborating.

“We are all one giant human-machine system,” says Obradovich. “We need to acknowledge that and start studying it that way.”

It’s important to note that most of these ideas aren’t new. Roboticists, for example, have long studied human-computer interaction. And the field of science, technology, and society (STS) has what’s known as actor-network theory, a framework for describing everything in the social and natural worlds, both humans and algorithms, as actors that somehow relate to one another. But for the most part, each of these efforts has been siloed in a separate discipline. Bringing them together under one umbrella helps align their goals, formalize a common language, and foster interdisciplinary collaboration. “It will help us find each other,” Obradovich says.

Though they would work in a distinct discipline, machine behaviorists should still collaborate closely with AI researchers. As the behaviorists discover new ways AI systems behave and affect people, the researchers can bring those findings to bear on the systems’ designs. The more each discipline draws on the other’s expertise, the better they can ensure that artificial agents benefit humans rather than harm them.

“We need the expertise of scientists from across all behavioral and computational disciplines,” Obradovich says. “Figuring out how to live with machines is a problem too vast for any one discipline to solve alone.”

