
AI researchers must confront “missed opportunities” to achieve social good

Deeper collaboration with social sciences and underserved communities is required to make sure that AI tools don’t cause more problems than they solve.

Technologists need to be prepared to get out of their comfort zone and engage more with the experts and communities affected by AI algorithms—or the systems they build will reinforce and exacerbate social problems, according to one leading expert.

Rediet Abebe, a computer science researcher at Cornell University, specializes in algorithms, artificial intelligence, and their application for social good. She told an audience at EmTech Digital, an event organized by MIT Technology Review, that she has discovered a surprising lack of collaboration in certain areas of AI research.

“Algorithmic and AI-driven solutions are embedded in every aspect of our lives—lending decisions, housing applications, interactions with the criminal justice system,” she said. “There’s a disconnect between researchers and practitioners and communities.”

The unintended consequences of algorithmic models have caused a great deal of controversy, including revelations that AI-driven risk assessment tools are being trained on biased historical data. Face recognition systems trained on lopsided data sets, meanwhile, are much more likely to misidentify dark-skinned women than light-skinned men. Attempts to fix one issue often perpetuate other systemic problems.

“We need adequate representation of communities that are being affected. We need them to be present and tell us the issues they’re facing,” said Abebe. “We also need insights from experts from areas including social sciences and the humanities … they’ve been thinking about this and working on this for longer than I’ve been alive. These missed opportunities to use AI for social good—these happen when we’re missing one or more of these perspectives.”

Abebe said she has tried to tackle this problem as cofounder of Mechanism Design for Social Good, a large interdisciplinary research group that she believes can be a model for greater collaboration and participation.

The organization has focused its own efforts on a handful of areas. These include global inequality, the application of AI in developing nations, algorithmic bias and discrimination, and the impact of algorithmic decision-making on specific policy areas including online labor markets, health care, and housing.

One example she pointed to from her own work was a project to use AI to investigate which families should receive government financial support when they are hit with an “income shock”—for example, a missed paycheck or an unexpected bill.

Instead of using traditional models, a team from Cornell and Princeton tried an interdisciplinary approach that brought in data and expertise from affected communities.

“We were able to identify economically distressed families that you wouldn’t normally find,” she said. She added, “There are many families who might look like they’re doing okay [when considered by typical models] … but they are more susceptible to economic shocks.”

She also pointed to work done by Nobel Prize–winning economist Alvin Roth at Stanford, who has used interdisciplinary research to develop models that better match kidney donors with patients. Meanwhile, said Abebe, a project by the University of Michigan’s Tawanna Dillahunt to design tools for low-resource job seekers involved a great deal of consultation with the people who were most likely to use it. Other researchers, she said, should follow their lead and reach out to get better informed before pushing their technologies into the world.

“I would recommend just getting uncomfortable,” she said. “Attend a talk you wouldn’t normally attend—an inequality talk in your sociology department, for example. If something seems interesting to you, go learn the perspectives of other communities that have been working on it.”
