AI researchers must confront “missed opportunities” to achieve social good
Technologists need to be prepared to get out of their comfort zone and engage more with the experts and communities affected by AI algorithms—or the systems they build will reinforce and exacerbate social problems, according to one leading expert.
Rediet Abebe, a computer science researcher at Cornell University, specializes in algorithms, artificial intelligence, and their application for social good. She told an audience at EmTech Digital, an event organized by MIT Technology Review, that she has discovered a surprising lack of collaboration in certain areas of AI research.
“Algorithmic and AI-driven solutions are embedded in every aspect of our lives—lending decisions, housing applications, interactions with the criminal justice system,” she said. “There’s a disconnect between researchers and practitioners and communities.”
The unintended consequences of algorithmic models have caused a great deal of controversy, including revelations that AI-driven risk assessment tools are being trained on biased historical data. Face recognition systems trained on lopsided data sets, meanwhile, are much more likely to misidentify dark-skinned women than light-skinned men. Attempts to fix one issue often perpetuate other systemic problems.
“We need adequate representation of communities that are being affected. We need them to be present and tell us the issues they’re facing,” said Abebe. “We also need insights from experts from areas including social sciences and the humanities … they’ve been thinking about this and working on this for longer than I’ve been alive. These missed opportunities to use AI for social good—these happen when we’re missing one or more of these perspectives.”
Abebe said she has tried to tackle this problem as cofounder of Mechanism Design for Social Good, a large interdisciplinary research group that she believes can be a model for greater collaboration and participation.
The organization has focused its own efforts on a handful of areas. These include global inequality, the application of AI in developing nations, algorithmic bias and discrimination, and the impact of algorithmic decision-making on specific policy areas including online labor markets, health care, and housing.
One example she pointed to from her own work was a project to use AI to investigate which families should receive government financial support when they are hit with an “income shock”—for example, a missed paycheck or an unexpected bill.
Instead of using traditional models, a team from Cornell and Princeton tried an interdisciplinary approach that brought in data and expertise from affected communities.
“We were able to identify economically distressed families that you wouldn’t normally find,” she said. She added, “There are many families who might look like they’re doing okay [when considered by typical models] … but they are more susceptible to economic shocks.”
She also pointed to work by Nobel Prize–winning economist Alvin Roth at Stanford, who has used interdisciplinary research to develop models that better match kidney donors with patients. Meanwhile, said Abebe, a project by the University of Michigan’s Tawanna Dillahunt to design tools for low-resource job seekers involved a great deal of consultation with the people most likely to use them. Other researchers, she said, should follow their lead and reach out to get better informed before pushing their technologies into the world.
“I would recommend just getting uncomfortable,” she said. “Attend a talk you wouldn’t normally attend—an inequality talk in your sociology department, for example. If something seems interesting to you, go learn the perspectives of other communities that have been working on it.”