Opinion

AI ethics groups are repeating one of society’s classic mistakes

Too many councils and advisory boards still consist mostly of people based in Europe or the United States.

International organizations and corporations are racing to develop global guidelines for the ethical use of artificial intelligence. Declarations, manifestos, and recommendations are flooding the internet. But these efforts will be futile if they fail to account for the cultural and regional contexts in which AI operates.

AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts under way today—of which there are dozens—aim to help everyone benefit from this technology, and to prevent it from causing harm. Generally speaking, they do this by creating guidelines and principles for developers, funders, and regulators to follow. They might, for example, recommend routine internal audits or require protections for users’ personally identifiable information.

We believe these groups are well-intentioned and are doing worthwhile work. The AI community should, indeed, agree on a set of international definitions and concepts for ethical AI. But without more geographic representation, they’ll produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe.

This work is not easy or straightforward. “Fairness,” “privacy,” and “bias” mean different things (pdf) in different places. People also have disparate expectations of these concepts depending on their own political, social, and economic realities. The challenges and risks posed by AI also differ depending on one’s locale.

If organizations working on global AI ethics fail to acknowledge this, they risk developing standards that are, at best, meaningless and ineffective across all the world’s regions. At worst, these flawed standards will lead to more AI systems and tools that perpetuate existing biases and are insensitive to local cultures.

In 2018, for example, Facebook was slow to act on misinformation spreading in Myanmar that ultimately led to human rights abuses. An assessment (pdf) paid for by the company found that this oversight was due in part to Facebook’s community guidelines and content moderation policies, which failed to address the country’s political and social realities.

To prevent such abuses, companies working on ethical guidelines for AI-powered systems and tools need to engage users from around the world to help create appropriate standards to govern these systems. They must also be aware of how their policies apply in different contexts.

Despite the risks, there’s a clear lack of regional diversity in many AI advisory boards, expert panels, and councils appointed by leading international organizations. The expert advisory group for Unicef’s AI for Children project, for example, has no representatives from regions with the highest concentration of children and young adults, including the Middle East, Africa, and Asia.

Unfortunately, as it stands today, the entire field of AI ethics is at grave risk of limiting itself to languages, ideas, theories, and challenges from a handful of regions—primarily North America, Western Europe, and East Asia.

This lack of regional diversity reflects the current concentration of AI research (pdf): 86% of papers published at AI conferences in 2018 were attributed to authors in East Asia, North America, or Europe. And fewer than 10% of references listed in AI papers published in these regions are to papers from another region. Patents are also highly concentrated: 51% of AI patents published in 2018 were attributed to North America.

Those of us working in AI ethics will do more harm than good if we allow the field’s lack of geographic diversity to define our own efforts. If we’re not careful, we could wind up codifying AI’s historic biases into guidelines that warp the technology for generations to come. We must start to prioritize voices from low- and middle-income countries (especially those in the “Global South”) and those from historically marginalized communities.

Advances in technology have often benefited the West while exacerbating economic inequality, political oppression, and environmental destruction elsewhere. Including non-Western countries in AI ethics is the best way to avoid repeating this pattern.

The good news is there are many experts and leaders from underrepresented regions to include in such advisory groups. However, many international organizations seem not to be trying very hard to solicit participation from these people. The newly formed Global AI Ethics Consortium, for example, has no founding members representing academic institutions or research centers from the Middle East, Africa, or Latin America. This omission is a stark example of colonial patterns (pdf) repeating themselves.

If we’re going to build ethical, safe, and inclusive AI systems rather than engage in “ethics washing,” we must first build trust with those who have historically been harmed by these same systems. That starts with meaningful engagement.

At the Montreal AI Ethics Institute, where we both work, we’re trying to take a different approach. We host digital AI ethics meetups, which are open discussions that anyone with an internet connection or phone can join. During these events, we’ve connected with a diverse group of individuals, from a professor living in Macau to a university student studying in Mumbai.

Meanwhile, groups like the Partnership on AI, recognizing the lack of geographic diversity in AI more broadly, have recommended changes to visa laws and proposed policies that make it easier for researchers to travel and share their work. Masakhane, a grassroots organization, brings together natural-language-processing researchers from Africa to bolster machine-translation work that has neglected nondominant languages.

It’s encouraging to see international organizations trying to include more diverse perspectives in their discussions about AI. It’s important for all of us to remember that regional and cultural diversity are key to any conversation about AI ethics. Making responsible AI the norm, rather than the exception, is impossible without the voices of people who don’t already hold power and influence.

Abhishek Gupta is the founder of the Montreal AI Ethics Institute and a machine-learning engineer at Microsoft, where he serves on the CSE Responsible AI Board. Victoria Heath is a researcher at the Montreal AI Ethics Institute and a senior research fellow at the NATO Association of Canada.
