Sponsored

Artificial intelligence

Building a better society with better AI

Decentralizing data and reducing algorithmic bias can make AI more equitable and inclusive.

In association with Hewlett Packard Enterprise

Artificial intelligence (AI) has vast potential to improve every facet of society, from legacy engineering systems to healthcare to creative processes in arts and entertainment. In Hollywood, for example, studios are using AI to surface and measure bias in scripts—the very tools producers and writers need to create more equitable and inclusive media. However, AI is only as smart as the data it's trained on, and that data reflects real-life biases. To avoid perpetuating stereotypes and exclusivity, technologists are addressing equity and inclusion both in real life and in their innovations.

Innate bias in humans

As technologists look to use AI to find human-centric solutions to optimize industry practices and everyday lives alike, it’s critical to be mindful of the ways that our innate biases can have unintended consequences.

“As humans, we are highly biased,” says Beena Ammanath, the global head of the Deloitte AI Institute, and tech and AI ethics lead at Deloitte. “And as these biases get baked into the systems, there is a very high likelihood of sections of society being left behind—underrepresented minorities, people who don't have access to certain tools—and it can drive more inequity in the world.”

Projects that begin with good intentions, such as creating equal outcomes or mitigating past inequities, can still end up biased if systems are trained on biased data or if researchers don't account for how their own perspectives affect lines of research.

Thus far, adjusting for AI bias has often been reactive, with biased algorithms or underrepresented demographics discovered only after the fact, says Ammanath. Companies now have to learn how to be proactive: to mitigate these issues early on and to take accountability for missteps in their AI endeavors.

Algorithmic bias in AI

In AI, bias appears in the form of algorithmic bias. “Algorithmic bias is a set of several challenges in constructing an AI model,” explains Kirk Bresniker, chief architect at Hewlett Packard Labs and vice president at Hewlett Packard Enterprise (HPE). “We can have a challenge because we have an algorithm that is not capable of handling diverse inputs, or because we haven't gathered broad enough sets of data to incorporate into the training of our model. In either case, we have insufficient data.”

Algorithmic bias can also come from inaccurate processing, data being modified, or someone injecting a false signal. Whether intentional or not, the bias results in unfair outcomes, perhaps privileging one group or excluding another altogether.

As an example, Ammanath describes an algorithm designed to recognize different types of shoes such as flip flops, sandals, formal shoes, and sneakers. However, when it was released, the algorithm couldn’t recognize women’s shoes with heels. The development team was a group of fresh college grads—all male—who never thought of training it on the heels of women’s shoes. 

“This is a trivial example, but you realize that the data set was limited,” Ammanath said. “Now think of a similar algorithm using historical data to diagnose a disease or an illness. What if it wasn't trained on certain body types or certain genders or certain races? Those impacts are huge.”

Critically, she says, “If you don't have that diversity at the table, you are going to miss certain scenarios.”
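The gap Ammanath describes can often be caught before release with a simple coverage check on the training labels. A minimal sketch in Python, using a hypothetical label set for the shoe-classifier example (the class names and the 5% threshold are illustrative assumptions, not from the original story):

```python
from collections import Counter

def coverage_report(labels, expected_classes, min_share=0.05):
    """Flag expected classes that are missing or underrepresented
    in a training set's labels. `min_share` is an assumed cutoff."""
    counts = Counter(labels)
    total = len(labels)
    report = {}
    for cls in expected_classes:
        share = counts.get(cls, 0) / total if total else 0.0
        report[cls] = {
            "count": counts.get(cls, 0),
            "share": share,
            "flagged": share < min_share,
        }
    return report

# Hypothetical training labels for the shoe classifier
labels = (["sneaker"] * 400 + ["sandal"] * 300
          + ["flip-flop"] * 200 + ["formal"] * 100)
expected = ["sneaker", "sandal", "flip-flop", "formal", "heel"]

report = coverage_report(labels, expected)
# "heel" never appears in the training data, so it is flagged
# before the model ever ships.
```

A check like this does not prove a dataset is unbiased, but it makes one class of omission visible early, while it is still cheap to fix.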

Better AI means self-regulation and ethics guidelines

Simply obtaining more (and more diverse) datasets is a formidable challenge, especially as data has become more centralized. Data sharing brings up many concerns, not the least of which are security and privacy.

“Right now, we have a situation where individual users have far less power than the vast companies that are collecting and processing their data,” says Nathan Schneider, assistant professor of media studies at the University of Colorado Boulder.

It is likely that expanded laws and regulations will eventually dictate when and how data can be shared and used. But innovation doesn't wait for lawmakers. Right now, the onus is on AI-developing organizations to be good data stewards, protecting individual privacy while striving to reduce algorithmic bias. Because technology is maturing so quickly, it's impossible to rely on regulations to cover every possible scenario, says Deloitte's Ammanath. “We are going to enter an era where you're balancing between being adherent to existing regulations and at the same time, self-regulating.”

This kind of self-regulation means raising the bar for the entire supply chain of technologies that go into building AI solutions, from the data to the training to the infrastructure required to make those solutions possible. Further, companies need to create pathways for individuals across departments to raise concerns over biases. While it is unlikely that bias can be eliminated altogether, companies must regularly audit the efficacy of their AI solutions.
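One concrete form such an audit can take is comparing a model's decision rates across demographic subgroups. A hedged sketch, using the common four-fifths (80%) rule of thumb for disparate impact; the group names, decisions, and threshold below are illustrative assumptions, not HPE's or Deloitte's methodology:

```python
def selection_rates(outcomes):
    """`outcomes` maps a group name to a list of 0/1 model decisions;
    returns each group's rate of positive decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the best-served group's rate (the 80% rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical audit data: model decisions recorded by group
outcomes = {
    "group_a": [1, 1, 0, 1, 1],  # selection rate 0.8
    "group_b": [1, 0, 0, 0, 1],  # selection rate 0.4
}
flags = disparate_impact_flags(outcomes)
# group_b's rate is half of group_a's, so it is flagged for review.
```

Run regularly against production logs, a check like this turns "audit the efficacy of AI solutions" from a one-time review into a routine, measurable practice.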

Because of the highly contextual nature of AI, self-regulation will look different for each company. HPE, for example, established ethical AI guidelines. A diverse set of individuals from across the company spent nearly a year working together to establish the company’s principles for AI, and then vetted those principles with a broad set of employees to ensure they could be followed and that they made sense for the corporate culture.

“We wanted to raise the general understanding of the issues and then collect best practices,” says HPE’s Bresniker. “This is everyone’s job—to be literate in this area.”

Technologists have reached a maturity with AI that has progressed from research to practical applications and value creation across all industries. The growing pervasiveness of AI across society means that organizations now have an ethical responsibility to provide robust, inclusive, and accessible solutions. This responsibility has prompted organizations to examine, sometimes for the first time, the data they’re pulling into a process. “We want people to establish that provenance, that measurable confidence in the data that's going in,” says Bresniker. “They have that ability to stop perpetuating systemic inequalities and create equitable outcomes for a better future.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
