
People shouldn’t pay such a high price for calling out AI harms

Pioneering AI researcher Joy Buolamwini’s story is in many ways an inspirational tale. It is also a warning.


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This week everyone is talking about AI. The White House just unveiled a new executive order that aims to promote safe, secure, and trustworthy AI systems. It’s the most far-reaching bit of AI regulation the US has produced yet, and my colleague Tate Ryan-Mosley and I have highlighted three things you need to know about it. Read them here.

The G7 has just agreed on a (voluntary) code of conduct that AI companies should abide by, as governments seek to minimize the harms and risks created by AI systems. And later this week, the UK will be full of AI movers and shakers attending the government’s AI Safety Summit, an effort to come up with global rules on AI safety.

Taken together, these events suggest that the narrative pushed by Silicon Valley about the “existential risk” posed by AI is becoming increasingly dominant in public discourse.

This is concerning, because focusing on fixing hypothetical harms that may emerge in the future takes attention away from the very real harms AI is causing today. “Existing AI systems that cause demonstrated harms are more dangerous than hypothetical ‘sentient’ AI systems because they are real,” writes Joy Buolamwini, a renowned AI researcher and activist, in her new memoir, Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Read more of her thoughts in an excerpt from her book, out tomorrow.

I had the pleasure of talking with Buolamwini about her life story and what concerns her in AI today. Buolamwini is an influential voice in the field. Her research on bias in facial recognition systems made companies such as IBM, Google, and Microsoft change their systems and back away from selling their technology to law enforcement. 

Now, Buolamwini has a new target in sight. She is calling for a radical rethink of how AI systems are built, starting with more ethical, consensual data collection practices. “What concerns me is we’re giving so many companies a free pass, or we’re applauding the innovation while turning our head [away from the harms],” Buolamwini told me. Read my interview with her.

While Buolamwini’s story is in many ways an inspirational tale, it is also a warning. Buolamwini has been calling out AI harms for the better part of a decade, and she has done some impressive things to bring the topic into the public consciousness. What really struck me was the toll speaking up has taken on her. In the book, she describes having to check herself into the emergency room for severe exhaustion after trying to do too many things at once—pursuing advocacy, founding her nonprofit organization the Algorithmic Justice League, attending congressional hearings, and writing her PhD dissertation at MIT.

She is not alone. Buolamwini’s experience tracks with a piece I wrote almost exactly a year ago about how responsible AI has a burnout problem.  

Partly thanks to researchers like Buolamwini, tech companies face more public scrutiny over their AI systems. Companies have realized they need responsible AI teams to ensure that their products are developed in ways that mitigate potential harms. These teams evaluate how our lives, societies, and political systems are affected by the way these systems are designed, developed, and deployed.

But people who point out problems caused by AI systems often face aggressive criticism online, as well as pushback from their employers. Buolamwini described having to fend off public attacks on her research from one of the most powerful technology companies in the world: Amazon. 

When Buolamwini was first starting out, she had to convince people that AI was worth worrying about. Now, people are more aware that AI systems can be biased and harmful. That’s the good news. 

The bad news is that speaking up against powerful technology companies still carries risks. That is a shame. The voices trying to shift the Overton window on what kinds of risks are being discussed and regulated are growing louder than ever and have captured the attention of policymakers, such as the UK’s prime minister, Rishi Sunak. If the culture around AI actively silences other voices, that comes at a price to us all.

Deeper Learning

Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI

In an exclusive interview with Will Douglas Heaven, Sutskever says his new priority is not building the hottest new AI models but figuring out how to stop an artificial superintelligence (a hypothetical future technology he sees coming with the certainty of a true believer) from going rogue.

It gets wilder: Sutskever says he thinks ChatGPT just might be conscious (if you squint). He thinks the world needs to wake up to the true power of the technology his company and others are racing to create. And he thinks some humans will one day choose to merge with machines. Read the full interview here.

Bits and Bytes

Where does AI data come from? 
AI systems are notoriously opaque. In an attempt to tackle this problem, MIT, Cohere for AI, and 11 other institutions have audited and traced nearly 2,000 of the most widely used fine-tuning data sets, which form the backbone of many published breakthroughs in natural-language processing. The end product is nerdy but cool. (The Data Provenance Initiative)

AI will come for women first
Researchers from McKinsey argue that the jobs most at risk of being replaced by generative AI will be in customer service and sales—both professions that employ lots of women. (Foreign Policy)

What the UN’s AI advisory group is up to
The United Nations has been eager to step up and take a more active role in overseeing AI globally. To that end, it has assembled a team of AI experts from both industry and academia tasked with coming up with recommendations that will shape what a potential UN agency for AI governance could look like. This is a nice explainer. (Time)

AI is slowly reenergizing San Francisco
High housing costs, crime, and poverty have plagued the people of San Francisco for years. But now a new crop of buzzy AI startups is starting to draw money, people, and “vibes” back into the city. (The Washington Post $)

Margaret Atwood is not impressed with AI literature
The author, who published a searing review of a story written by a large language model, makes a strong case for why published authors don’t need to worry about AI. (The Walrus)

