Policy

The AI community needs to take responsibility for its technology and its actions

At the opening keynote of a prominent AI research conference, Celeste Kidd, a cognitive psychologist, challenged the audience to think critically about the future they want to build.
December 13, 2019
Celeste Kidd (Niall Carson/AP)

On Monday, at the opening of one of the world’s largest gatherings of AI researchers, Celeste Kidd addressed thousands of attendees in a room nearly twice the size of a football field. She was not pulling her punches.

“There’s no such thing as a neutral platform,” the influential scientist and prominent #metoo figurehead told those gathered at the NeurIPS conference in Vancouver. “The algorithms pushing content online have profound impacts on what we believe.”

Kidd, a professor of psychology at the University of California, Berkeley, is known within her field for making important contributions to our understanding of theory of mind—how we acquire knowledge and how we form beliefs. Two years ago, she also became known to the wider world as one of the “Silence Breakers” Time named Person of the Year for speaking out against sexual abuse and harassment.

On stage, Kidd shared five lessons from her research and demonstrated how the tech industry’s decisions could influence people to develop false beliefs—denying climate change, for example. Near the end of her talk, she also shared her experience with sexual harassment as a graduate student and directly addressed some of the misunderstandings she’d heard about the #metoo movement from men.

“It may seem like a scary time to be a man in tech right now,” she said to the conference-goers, roughly 80% of whom are men this year. “There’s a sense that a career could be destroyed over awkward passes or misunderstandings.”

“What I want to say today to all of the men in the room is that ​you have been misled​,” she said.

Her talk received a standing ovation—a rare moment in the conference’s history.

Kidd’s remarks come at a time when the AI community—and the tech industry more broadly—has been forced to reckon with the unintentional harms of its technologies. In the past year alone, a series of high-profile cases have exposed how deepfakes can be used to abuse women, how algorithms can make discriminatory decisions in health care and credit lending, and how developing AI models can be immensely costly for the environment. At the same time, the community has been rocked by several sexual abuse and harassment scandals, including some over incidents at previous years of the conference itself. It has also continued to suffer from appalling diversity numbers.

But Kidd’s talk highlighted an important shift that has begun to happen—one that was felt palpably in the room that night. After her talk, dozens of people lined up at the microphones scattered around the room to thank her for speaking out about these issues. Dozens more gathered around her after the session—some just to shake her hand in gratitude. To attendees who remember the annual gathering even two years ago, there is a new openness to acknowledging these challenges and a renewed focus on doing better.

The day after her talk, I sat down with Kidd to talk more about the two messages she delivered, how they are related, and her hopes for the future.

This interview has been edited and condensed for clarity.

In the research portion of your talk, you ended with your message: “There’s no such thing as a neutral platform.” How did you arrive at this conclusion from your research?

Something I’ve only realized in the past few years—because of my interactions with my two graduate students—is that there’s not really a distinction between knowledge and beliefs. Those are the same thing, basically.

Now we’re moving toward understanding how these dynamics that we’ve observed in lab experiments extend to the real world. When somebody goes to the internet not sure of what they should believe, what do they tend to walk away with from these supposedly neutral searches? Can we use those same kinds of ideas to try to explain why people believe the earth is flat, and why those misconceptions don’t get corrected? That’s not an area I’ve seen get a lot of attention, but it’s one that I think is very important.

Why was it important for you to share your message at this conference?

So much of what we believe now comes from online sources. Especially kids—they are forming the building blocks of knowledge that will later shape what they believe in and what they’re interested in learning about downstream. For young kids, there’s also reason to expect that they are consuming more autoplay and suggested content than adults. So initially they’re more at risk of being influenced by the algorithms pushing content, because that’s their only choice.

My talk was intended as a message to people working on the systems to be considerate about how those back-end decisions influence an individual person’s beliefs, but also society as a whole. I don’t think there’s enough sensitivity in tech to how the decisions that you make behind the scenes about how to push content impact people’s lives.

There’s a common battle cry when questions come up about how content is offered—the claim that platforms are neutral. And I think that’s dishonest. The back-end decisions that you make directly influence what people believe, and people know this. So to pretend like that’s not a thing is dishonest.

When we change people’s behavior, what we are doing is changing their beliefs. And those changes have real, concrete consequences. When a parent searches for information about whether or not they should vaccinate their child—if they walk up to their laptop undecided and they walk away decided, it really matters what content was offered, what views were represented.

I don’t think it's reasonable to say you don’t have any responsibility for what a mother does to her child—whether she decides to vaccinate them or not—because that was not something you considered when you built the system. I think you have a responsibility to consider what the repercussions are of the back-end decisions.

You mentioned in the private Q&A after your talk that you’ve never presented both your research and your experiences of sexual harassment in a public forum. Why do you usually separate those two, and why did you decide to combine them together?

I’ll start with the second one—I made an exception to the rule in this case because I thought it was very important for this community to hear that message. Computer science is a field where women have had a really difficult time for a long time getting traction and breaking in. There’s a high degree of interest early on, and then there’s a leaky pipeline. And I know that one of the things that makes it very hard to do well as a woman in this field is having fewer mentorship opportunities.

I know that it’s very common that men in computer science with good intentions are worried about offending women. The downstream implication of that is that women are losing out on training opportunities, but also the men are losing out on the ideas and innovation that the women would bring. Empirical studies show that diversity leads to higher rates of innovation. And the opportunity to talk to a large portion of these men in one room all at once—I felt like it was important, and I had to do that.

The reason why I usually don’t mix them: I didn’t choose what happened to me my first year of grad school at Rochester. I didn’t choose what the university’s response would be. I wanted a career in science, and I want to protect that, so I don’t want to end up talking less about science because I’ve spoken out on this issue. But I’m also aware that most people don’t get that opportunity; they don’t get a platform to speak out. Usually what happens to people who were sexually harassed early in their careers and had their institution retaliate against them is that they disappear. I wouldn’t feel okay doing nothing. People who have privilege need to use it where they can. And this was an opportunity to use the privilege of giving a talk at NeurIPS to help the more junior women who deserve equal treatment.

Were you worried about the way these comments would land?

Of course. But being afraid of the response is not a reason to not speak. I talked a little bit about privilege. I’m also in a relatively privileged position at this particular conference because there are so many people in industry, and I think the pressures to keep people silent are greater at companies than they are in academia, at least in tech right now. So if I was worried about being fired, that would be an extra thing keeping me quiet. UC Berkeley was aware of my speaking out on these issues before they hired me, and they’ve shown me nothing but support and encouragement in fighting for equity. By being in a place that supports me like that, I can say things without fear of losing my job and not being able to pay for food for my child. And that’s the other reason I felt like I should speak.

I was fully expecting some people to be angry. It’s 13,000 people. Of course some people may misunderstand me. I literally talked about how when we use words, they’re ambiguous and people activate different concepts. It’s not possible to convey to every single person exactly what you have in mind.

Even though you usually separate your talks about your research and your activism, and you separated them into two sections at NeurIPS, to me they really address the same thing: how to develop responsible technology by taking more responsibility for your decisions and actions.

Right. You mentioned to me that there’s more talk in the AI community about the ethical implications, that there’s more appreciation for the connection between the technology and society. And I think part of that comes from the community becoming more diverse and involving more people. They come from less privileged places and are more acutely aware of things like bias and injustice, and of how technologies that were designed for a certain demographic may actually do harm to disadvantaged populations.

I hope that continues. We have to be talking to each other in order to make progress. And we all deserve to feel safe when interacting with each other.
