On August 31, Airbnb launched Project Lighthouse, an initiative meant to “uncover, measure, and overcome discrimination” on the home-sharing platform. According to the company, Project Lighthouse will identify discrimination by measuring whether a renter’s perceived race correlates with differences in the rate or quality of that person’s bookings, cancellations, or reviews. This project comes amid an outpouring of solidarity statements and policy changes from the tech industry in response to uprisings after the killing of George Floyd by Minneapolis police on May 25.
While these nods toward racial justice may be well-intentioned, they highlight a problem that casts doubt on whether the industry’s efforts to date can truly combat bias: the tendency to position race, not racism, as the cause of discrimination.
This way of thinking about inequality is emblematic of “racecraft,” a term coined by sociologist Karen E. Fields and historian Barbara J. Fields to describe “the mental terrain and pervasive beliefs” about race and racism in America. Though Fields and Fields outline many aspects of the concept, their basic proposition is that the very idea of race arises out of racist practices rather than biological realities. Racecraft, they write, is a “conjuror’s trick of transforming racism into race, leaving black persons in view while removing white persons from the stage.”
A good example can be seen in Airbnb’s introduction to Project Lighthouse, which states that the company was “deeply troubled by stories of travelers who were turned away by Airbnb hosts during the booking process because of the color of their skin.” Were those guests really turned away because of their skin color, or because their prospective hosts were racist?
The same maneuver can be seen in a statement from Adam Mosseri, the head of Instagram, in which he says the platform’s efforts to ensure that Black voices are heard “won’t stop with the disparities people may experience solely on the basis of race.”
Racecraft, as conceptualized by Fields and Fields, is what allows Airbnb and Instagram to transform an aggressive act—racism—into a mere category: race. This sleight of hand positions race as the problem, allowing companies to absolve themselves of responsibility for racism. It also perpetuates the alluring myth that abolishing racial categories will lead to the post-racial society some hoped would follow the election of Barack Obama to the US presidency in 2008.
The truth companies need to grapple with, however, is that racist actions—not racial categories—are what cause discrimination.
I found linguistic evidence of racecraft throughout 63 public-facing documents that I collected and analyzed from Airbnb, Facebook, Twitter, Instagram, TikTok, and YouTube, all issued between May 26 and June 24 of this year. In a moment marked by racial injustice, these companies were reluctant to even use the word “race,” regularly opting to use “diversity” instead.
These statements (including those from TikTok and Facebook) also explicitly address Black people far more frequently than white people by using phrases such as “We stand with the Black community.” In 63 statements, Black people and communities were referenced 241 times while white people were referenced only four times.
By so rarely naming whiteness, these statements normalize the ideas that white people are raceless and that only those oppressed by the racial structure need have any interest in dismantling it. This language also suggests that dismantling racism doesn’t require confronting those privileged by racism.
This critique might seem nitpicky, but the language people use to talk about racism shapes how they understand what’s happening and which solutions sound appropriate. As others have pointed out, for example, the term “officer-involved shooting” is a passive phrasing that deemphasizes police officers’ use of deadly force, obscuring their role in state violence. In the same way, the language in these tech company statements obscures the central role that whiteness and racism play in the injustices Black people endure.
Such obfuscation spills over into the solutions that companies propose. Project Lighthouse, for example, is built to examine the (Black) people who experience racism on Airbnb rather than the (white) people who are responsible for perpetuating it. This again positions race, not racism, as the problem to be overcome. By focusing on race as a category, Airbnb has inscribed the mental tricks of racecraft into its project.
Tech companies and social-media platforms need to understand that fighting racism cannot start and end with statements of solidarity and technical fixes.
Real change begins with increasing the number of people from underrepresented groups in executive positions, which both Airbnb and Facebook pledged to do in their statements. But tech companies cannot think about Black employees as just a convenient resource in times of racial upheaval. In crafting their public statements, many of these companies relied on Black employee groups for assistance. All of Twitter’s statements, for example, were written by employee resource groups—but, as the Washington Post has reported, this work was often unpaid, fell outside employees’ normal duties, and had potential negative ramifications for them.
Bland statements about diversity and inclusion fail to address the long-standing anti-Black injustice that persists in American society. The tech industry must talk about racism in ways that implicate systems of power and call attention to the systemic inequality and racial injustice that Black people face. Only then can the industry produce solutions that reduce harm.
With ongoing unrest in Kenosha, Wisconsin, after yet another case of racialized police violence, we’re sure to see more corporate statements regarding racial justice. Without more awareness of racecraft and its harms, they’re bound to repeat the same mistakes.
Amber M. Hamilton is a PhD candidate in sociology at the University of Minnesota and an affiliate of the Microsoft Research Social Media Collective. Her work focuses on the intersection of race and technology.