Technologies driven by algorithms and artificial intelligence are increasingly present in our lives, and we are now regularly bumping up against a thorny question: can these programs be neutral actors? Or will they always reflect some degree of human bias?
The dustup over Facebook’s “trending topics” list and its possible liberal bias hit such a nerve that the U.S. Senate called on the company for an official explanation, and this week Facebook COO Sheryl Sandberg said the company will begin training employees to identify and control their political leanings.
This is just one result, however, of a broader trend that Fred Benenson, Kickstarter’s former data chief, calls “mathwashing”: our tendency to treat programs like Facebook’s as entirely objective simply because they have mathematics at their core.
One area of potential bias comes from the fact that so many of the programmers creating these programs, especially machine-learning experts, are male. In a recent Bloomberg article, Margaret Mitchell, a researcher at Microsoft, is quoted lamenting the dangers of a “sea of dudes” asking the questions central to creating these programs.
Concern has been building over this issue for some time, as studies found evidence of bias in online advertising, recruiting, and pricing strategies driven by presumably neutral algorithms.
In one study, Harvard professor Latanya Sweeney looked at the Google AdSense ads that came up during searches of names associated with white babies (Geoffrey, Jill, Emma) and names associated with black babies (DeShawn, Darnell, Jermaine). She found that ads containing the word “arrest” were shown next to more than 80 percent of “black” name searches but fewer than 30 percent of “white” name searches.
Sweeney worries that the ways Google’s advertising technology perpetuates racial bias could undermine a black person’s chances in a competition, whether it’s for an award, a date, or a job.
Companies working in areas such as lending and credit, where human discrimination has a long and well-documented history, must be especially careful.
ZestFinance, an online lender founded on the idea that machine-learning programs can expand the number of people deemed creditworthy by looking at tens of thousands of data points, maintains that it is well attuned to the dangers of discriminatory lending. To guard against discrimination, ZestFinance has built tools to test its own results.
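One common way to test a lending model's results for discriminatory outcomes is to compare approval rates across demographic groups and flag disparities below the "four-fifths" threshold used in U.S. fair-lending and employment review. The sketch below is purely illustrative of that general technique; it is an assumption for exposition, not ZestFinance's actual tooling, and all names and data in it are hypothetical.

```python
# Illustrative disparate-impact check: compare approval rates between two
# groups and flag ratios below the four-fifths (80%) rule of thumb.
# Hypothetical sketch only -- not ZestFinance's actual methodology.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Made-up model outputs for two demographic groups of applicants.
group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, True, False]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))   # 0.5
print(ratio >= 0.8)      # False: below four-fifths, so flag for human review
```

A real audit would of course use far larger samples, statistical significance tests, and protected-class definitions from fair-lending law, but the basic idea is the same: the model's outputs, not just its code, are what must be checked.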
But the danger remains that unrecognized bias, not just in the programming of an algorithm but even in the data flowing into it, could inadvertently turn any program into a discriminator. For consumers who are unable to unpack the complexities of these programs, it will be hard to know whether they have been treated fairly.
“Algorithm and data-driven products will always reflect the design choices of the humans who built them,” Benenson explained in a recent Q&A with Technical.ly Brooklyn, “and it’s irresponsible to assume otherwise.”