Technologies driven by algorithms and artificial intelligence are increasingly present in our lives, and we are now regularly bumping up against a thorny question: can these programs be neutral actors? Or will they always reflect some degree of human bias?
The dustup over Facebook’s “trending topics” list and its possible liberal bias hit such a nerve that the U.S. Senate called on the company to come up with an official explanation, and this week COO Sheryl Sandberg said the company will begin training employees to identify and control their political leanings.
This is just one consequence, however, of a broader trend that Fred Benenson, Kickstarter’s former data chief, calls “mathwashing”: our tendency to treat programs like Facebook’s as entirely objective simply because they have mathematics at their core.
One area of potential bias comes from the fact that so many of the programmers creating these programs, especially machine-learning experts, are male. In a recent Bloomberg article, Margaret Mitchell, a researcher at Microsoft, is quoted lamenting the dangers of a “sea of dudes” asking the questions central to creating these programs.
Concern has been building over this issue for some time, as studies have found evidence of bias in online advertising, recruiting, and pricing strategies driven by presumably neutral algorithms.
In one study, Harvard professor Latanya Sweeney looked at the Google AdSense ads that came up during searches of names associated with white babies (Geoffrey, Jill, Emma) and names associated with black babies (DeShawn, Darnell, Jermaine). She found that ads containing the word “arrest” were shown next to more than 80 percent of “black” name searches but fewer than 30 percent of “white” name searches.
Sweeney worries that the ways Google’s advertising technology perpetuates racial bias could undermine a black person’s chances in a competition, whether it’s for an award, a date, or a job.
Companies working in areas such as lending and credit, where human discrimination is well documented, must be especially careful.
ZestFinance, an online lender founded on the idea that machine-learning programs can expand the number of people deemed creditworthy by looking at tens of thousands of data points, maintains that it is well attuned to the dangers of discriminatory lending. To guard against discrimination, ZestFinance has built tools to test its own results.
But the danger remains that unrecognized bias, not just in the programming of an algorithm but in the data flowing into it, could inadvertently turn any program into a discriminator. And for consumers, who have no way to unpack the complexities of these programs, it will be hard to know whether they have been treated fairly.
“Algorithm and data-driven products will always reflect the design choices of the humans who built them,” Benenson explained in a recent Q&A with Technical.ly Brooklyn, “and it’s irresponsible to assume otherwise.”