
How to Hold Algorithms Accountable

Algorithmic systems have a way of making mistakes or leading to undesired consequences. Here are five principles to help technologists deal with that.

Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable.

Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product managers need new ways to think about, design, and implement algorithmic systems in publicly accountable ways. Over the past several months, we and some colleagues have been trying to address these goals by crafting a set of principles for accountable algorithms.

Let’s consider one case where algorithmic accountability is sorely needed: the risk assessment scores that inform criminal-justice decisions in the U.S. legal system. These scores are calculated by asking a series of questions about the defendant’s age, criminal history, and other characteristics. The data are fed into an algorithm to calculate a score that can then be used in decisions about pretrial detention, probation, parole, or even sentencing. And these models are often trained using proprietary machine-learning algorithms and data about previous defendants.

Recent investigations show that risk assessment algorithms can be racially biased, generating scores that, when they are wrong, more often misclassify black defendants as high risk. These results have generated considerable controversy. Given the literally life-altering nature of these algorithmic decisions, such systems deserve careful attention and should be held accountable for negative consequences.
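
To make that finding concrete: one way to check for this kind of bias is to compare false positive rates, the share of people who did not reoffend but were nonetheless labeled high risk, across groups. The sketch below is illustrative only; the field names, threshold, and toy data are our assumptions, not taken from any real tool or investigation.

```python
# Minimal sketch: compare false positive rates across groups.
# Assumes records with hypothetical fields: group, score, reoffended.
from collections import defaultdict

HIGH_RISK_THRESHOLD = 7  # hypothetical cutoff on a 1-10 scale

def false_positive_rates(records):
    """Among people who did not reoffend, how often was each group
    labeled high risk anyway?"""
    labeled_high = defaultdict(int)
    did_not_reoffend = defaultdict(int)
    for r in records:
        if not r["reoffended"]:
            did_not_reoffend[r["group"]] += 1
            if r["score"] >= HIGH_RISK_THRESHOLD:
                labeled_high[r["group"]] += 1
    return {g: labeled_high[g] / did_not_reoffend[g]
            for g in did_not_reoffend}

# Toy data for illustration.
records = [
    {"group": "A", "score": 8, "reoffended": False},
    {"group": "A", "score": 3, "reoffended": False},
    {"group": "B", "score": 9, "reoffended": False},
    {"group": "B", "score": 8, "reoffended": False},
]
print(false_positive_rates(records))  # {'A': 0.5, 'B': 1.0}
```

A gap like the one in this toy output, where one group's errors fall disproportionately on the "high risk" side, is exactly the pattern the investigations flagged.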

Algorithms and the data that drive them are designed and created by people. Even for techniques such as genetic algorithms that evolve on their own, or machine-learning algorithms where the resulting model was not hand-crafted by a person, results are shaped by human-made design decisions, rules about what to optimize, and choices about what training data to use. “The algorithm did it” is not an acceptable excuse if algorithmic systems make mistakes or have undesired consequences.

Accountability implies an obligation to report and justify algorithmic decision-making, and to mitigate any negative social impacts or potential harms. We’ll consider accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness.

Responsibility. For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a focus on avenues for redress, public dialogue, and internal authority for change. This could be as straightforward as giving someone on your technical team the internal power and resources to change the system, and making sure that person’s contact information is publicly available.

Explainability. Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions. These explanations must be accessible and understandable to the target audience; purely technical descriptions are not appropriate for the general public. Explaining risk assessment scores to defendants and their legal counsel would promote greater understanding and help them challenge apparent mistakes or faulty data. Some machine-learning models are more explainable than others, but just because there’s a fancy neural net involved doesn’t mean that a meaningful explanation can’t be produced.
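
As a hedged illustration of what such an explanation could look like: if the underlying model were a simple weighted score (a strong assumption; deployed systems may be far more complex), each factor’s contribution to the final number can be reported in plain language. The feature names and weights below are invented for illustration.

```python
# Sketch: turn a simple weighted risk score into a plain-language
# account of which factors pushed the score up. Feature names and
# weights are hypothetical, not taken from any real tool.
WEIGHTS = {
    "prior_arrests": 0.8,
    "age_under_25": 1.5,
    "employment_unstable": 0.6,
}

def explain(defendant):
    contributions = {name: WEIGHTS[name] * defendant.get(name, 0)
                     for name in WEIGHTS}
    total = sum(contributions.values())
    lines = [f"Overall score: {total:.1f}"]
    for name, value in sorted(contributions.items(),
                              key=lambda kv: -abs(kv[1])):
        if value:
            lines.append(f"  {name} added {value:.1f} points")
    return "\n".join(lines)

print(explain({"prior_arrests": 2, "age_under_25": 1}))
# Overall score: 3.1
#   prior_arrests added 1.6 points
#   age_under_25 added 1.5 points
```

Even when the real model is not a simple weighted sum, approximate, human-readable breakdowns of this kind can give a defendant something concrete to contest.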

Accuracy. Algorithms make mistakes, whether because of data errors in their inputs (garbage in, garbage out) or statistical uncertainty in their outputs. The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked. Understanding the nature of errors produced by an algorithmic system can inform mitigation procedures.
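
A minimal sketch of what identifying, logging, and benchmarking errors might look like in practice (the field names and validity rules are assumptions, not a standard): reject and log obviously bad inputs before scoring, and track how often predictions disagree with observed outcomes.

```python
# Sketch: basic input validation, error logging, and benchmarking.
# Field names and validity rules are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("risk_model")

def validate(record):
    """Flag obviously bad inputs before they reach the model."""
    problems = []
    if not 12 <= record.get("age", -1) <= 110:
        problems.append("age out of range")
    if record.get("prior_arrests", -1) < 0:
        problems.append("negative arrest count")
    return problems

def error_rate(predictions, outcomes):
    """Fraction of predictions that disagreed with observed outcomes."""
    wrong = sum(p != o for p, o in zip(predictions, outcomes))
    return wrong / len(predictions)

bad_record = {"age": 8, "prior_arrests": 2}
for problem in validate(bad_record):
    log.warning("input rejected: %s in %s", problem, bad_record)

print(error_rate([1, 0, 1, 1], [1, 1, 0, 1]))  # 0.5
```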

Auditability. The principle of auditability states that algorithms should be developed to enable third parties to probe and review their behavior. Enabling algorithms to be monitored, checked, and criticized would lead to more conscious design and course correction in the event of failure. While there may be technical challenges in allowing public auditing while protecting proprietary information, private auditing (as in accounting) could provide some public assurance. Where possible, even limited access (e.g., via an API) would allow the public a valuable chance to audit these socially significant algorithms.
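
As one hedged sketch of what limited API access could look like: a small endpoint that lets outside auditors submit feature values and see the score the system would return, along with the model version, so behavior can be probed systematically without publishing the model itself. The endpoint path, fields, and stand-in scoring rule are all hypothetical.

```python
# Sketch: a bare-bones HTTP endpoint for outside auditors. The path,
# input fields, and the stand-in scoring rule are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL_VERSION = "demo-0.1"

def score(features):
    # Stand-in for the real model.
    return (0.8 * features.get("prior_arrests", 0)
            + 1.5 * features.get("age_under_25", 0))

class AuditHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/audit/score":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"model_version": MODEL_VERSION,
                           "score": score(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g.: curl -d '{"prior_arrests": 2}' localhost:8000/audit/score
    HTTPServer(("localhost", 8000), AuditHandler).serve_forever()
```

An auditor could then submit many queries that vary one input at a time and record how the score responds.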

Fairness. As algorithms increasingly make decisions based on historical and societal data, existing biases and historically discriminatory human decisions risk being “baked in” to automated decisions. All algorithms making decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the criteria used should be publicly released and explained.
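
One common (if contested) starting point for such an evaluation is a disparate-impact check: compare the rate at which each group receives the favorable outcome against the best-off group, and flag ratios far below one. A sketch with toy data:

```python
# Sketch: disparate-impact check, the rate of favorable outcomes for
# each group relative to the most favored group. Toy data only.
def favorable_rates(decisions):
    """decisions: list of (group, got_favorable_outcome) pairs."""
    totals, favorable = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    rates = favorable_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(disparate_impact(decisions))
# {'A': 1.0, 'B': 0.5}; ratios well below 1 merit scrutiny
```

Whichever criteria are chosen, publishing them alongside the results is what turns an internal check into a public account.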

There’s lots of room to adapt and interpret these principles to your own context, and of course political, proprietary, or business concerns will intervene. But we do think that considering these ideas throughout the design, implementation, and release cycles of development will lead to more socially responsible deployment of algorithms in society.

How do you get started? We outline some pragmatic questions that the product and development team can work through to form a social impact statement that addresses these principles.

Nicholas Diakopoulos is an assistant professor at the University of Maryland, College Park. Sorelle Friedler is an assistant professor at Haverford College, and an affiliate at the Data & Society Research Institute.

 
