Microsoft CEO Satya Nadella is concerned about the power artificial intelligence will wield over our lives. In a post on Slate yesterday, he advised the computing industry to start thinking now about how to design intelligent software that respects our humanity.
“The tech industry should not dictate the values and virtues of this future,” he wrote.
Nadella called for “algorithmic accountability so that humans can undo unintended harm.” He said that smart software must be designed in ways that let us inspect its workings and prevent it from discriminating against certain people or using private data in unsavory ways.
These are noble and rational concerns—but ones tech leaders should have been talking about some time ago. There is ample evidence that the algorithms and software shaping daily life can already be marked by troubling biases.
Studies from the Federal Trade Commission have found signs that racial and economic biases decried in pre-Internet times are reappearing in the systems powering targeted ads and other online services. In Wisconsin, a legal fight is under way over whether the workings of a system that tries to predict whether a criminal will reoffend, and that is used to inform jail terms, should be kept secret.
Just today, the ACLU filed suit against the U.S. government on behalf of researchers who plan to look for racial discrimination in online job and housing ads. The researchers can't carry out that work because of restrictions imposed by federal hacking laws and by the terms of service tech firms write.
It’s clear that some of the problems Nadella says could be created by future artificial intelligence are in fact already here. Microsoft researcher Kate Crawford nicely summarized the root of algorithmic bias in a recent New York Times op-ed, writing that software “may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems.”
Nadella concludes his forward-looking post on artificial intelligence by saying: “The most critical next step in our pursuit of A.I. is to agree on an ethical and empathic framework for its design.” What better way to be ready for the AI-dominated future than to start work now on applying an ethical and empathic framework to the “dumb” software that already surrounds us?