Last year, just as I was beginning to cover artificial intelligence, the AI world was getting a major wake-up call. There were some incredible advancements in AI research in 2018—from reinforcement learning to generative adversarial networks (GANs) to better natural-language understanding. But the year also saw several high-profile illustrations of the harm these systems can cause when they are deployed too hastily.
A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber crashed, killing a pedestrian. Commercial face recognition systems performed terribly in audits when tested on dark-skinned people, but tech giants continued to peddle them anyway, to customers including law enforcement. At the beginning of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.
In some ways, my wish did come true. In 2019, there was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to establish responsible AI teams and parade them in front of the media. It’s hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related message: How do we protect people’s privacy when AI needs so much data? How do we empower marginalized communities instead of exploiting them? How do we continue to trust media in the face of algorithmically created and distributed disinformation?
But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We’re falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.
Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people’s belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers—content moderators, data labelers, transcribers—who toil away in often brutal conditions.
But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI from community groups, policymakers, and tech employees themselves. Several cities—including San Francisco and Oakland, California, and Somerville, Massachusetts—banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies’ use of AI for tracking migrants and for drone surveillance.
Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field’s runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation. At the largest annual gathering in the field this year, I was both touched and surprised by how many of the keynotes, workshops, and posters focused on real-world problems—both those created by AI and those it could help solve.
So here is my hope for 2020: that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development. While we still have time, we shouldn’t lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so they could one day help us solve some of our toughest challenges.
AI, in other words, is meant to help humanity prosper. Let’s not forget.