Tech companies must anticipate the looming risks as AI gets creative

The technology industry must take preemptive steps to head off emerging dangers as artificial intelligence becomes increasingly capable of creativity and other humanlike behavior, Microsoft’s Harry Shum warned at MIT Technology Review’s EmTech Digital conference on Monday.
“This is the point in the cycle … where we need to engineer responsibility into the very fabric of the technology,” said Shum, executive vice president of the software giant’s artificial intelligence and research group, onstage at the San Francisco event.
Shum noted that the industry has already failed to anticipate flaws in the technology, as well as some of the troublesome ways that it’s been used in the real world.
Face recognition software, for example, has proved far less accurate at identifying people with darker skin tones. China has been pairing these tools with surveillance cameras to monitor members of its Uighur Muslim minority and to shame alleged debtors and jaywalkers by posting their faces on billboards. An Uber self-driving car struck and killed a pedestrian last year. And IBM’s Watson has reportedly recommended “unsafe and incorrect” cancer treatments.
These challenges will only become more complicated as AI gets better at discerning human emotions, conducting sophisticated conversations, and producing stories, poetry, songs, and paintings that seem increasingly indistinguishable from those created by humans, Shum said. These emerging capabilities could make it easier to produce and spread fake audio, images, and video, adding to the challenges of dealing with propaganda and misinformation online.
Microsoft is addressing these rising risks in a handful of ways. Shum said the company has improved its face recognition tools by adding to its databases altered versions of photos that feature a wider variety of skin tones, eyebrows, and lighting conditions.
The company has also established an AI ethics committee and joined collaborative industry groups like the Partnership on AI. Microsoft will “one day very soon” add an AI ethics review step to its standard checklist of privacy, security, and accessibility audits that must occur before new products are released, Shum said.
But he acknowledged that self-regulation won’t be enough.
Indeed, a growing chorus of voices in and out of the technology industry is calling for tighter regulation of artificial intelligence. In December, the AI Now Institute at New York University, a group that includes Microsoft and Google employees, argued that government agencies need greater power to “oversee, audit, and monitor” these technologies, and called for “stringent regulation” of face recognition tools in particular.
“We are working hard to get ahead of the challenges posed by AI creation,” Shum said. “But these are hard problems that can’t be solved with technology alone, so we really need the cooperation across academia and industry. We also need to educate consumers about where the content comes from that they are seeing and using.”