Artificial intelligence

For better AI, diversify the people building it

Speakers at EmTech Digital offered up tangible solutions to the problem of bias in AI.
March 28, 2018

Big technology companies get much of the blame when technology behaves badly. But these same companies, says Partnership on AI executive director Terah Lyons, could also be part of the solution in making sure future AI technology works better for the world.

Speaking at MIT Technology Review’s annual EmTech Digital conference in San Francisco, Lyons presented the Partnership on AI’s four-point mission statement and eight tenets that the organization calls its guiding principles. Those tenets include working to protect the privacy and security of individuals, striving to respect the interests of all parties that may be affected by AI advances, helping keep AI researchers socially responsible, ensuring that AI research and technology is robust and safe, and creating a culture of cooperation, trust, and openness among AI scientists to help achieve these goals. The Partnership on AI hopes that these principles will be adopted by the wider technology community.

Six companies—Amazon, Apple, IBM, Facebook, Google, and Microsoft—founded the Partnership on AI in 2016 in the belief that many of the issues in AI are too complex for any one company to handle alone. The organization has since grown to 54 member organizations, ranging from technology companies like eBay and Intel to nonprofit groups like the ACLU and Amnesty International.

Lyons announced the Partnership on AI’s first three working groups, which are dedicated to fair, transparent, and accountable AI; safety-critical AI; and AI, labor, and the economy. Each group will have one for-profit and one nonprofit chair and will aim to share its results as widely as possible. Lyons says these groups will function like a “union of concerned scientists.”

“A big part of this is on us to really achieve inclusivity,” she says.

Tess Posner, the executive director of AI4ALL, a nonprofit that runs summer programs teaching AI to students from underrepresented groups, made the case that training a diverse next generation of AI workers is essential. Currently, only 13 percent of AI companies have female CEOs, and less than 3 percent of tenure-track engineering faculty in the US are black. An inclusive workforce brings a wider range of ideas and is better positioned to spot problems with systems before they happen, and diversity can improve the bottom line: Posner cited a recent Intel report estimating that diversity could add $500 billion to the US economy.

“It’s good for business,” she says.

These weren’t the first presentations at EmTech Digital by women with ideas on fixing AI. On Monday, Microsoft researcher Timnit Gebru presented examples of bias in current AI systems, and earlier on Tuesday Fast.ai cofounder Rachel Thomas talked about her company’s free deep-learning course and its effort to diversify the overall AI workforce. Despite the industry’s current struggles with diversity, there are many more women and people of color who could be brought into the workforce.

“I just don’t buy [that talent can’t be found],” Posner says. “If you aren’t finding it, you aren’t looking in the right way.”
