For better AI, diversify the people building it
Big technology companies get much of the blame when technology behaves badly. But these same companies, says Partnership on AI executive director Terah Lyons, could also be part of the solution in making sure future AI technology works better for the world.
Speaking at MIT Technology Review’s annual EmTech Digital conference in San Francisco, Lyons presented the Partnership on AI’s four-point mission statement and the eight tenets the organization calls its guiding principles. Those tenets include working to protect the privacy and security of individuals, striving to respect the interests of all parties that may be affected by AI advances, helping keep AI researchers socially responsible, ensuring that AI research and technology are robust and safe, and creating a culture of cooperation, trust, and openness among AI scientists. The Partnership on AI hopes the wider technology community will adopt these principles.
Six companies—Amazon, Apple, IBM, Facebook, Google, and Microsoft—started the Partnership on AI in 2016 in the belief that many of the issues surrounding AI are too complex for any one of them to handle alone. The organization is now up to 54 member institutions, ranging from technology companies like eBay and Intel to nonprofit groups like the ACLU and Amnesty International.
Lyons announced the Partnership on AI’s first three working groups, dedicated to fair, transparent, and accountable AI; safety-critical AI; and AI, labor, and the economy. Each group will have a for-profit and a nonprofit chair and will aim to share its results as widely as possible. Lyons says these groups will function like a “union of concerned scientists.”
“A big part of this is on us to really achieve inclusivity,” she says.
Tess Posner, the executive director of AI4ALL, a nonprofit that runs summer programs teaching AI to students from underrepresented groups, argued that a diverse next generation of AI workers is essential. Currently, only 13 percent of AI companies have female CEOs, and less than 3 percent of tenure-track engineering faculty in the US are black. Yet an inclusive workforce generates more ideas, can spot problems with systems before they happen, and can improve the bottom line: Posner cited a recent Intel report estimating that diversity could add $500 billion to the US economy.
“It’s good for business,” she says.
These weren’t the first presentations at EmTech Digital by women with ideas on fixing AI. On Monday, Microsoft researcher Timnit Gebru presented examples of bias in current AI systems, and earlier on Tuesday, Fast.ai cofounder Rachel Thomas talked about her organization’s free deep-learning course and its effort to diversify the AI workforce. Even with the field’s current diversity problems, there are many more women and people of color who could be brought into the workforce.
“I just don’t buy [that talent can’t be found],” Posner says. “If you aren’t finding it, you aren’t looking in the right way.”