Big technology companies get much of the blame when technology behaves badly. But these same companies, says Partnership on AI executive director Terah Lyons, could also be part of the solution in making sure future AI technology works better for the world.
Speaking at MIT Technology Review’s annual EmTech Digital conference in San Francisco, Lyons presented the Partnership on AI’s four-point mission statement and the eight tenets the organization calls its guiding principles. Those tenets include working to protect the privacy and security of individuals, striving to respect the interests of all parties that may be affected by AI advances, helping keep AI researchers socially responsible, ensuring that AI research and technology are robust and safe, and creating a culture of cooperation, trust, and openness among AI scientists to help achieve these goals. The Partnership on AI hopes the wider technology community will adopt these principles.
Six companies—Amazon, Apple, IBM, Facebook, Google, and Microsoft—founded the Partnership on AI in 2016 in the belief that many of the issues raised by AI are too complex for any one company to handle alone. The organization has since grown to 54 member organizations, ranging from technology companies like eBay and Intel to nonprofit groups like the ACLU and Amnesty International.
Lyons announced the Partnership on AI’s first three working groups, which are dedicated to fair, transparent, and accountable AI; safety-critical AI; and AI, labor, and the economy. Each group will have a for-profit and nonprofit chair and aim to share its results as widely as possible. Lyons says these groups will be like a “union of concerned scientists.”
“A big part of this is on us to really achieve inclusivity,” she says.
Tess Posner, the executive director of AI4ALL, a nonprofit that runs summer programs teaching AI to students from underrepresented groups, made the case that training a diverse next generation of AI workers is essential. Currently, only 13 percent of AI companies have female CEOs, and less than 3 percent of tenure-track engineering faculty in the US are black. An inclusive workforce may generate more ideas and spot problems with systems before they arise, and diversity can improve the bottom line: Posner pointed to a recent Intel report estimating that diversity could add $500 billion to the US economy.
“It’s good for business,” she says.
These weren’t the first presentations at EmTech Digital by women with ideas on fixing AI. On Monday, Microsoft researcher Timnit Gebru presented examples of bias in current AI systems, and earlier on Tuesday Fast.ai cofounder Rachel Thomas talked about her company’s free deep-learning course and its effort to diversify the overall AI workforce. Even with the field’s current diversity problems, there are more women and people of color who could be brought into the workforce.
“I just don’t buy [that talent can’t be found],” Posner says. “If you aren’t finding it, you aren’t looking in the right way.”