This week, AI experts, politicians, and CEOs will gather to ask an important question: Can the United States, China, or anyone else agree on how artificial intelligence should be used and controlled?
The World Economic Forum, the international organization that brings together the world’s rich and powerful to discuss global issues at Davos each year, will host the event in San Francisco.
The WEF will also announce the creation of an “AI Council” designed to find common ground on policy between nations that increasingly seem at odds over the power and the potential of AI and other emerging technologies (see “Trump’s feud with Huawei and China could lead to the Balkanization of tech”).
The issue is of paramount importance given the current geopolitical winds. AI is widely viewed as critical to national competitiveness and geopolitical advantage. The effort to find common ground is also important considering the way technology is driving a wedge between countries, especially the United States and its big economic rival, China.
“Many see AI through the lens of economic and geopolitical competition,” says Michael Sellitto, deputy director of the Stanford Institute for Human-Centered AI. “[They] tend to create barriers that preserve their perceived strategic advantages, in access to data or research, for example.”
A number of nations have announced AI plans that promise to prioritize funding, development, and application of the technology. But efforts to build consensus on how AI should be governed have been limited. This April, the EU released guidelines for the ethical use of AI. The Organisation for Economic Co-operation and Development (OECD), a coalition of countries dedicated to promoting democracy and economic development, this month announced a set of AI principles built upon its own objectives.
It would be a lot more significant (and surprising) to find common ground between the United States, China, and the rest of the world when it comes to AI. But the WEF effort is clearly designed to do that.
This week’s event will host dozens of experts, executives, and policymakers. In attendance will be representatives from the United Nations and Unicef, along with companies including Microsoft, IBM, the Chinese insurance and technology giant Ping An, and the Canadian AI consultancy Element AI. The meeting will also feature prominent academics and politicians from a handful of smaller countries.
Tellingly, the two chairs of the WEF’s AI council will be Brad Smith, president of Microsoft and head of the company’s legal and corporate affairs teams, and Kai-Fu Lee, a prominent Chinese AI expert and investor whose book AI Superpowers chronicles China’s rising technological prowess.
“The role that the forum plays is that of an impartial international organization,” says Kay Firth-Butterfield, head of AI and machine learning at the WEF. She says the new council will seek to identify the three most important issues in AI, which she expects to be how the technology may affect the future of work, how AI research could benefit emerging countries, and what specific use cases of the technology will emerge. “We are looking for areas where we need to bridge so-called ‘governance gaps,’” she says.
One specific use of AI that seems destined to cause friction is surveillance. Civil rights groups in the US have pushed for greater regulation of face recognition in particular, and some cities have obliged, but there is little resistance to this application in China.
“Different cultures have different values, and AI is a technology that can encode values,” says Jack Clark, who will attend the event on behalf of OpenAI, an AI company in San Francisco backed by big-name Silicon Valley investors. “I think it’s going to be challenging at first to agree on things like ‘What values should we encode into a system?’ from a global perspective.”
Even so, many may see the new AI council as a valuable and necessary step at a time when a technological cold war is brewing. “One thing that seems like an unalloyed good is having a bunch of people from different cultures and contexts come together and talk,” says Clark.