Did China and the US just agree on something?
This week, Chinese scientists and engineers released a code of ethics for artificial intelligence that might signal a willingness from Beijing to rethink how it uses the technology.
And while China’s government is widely criticized for using AI as a way to monitor citizens, the newly published guidelines seem remarkably similar to ethical frameworks laid out by Western companies and governments.
The Beijing AI Principles were announced last Saturday by the Beijing Academy of Artificial Intelligence (BAAI), an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government. They spell out guiding principles for research and development in AI, including that “human privacy, dignity, freedom, autonomy, and rights should be sufficiently respected.”
While it would be easy to dismiss the talk of privacy and individual freedoms as disingenuous, the document signals a surprising willingness within Chinese policy circles to discuss such issues.
“I actually think this is a pretty good development,” says Yasheng Huang, an expert on business and policy in China at MIT’s Sloan business school. “On something like human rights, I always like to see the Chinese government offer opportunities with the US to talk.”
The code was developed in collaboration with the most prominent and important technical organizations and tech companies working on AI in China, including Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology within the Chinese Academy of Sciences, and the country’s big three tech firms: Baidu, Alibaba, and Tencent.
“The development of AI is a common challenge for all humanity. Only through coordination on a global scale can we build AI that is beneficial to both humanity and nature,” Yi Zeng, director of BAAI, told the People’s Daily, the official publication of the Chinese Communist Party. “The Beijing Principles reflect our position, vision, and our willingness to create a dialogue with the international society.”
AI ethics vs. the trade war
It is, of course, a critical moment for Chinese-American relations, especially with regard to emerging technologies. Alarmed by China’s progress in areas like AI and 5G, the Trump administration has used the levers of global trade to attack, and in some cases cripple, key Chinese tech firms. The telecommunications giant Huawei, for example, has been targeted with export and import controls that threaten to undermine its business. The approach is breeding mistrust and opening new fault lines in a tech world that came of age in the era of globalization and has come to rely on the economic openness that accompanied it.
The US government is also said to be considering export controls targeting Chinese companies such as Hikvision and Dahua Technology, which sell the surveillance equipment and software that underpin the Chinese government’s monitoring programs.
Huang says that because AI raises ethical questions, it offers an opening for the US and China to talk about issues such as personal freedoms, and he notes that it is unusual for the Chinese government to signal flexibility in such areas. “The important thing here is that by describing the issues subject to conversation and dialogue, they are conceding this is not something they have the right to control one hundred percent,” he adds. “That’s a big deal in that political culture.”
Despite the ongoing trade war between the two countries, some Western experts have been trying to build bridges. This week, the World Economic Forum announced its own AI principles, developed in collaboration with academics, business leaders, and policymakers from the US, China, and other countries. One of the co-chairs of the WEF’s new AI council is Kai-Fu Lee, a prominent AI investor based in Beijing, who previously helped establish both Microsoft’s and Google’s outposts in China. Lee says the WEF group discussed the fact that the Chinese principles seem very similar to those developed by Western countries and companies. “This makes us quite optimistic,” he says.
Finding common ground in the current climate may still prove difficult. The Chinese Communist Party maintains tight control over domestic companies and shows no signs of scaling back on its schemes for tracking and monitoring citizens. But MIT’s Huang says this makes the Beijing AI principles all the more important.
“Not to engage with China on this matter is self-defeating,” he says.