Ren Zhengfei, the reclusive founder and CEO of China’s embattled tech giant, Huawei, is defiant about American efforts to impede his company with lawsuits and restrictions.
“There is no way the US can crush us,” Ren said in a rare recent interview with international media. “The world cannot leave us because we are more advanced.”
It might sound like bluff and bluster, but these words carry a measure of truth. Huawei’s technology road map, especially in the field of artificial intelligence, points to a company that is progressing more rapidly—and on more technology fronts—than any other business in the world. Apart from its AI aspirations, Huawei is an ascendant player in the next-generation 5G wireless networking market, as well as the world’s second-largest smartphone maker behind Samsung (and ahead of Apple).
“The [Chinese] government and private sector approach is to build companies that compete across the full tech stack,” says Samm Sacks, who specializes in cybersecurity and China at New America, a Washington think tank. “That’s what Huawei is doing.”
But it’s Huawei’s AI strategy that will give it truly unparalleled reach across the whole of the tech landscape. It will also raise a host of new security issues. The company’s technological ubiquity, and the fact that Chinese companies are ultimately answerable to their government, are big reasons why the US views Huawei as an unprecedented national security threat.
In an exclusive interview with MIT Technology Review, Xu Wenwei, director of the Huawei board and the company’s chief strategy and marketing officer, touted the scope of its AI plans. He also defended the company’s record on security. And he promised that Huawei would seek to engage with the rest of the world to address emerging risks and threats posed by AI.
Xu (who uses the Western name William Xu) said that Huawei plans to increase its investments in AI and integrate it throughout the company to “build a full-stack AI portfolio.” Since Huawei is a private firm, it’s tricky to quantify its technology investments. But officials from the company said last year that it planned to more than double annual R&D spending to between $15 billion and $20 billion. That could lift the company from fifth place to as high as second in worldwide R&D spending. According to its website, some 80,000 employees, or 45% of Huawei’s workforce, are involved in R&D.
Huawei’s vision stretches from AI chips for data centers and mobile devices to deep-learning software and cloud services that offer an alternative to those from Amazon, Microsoft, or Google. The company is researching key technical challenges, including making machine-learning models more data- and energy-efficient and easier to update, Xu said.
But Huawei is struggling to convince the Western world that it can be trusted. The company faces accusations of intellectual-property theft, espionage, and fraud, and its deputy chairwoman and CFO (and Ren’s daughter), Meng Wanzhou, is currently under house arrest in Canada, awaiting possible extradition to the US. America and several other countries have banned the sale of Huawei’s devices or are considering restrictions, citing concerns that Huawei’s 5G equipment could be exploited by the Chinese government to attack systems or slurp up sensitive data.
Xu defended the company’s reputation: “Huawei’s record on security is clean.”
But AI adds another dimension to such worries. Machine-learning services are a new source of risk, since they can be exploited by hackers, and the data used to train such services may contain private information. The use of AI algorithms also makes systems more complex and opaque, which means security auditing is more challenging.
As part of an effort to reassure doubters, Xu promised that Huawei would release a code of AI principles in April. This will amount to a promise that the company will seek to protect user data and ensure security. Xu also said Huawei wants to collaborate with its international competitors, which would include the likes of Google and Amazon, to ensure that the technology is developed responsibly. It is, however, unclear whether Huawei might allow its AI services to be audited by a third party, as it has done with its hardware.
“Many companies across the industry, including Huawei, are developing AI principles,” Xu told MIT Technology Review. “For now, we know at least three things for certain: technology should be secure and transparent; user privacy and rights should be protected; and AI should facilitate the development of social equality and welfare.”
As Huawei advances in AI and progresses toward its aim of becoming a “full stack” company, however, it may increasingly seem too powerful for many in the West.
Already, it boasts a dizzying array of offerings. Last year, Huawei launched an AI chip for its smartphones, called Ascend, that is comparable to a chip found in the latest iPhones, and tailor-made for running machine-learning code that powers tasks like face and voice recognition. The technology for the chip came from a startup called Cambricon, which was spun out of the Chinese Academy of Sciences, but Huawei recently said it would design future generations in-house.
Huawei also sells a range of AI-optimized chips for desktops, servers, and data centers. The chips lag behind those offered by Nvidia and Qualcomm (both US companies) in terms of sophistication, but no other business can boast such a range of AI hardware.
Then there’s the software. Huawei offers a cloud computing platform with 45 different AI services—similar in scope to offerings by Western giants like Google, Amazon, and Microsoft. In the second quarter of 2019, Huawei will also release its first deep-learning framework, called MindSpore, which will compete with the likes of Google’s TensorFlow and Facebook’s PyTorch.
AI is also woven into Huawei’s ambitions to provide the 5G equipment that will connect everything from industrial machinery to self-driving cars. “We need to use AI to reduce maintenance costs,” Xu said. “Telecom networks are becoming more and more complex—70% of network failures are caused by human errors, and if we use AI in network maintenance, over 50% of potential failures can be predicted.”
Xu’s statements on AI ethics are also, in a sense, part of an effort to lead the world’s AI development. Ensuring ethical AI will mean crafting technical standards, which will be important to shaping the future of the technology itself. The United States has exerted an outsize influence over the development of the internet through technical standards.
To that end, the Chinese Association for Artificial Intelligence, a state-run organization, set up a committee earlier this year to draft a national code of AI ethics. Several of China’s big tech companies, including Baidu, Alibaba, and Tencent, also have initiatives dedicated to understanding the impact of AI.
Agreeing on AI ethics and standards could prove a challenge as tensions between East and West escalate, however. A number of national governments, as well as organizations like the EU, are also seeking to set the rules of the road. “AI brings value as well as problems and confusions,” Xu told MIT Technology Review. “Global collaboration is needed to address these problems.”
And international collaboration is not exactly a forte of the US right now. Indeed, outside of its own borders, the American government can do only so much to hamper Huawei. Some allies are apparently tiring of US strong-arm tactics; the UK and Germany both seem increasingly unlikely to ban Huawei from supplying 5G equipment and other products and services.
The company’s interest in ingratiating itself with wary countries also has its limits. In recent comments its CEO, Ren, contended that the international picture is changing, at least in technological terms. “If the lights go out in the West, the East will still shine,” he said. “And if the North goes dark, there is still the South. America doesn’t represent the world. America only represents a portion of the world.”
Either way, there will be Huawei.