The US and China are in a race to become the AI superpower of the century. The perceived stakes are high: not only would the victor reap massive economic benefits, but it could also establish a new military edge. As Russian president Vladimir Putin phrased it last year, “Whoever becomes the leader in this sphere will become the ruler of the world.”
Not all experts agree—and most AI researchers don’t see themselves in an arms race at all. But that hasn’t stopped leaders in both countries from rapidly escalating their offensives.
On November 19, the US made its latest move by proposing to broaden its restrictions on technology exports.
The Commerce Department maintains a list of militarily sensitive technologies that require a special license to leave the country. It’s now proposing to add to the list, among other things, a host of foundational tools and techniques in artificial intelligence, like neural networks and deep learning, natural-language processing, computer vision, and expert systems. These underlie consumer products such as the Siri-enabled iPhone and computer-vision-enabled Roomba, as well as self-driving cars and IBM’s Watson.
To be clear, the government isn’t proposing a blanket blockade. Commerce’s “advance notice of proposed rulemaking” is just a request for public comment. The department says it will use the feedback “to determine whether there are specific emerging technologies” within each category that merit restriction.
Nonetheless, the extent of the list is striking. “Most commentators expected AI technologies to be added,” says R. David Edelman, the director of the Project on Technology, the Economy, and National Security at MIT. “What was remarkable was the breadth that they included. It’s almost an all-of-the-above approach to modern AI products.”
“The really surprising assertion here might be the claim that AI is inherently military,” he adds.
Edelman worries that if the restrictions are mishandled, they could cause serious “collateral damage” for US businesses. Companies like Apple and Google, for example, which rely on China for a large share of their profits, might scale back their AI development to avoid the famously lengthy export control process. Smaller companies that can’t handle the high compliance costs might write off international expansion.
“This is intended to help US companies be more competitive,” Edelman says. “The irony is it would almost certainly give Chinese companies that don’t face those same restrictions a sizable advantage on the playing field.”
AI techniques can be put to countless different uses, so it may be hard for regulators to draw a clear line between tools that have military applications and those that don’t. Moreover, because the field moves so quickly, the list would also need to be constantly updated; what’s cutting-edge technology today could be powering Alexa a few years hence. “Most of the technologies in question on that list are what I would regard as general-purpose computing for the next decade,” Edelman says.
The restrictions could also affect American universities, which benefit from an influx of foreign research talent, Edelman adds. Depending on the definition the Commerce Department settles on, the term “export” could include collaboration with foreign researchers. Universities ill equipped to weather the export-control process might no longer be able to sustain AI research programs.
Finally, Edelman says, the restrictions may just be impractical because so much of the current research is completely open-source and highly collaborative across borders.
“In some cases there may be a good case for trying to control aspects of how AI is used,” he says, “but the approach that tries to simply control and limit the proliferation of next-decade general-purpose computing is doomed to failure.”
On the bright side, he believes that “the government is actually asking for help on this one.” Commerce is seeking public comment through December 19 to better understand how the proposal will affect consumers, companies, and academia. There will likely be several revisions before the regulations go into effect.
This story originally appeared in our AI newsletter, The Algorithm. Get it in your inbox twice a week by subscribing here for free.