A pandemic is raging with devastating consequences, and long-standing problems with racial bias and political polarization are coming to a head. Artificial intelligence (AI) has the potential to help us deal with these challenges. However, AI’s risks have become increasingly apparent. Scholarship has illustrated cases of AI opacity and lack of explainability, design choices that result in bias, negative impacts on personal well-being and social interactions, and changes in power dynamics between individuals, corporations, and the state, contributing to rising inequalities. Whether AI is developed and used in good or harmful ways will depend in large part on the legal frameworks governing and regulating it.
There should be a new guiding tenet to AI regulation, a principle of AI legal neutrality asserting that the law should tend not to discriminate between AI and human behavior. Currently, the legal system is not neutral. An AI that is significantly safer than a person may be the best choice for driving a vehicle, but existing laws may prohibit driverless vehicles. A person may manufacture higher-quality goods than a robot at a similar cost, but a business may automate because it saves on taxes. AI may be better at generating certain types of innovation, but businesses may not want to use AI if this restricts ownership of intellectual-property rights. In all these instances, neutral legal treatment would ultimately benefit human well-being by helping the law better achieve its underlying policy goals.
Consider the American tax system. AI and people are engaging in the same sorts of commercially productive activities—but the businesses for which they work are taxed differently depending on who, or what, does the work. For instance, automation allows businesses to avoid employer wage taxes. So if a chatbot costs a company as much before taxes as an employee who does the same job (or even a bit more), it actually costs the company less to automate after taxes.
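The arithmetic can be made concrete with a rough sketch. The figures below are illustrative assumptions, not from the article: a $100,000 salary, a $105,000 AI licensing fee, and 7.65% approximating the employer's share of US payroll (FICA) taxes.

```python
# Hypothetical after-tax cost comparison: a human worker versus an AI
# system that is pricier before taxes. Rates and amounts are illustrative.
EMPLOYER_PAYROLL_TAX = 0.0765  # approximate employer share of FICA

def annual_cost_of_employee(salary: float) -> float:
    """Salary plus the employer-side payroll tax charged on it."""
    return salary * (1 + EMPLOYER_PAYROLL_TAX)

def annual_cost_of_ai(license_fee: float) -> float:
    """An AI system generates no employer wage taxes."""
    return license_fee

human = annual_cost_of_employee(100_000)  # 107,650.00
bot = annual_cost_of_ai(105_000)          # 105,000.00
# The bot costs more pre-tax ($105k vs. $100k) yet less after taxes.
```

Even though the bot's sticker price exceeds the worker's salary, the employer's all-in cost favors automation once wage taxes are counted.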
In addition to avoiding wage taxes, businesses can accelerate tax deductions for some AI when it has a physical component or falls under certain exceptions for software. In other words, employers can claim a large portion of the cost of some AI up front as a tax deduction. Finally, employers also receive a variety of indirect tax incentives to automate. In short, even though the tax laws were not designed to encourage automation, they favor AI over people because labor is taxed more than capital.
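Why does claiming a deduction up front matter? Because a tax saving is worth more the sooner it arrives. A minimal sketch, using hypothetical numbers (a $100,000 asset, a 21% corporate rate, a 5% discount rate, and a simple five-year straight-line schedule for comparison):

```python
# Present value of the tax saved by a stream of annual deductions.
# All figures are illustrative assumptions, not actual tax schedules.
COST, TAX_RATE, DISCOUNT = 100_000, 0.21, 0.05

def pv_of_tax_savings(deductions):
    """Discount each year's tax saving (deduction * rate) back to year 0."""
    return sum(d * TAX_RATE / (1 + DISCOUNT) ** year
               for year, d in enumerate(deductions))

immediate = pv_of_tax_savings([COST])              # deduct everything in year 0
straight_line = pv_of_tax_savings([COST / 5] * 5)  # spread evenly over 5 years
# immediate > straight_line: identical total deductions, but the
# accelerated schedule is worth more in present-value terms.
```

The same total deduction yields a larger benefit when taken immediately, which is why expensing rules tilt the scales toward capital, including AI, and away from labor.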
And AI does not pay taxes! Income and employment taxes are the largest sources of revenue for the government, together accounting for almost 90% of total federal tax revenue. Not only does AI not pay income taxes or generate employment taxes, it does not purchase goods and services, so it is not charged sales taxes, and it does not purchase or own property, so it does not pay property taxes. AI is simply not a taxpayer. If all work were to be automated tomorrow, most of the tax base would immediately disappear.
When businesses automate, the government loses revenue, potentially hundreds of billions of dollars in the aggregate. This may significantly constrain the government’s ability to pay for things like Social Security, national defense, and health care. If people eventually get comparable jobs, then the revenue loss is only temporary. But if job losses are permanent, the entire tax structure must change.
Debate about taxing robots took off in 2017, after the European Parliament rejected a proposal to consider a robot tax and Bill Gates subsequently endorsed the idea. The issue is even more critical today, as businesses turn to robots in response to pandemic-related risks to workers. Many businesses are asking: Why not replace people with machines?
Automation should not be discouraged on principle, but it is critical to craft tax-neutral policies that avoid subsidizing inefficient uses of technology and that protect government revenue. Automating purely for tax savings may not make businesses any more productive or deliver any benefit to consumers, and it may even reduce productivity when firms restructure simply to lower their tax burdens. This is not socially beneficial.
The advantage of tax neutrality between people and AI is that it permits the marketplace to adjust without tax distortions. Businesses should then automate only if it will be more efficient or productive. Since the current tax system favors automation, a move toward a neutral tax system would increase the appeal of workers. Should the pessimistic prediction of a future with substantially increased unemployment due to automation prove correct, the revenue from neutral taxation could then be used to provide improved education and training for workers, and even to support social benefit programs such as basic income.
Once policymakers agree that they do not want to advantage AI over human workers, they could reduce taxes on people or reduce tax benefits given to AI. For instance, payroll taxes (which are charged to businesses on their workers’ salaries) should perhaps be eliminated, which would promote neutrality, reduce tax complexity, and end taxation of something of social value—human labor.
More ambitiously, AI legal neutrality may prompt a more fundamental change in how capital is taxed. Though new tax regimes could directly target AI, this would likely increase compliance costs and make the tax system more complex. It would also “tax innovation” in the sense that it might penalize business models that are legitimately more productive with less human labor. A better solution would be to increase capital gains taxes and corporate tax rates to reduce reliance on revenue sources such as income and payroll taxes. Even before AI entered the scene, some tax experts had argued for years that taxes on labor income were too high compared with other taxes. AI may provide the necessary impetus to finally address this issue.
Opponents of increased capital taxation largely base their arguments on concerns about international competition. Harvard economist Lawrence Summers, for instance, argues that “taxes on technology are likely to drive production offshore rather than create jobs at home.” These concerns are overstated, particularly with respect to countries like the United States. Investors are likely to continue investing in the United States even with relatively high taxes for a variety of reasons: access to consumer and financial markets, a predictable and transparent legal system, and a well-developed workforce, infrastructure, and technological environment.
A tax system informed by AI legal neutrality would not only improve commerce by eliminating inefficient subsidies for automation; it would help to ensure that the benefits of AI do not come at the expense of the most vulnerable, by leveling the playing field for human workers and ensuring adequate tax revenue. AI is likely to result in massive but poorly distributed financial gains, and this will both require and enable policymakers to rethink how they allocate resources and distribute wealth. They may realize we are not doing such a good job of that now.