It’s time to rethink the legal treatment of robots
A pandemic is raging with devastating consequences, and long-standing problems with racial bias and political polarization are coming to a head. Artificial intelligence (AI) has the potential to help us deal with these challenges. However, AI’s risks have become increasingly apparent. Scholarship has illustrated cases of AI opacity and lack of explainability, design choices that result in bias, negative impacts on personal well-being and social interactions, and changes in power dynamics between individuals, corporations, and the state, contributing to rising inequalities. Whether AI is developed and used in good or harmful ways will depend in large part on the legal frameworks governing and regulating it.
There should be a new guiding tenet to AI regulation, a principle of AI legal neutrality asserting that the law should not discriminate between AI and human behavior. Currently, the legal system is not neutral. An AI that is significantly safer than a person may be the best choice for driving a vehicle, but existing laws may prohibit driverless vehicles. A person may manufacture higher-quality goods than a robot at a similar cost, but a business may automate because it saves on taxes. AI may be better at generating certain types of innovation, but businesses may not want to use AI if this restricts ownership of intellectual-property rights. In all these instances, neutral legal treatment would ultimately benefit human well-being by helping the law better achieve its underlying policy goals.
Consider the American tax system. AI and people are engaging in the same sorts of commercially productive activities—but the businesses for which they work are taxed differently depending on who, or what, does the work. For instance, automation allows businesses to avoid employer wage taxes. So if a chatbot costs a company as much before taxes as an employee who does the same job (or even a bit more), it actually costs the company less to automate after taxes.
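The payroll-tax point can be made concrete with a small worked example. The 7.65% employer-side FICA rate (Social Security plus Medicare) is real; the salary and chatbot fee are hypothetical figures chosen only for illustration.

```python
# Worked example: after-tax cost of an employee vs. an automated system.
# The 7.65% employer-side payroll tax (Social Security + Medicare) is the
# current US FICA rate; all other figures are hypothetical.

EMPLOYER_PAYROLL_TAX = 0.0765  # employer share of FICA

def employee_cost(salary: float) -> float:
    """Total cost to the business of employing a person,
    including employer-side payroll taxes."""
    return salary * (1 + EMPLOYER_PAYROLL_TAX)

def automation_cost(annual_fee: float) -> float:
    """Cost of an automated system; no payroll tax applies to a machine."""
    return annual_fee

# A chatbot priced 4% ABOVE the employee's salary is still cheaper
# once the employer's payroll taxes are counted.
salary = 50_000
chatbot_fee = 52_000

print(f"employee:   {employee_cost(salary):,.2f}")        # 53,825.00
print(f"automation: {automation_cost(chatbot_fee):,.2f}")  # 52,000.00
```

Even before the depreciation benefits discussed next, the payroll-tax wedge alone can tip the decision toward automation.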
In addition to avoiding wage taxes, businesses can accelerate tax deductions for some AI when it has a physical component or falls under certain exceptions for software. In other words, employers can claim a large portion of the cost of some AI up front as a tax deduction. Finally, employers also receive a variety of indirect tax incentives to automate. In short, even though the tax laws were not designed to encourage automation, they favor AI over people because labor is taxed more than capital.
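The value of accelerating a deduction comes from the time value of money: a deduction claimed up front saves taxes today, while the same deduction spread over several years saves only discounted future taxes. A minimal sketch, assuming a 21% corporate tax rate and a 5% discount rate (both illustrative, not drawn from the article):

```python
# Illustrative present value of tax savings: claiming a $100,000 cost
# up front (immediate expensing) vs. deducting it evenly over five years.
# The 21% tax rate and 5% discount rate are assumptions for illustration.

TAX_RATE = 0.21   # assumed corporate tax rate
DISCOUNT = 0.05   # assumed annual discount rate

def pv_of_deductions(deductions):
    """Present value of the tax savings from a deduction schedule,
    where deductions[t] is the amount claimed t years from now."""
    return sum(d * TAX_RATE / (1 + DISCOUNT) ** t
               for t, d in enumerate(deductions))

cost = 100_000
immediate = pv_of_deductions([cost])              # all claimed in year 0
straight_line = pv_of_deductions([cost / 5] * 5)  # spread over 5 years

print(round(immediate))      # 21000
print(round(straight_line))  # about 19093
```

The earlier the deduction, the larger its present value, which is why accelerated write-offs for AI equipment and software act as a subsidy relative to the slower deductions associated with human labor costs.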
And AI does not pay taxes! Income and employment taxes are the largest sources of revenue for the government, together accounting for almost 90% of total federal tax revenue. Not only does AI not pay income taxes or generate employment taxes, it does not purchase goods and services, so it is not charged sales taxes, and it does not purchase or own property, so it does not pay property taxes. AI is simply not a taxpayer. If all work were to be automated tomorrow, most of the tax base would immediately disappear.
When businesses automate, the government loses revenue, potentially hundreds of billions of dollars in the aggregate. This may significantly constrain the government’s ability to pay for things like Social Security, national defense, and health care. If people eventually get comparable jobs, then the revenue loss is only temporary. But if job losses are permanent, the entire tax structure must change.
Debate about taxing robots took off in 2017, after the European Parliament rejected a proposal to consider a robot tax and Bill Gates subsequently endorsed the idea. The issue is even more critical today, as businesses turn to robots in response to pandemic-related risks to workers. Many businesses are asking: Why not replace people with machines?
Automation should not be discouraged on principle, but it is critical to craft tax-neutral policies to avoid subsidizing inefficient uses of technology and to ensure government revenue. Automating purely for tax savings may not make businesses any more productive or deliver any benefit to consumers, and it may even lead businesses to accept lower productivity in exchange for a lighter tax burden. That is not socially beneficial.
The advantage of tax neutrality between people and AI is that it permits the marketplace to adjust without tax distortions. Businesses should then automate only if it will be more efficient or productive. Since the current tax system favors automation, a move toward a neutral tax system would increase the appeal of workers. Should the pessimistic prediction of a future with substantially increased unemployment due to automation prove correct, the revenue from neutral taxation could then be used to provide improved education and training for workers, and even to support social benefit programs such as basic income.
Once policymakers agree that they do not want to advantage AI over human workers, they could reduce taxes on people or reduce tax benefits given to AI. For instance, payroll taxes (which are charged to businesses on their workers’ salaries) should perhaps be eliminated, which would promote neutrality, reduce tax complexity, and end taxation of something of social value—human labor.
More ambitiously, AI legal neutrality may prompt a more fundamental change in how capital is taxed. Though new tax regimes could directly target AI, this would likely increase compliance costs and make the tax system more complex. It would also “tax innovation” in the sense that it might penalize business models that are legitimately more productive with less human labor. A better solution would be to increase capital gains taxes and corporate tax rates to reduce reliance on revenue sources such as income and payroll taxes. Even before AI entered the scene, some tax experts had argued for years that taxes on labor income were too high compared with other taxes. AI may provide the necessary impetus to finally address this issue.
Opponents of increased capital taxation largely base their arguments on concerns about international competition. Harvard economist Lawrence Summers, for instance, argues that “taxes on technology are likely to drive production offshore rather than create jobs at home.” These concerns are overstated, particularly with respect to countries like the United States. Investors are likely to continue investing in the United States even with relatively high taxes for a variety of reasons: access to consumer and financial markets, a predictable and transparent legal system, and a well-developed workforce, infrastructure, and technological environment.
A tax system informed by AI legal neutrality would not only improve commerce by eliminating inefficient subsidies for automation; it would help to ensure that the benefits of AI do not come at the expense of the most vulnerable, by leveling the playing field for human workers and ensuring adequate tax revenue. AI is likely to result in massive but poorly distributed financial gains, and this will both require and enable policymakers to rethink how they allocate resources and distribute wealth. They may realize we are not doing such a good job of that now.
Ryan Abbott is Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA.