Not a day goes by when we do not hear about the threat of AI taking over the jobs of everyone from truck drivers to accountants to radiologists. An analysis from McKinsey suggested that “currently demonstrated technologies could automate 45 percent of the activities people are paid to perform.” There are even online tools, based on research from the University of Oxford, that estimate the probability that various jobs will be automated.
This concern that progress in AI will make most human labor obsolete has led some to call for a (universal) basic income, in which all citizens periodically and unconditionally receive money from the state (see “Basic Income: A Sellout of the American Dream”). Y Combinator, a prominent startup incubator in Silicon Valley, will run a pilot study of basic income in Oakland, California, and its president has stated that “at some point in the future, as technology continues to eliminate traditional jobs and massive new wealth gets created, we’re going to see some version of this at a national scale.” A European Parliament draft report recently stated that in light of the possible effects on the labor market of robotics and AI, “a general basic income should be seriously considered,” and the organization “invites all Member States to do so.” And in June of this year, Switzerland even held a referendum on basic income (though 77 percent of voters voted against it).
Is a collapse of the demand for human labor really imminent? As an AI researcher, I think the answer is no, and I will explain why.
To be clear, I do think we can expect significant advances in AI in the near future, and that these will have a significant impact on the market for labor. Given the progress in autonomous vehicles, one can imagine that many driving jobs will be largely eliminated. Significant progress is being made on automated analysis of medical images and other data. Algorithms are taking over an ever greater share of work in the financial sector. Cooking robots are under development. The list goes on.
On the other hand, some degree of skepticism is appropriate. Ask yourself: How impressed are you with the progress in robot vacuums over the last decade? How about the progress in dishwashing machines? In truth, designing fully autonomous AI systems for messy, real-world environments is hard. More generally, current AI systems do not have a broad understanding of the world, including our social conventions, and they lack common sense. Language understanding is a good example of the problem; it is remarkably hard to get computers to answer many types of simple questions (see “AI’s Language Problem” and “A Tougher Turing Test Exposes Chatbots’ Stupidity”).
AI systems are not yet capable of true abstraction, of taking a step back, inspecting their own reasoning process, and generalizing what is going on. One consequence is that they are still limited in creativity. They can come up with new solutions to problems: for example, Google DeepMind’s AlphaGo played a highly unusual move in one of its matches against human Go champion Lee Sedol. They can create some kinds of art, such as the apparently psychedelic art produced via neural nets by Google’s DeepDream. But it is not the kind of creativity that truly gives one a new perspective on the situation at hand. And we need not consider such lofty feats as, say, Einstein formulating the general theory of relativity to find examples of such creativity in human work. Consider, for example, an assistant who suggests combining two meetings into one to save time. Such problem solving is routine enough to us, but it would be very difficult to replicate in an AI scheduling system.
Overall, when we try to have AI do existing jobs, we often find it failing in ways a person never would. The history of AI research is littered with examples where researchers create systems that perform surprisingly well at a well-defined task, only to find that it is still hard to replace the people who perform similar tasks in the messier real world.
Perhaps the more typical case will be that jobs are partially eliminated because part of the job can be performed by AI. Technological advances may also further facilitate outsourcing jobs to people around the world. At the same time, many jobs will remain immune, at least for the foreseeable future, because they fundamentally require skills that are hard to replicate in AI. Consider, for example, therapists, coaches, or kindergarten teachers: these jobs require a general understanding of the world, including human psychology and social reasoning, the ability to deal with unusual circumstances, and so on. AI may even bring some people back into the workforce. For example, progress in robotics could make it easier for people with disabilities to hold some jobs, and progress in language processing may do the same for people who have difficulty using current computer interfaces.
Now, it is certainly possible I am quite wrong, and that progress in AI will come much faster than I expect; technological progress is notoriously difficult to forecast. But if one really believes there is a good chance that AI will broadly exceed human capabilities in the relatively short term, then as a species we have bigger concerns than whether to implement a basic income (indeed, there are people who seriously worry about this, but that is a separate article).
The idea that recent progress in AI will prevent most people from meaningfully contributing to society is nonsense. We may have to make some changes in the way society works, including making it easier for displaced workers to retrain, and perhaps at times increasing public spending on (say) carefully selected infrastructure projects to counterbalance job losses in the private sector. We should also be mindful that advances in AI may come unexpectedly, and do our best to prepare and make society resilient to such shocks. But the idea that we are about to enter a techno-utopia with almost no need for human labor is not supported by the current state of AI research. Countries that completely overhaul their welfare systems now on the basis of this idea may well regret it once it becomes clear that recent advances in AI, while impressive, still have their limitations.
Vincent Conitzer is a professor of computer science, economics, and philosophy at Duke University.