AI Wants to Be Your Bro, Not Your Foe
Artificial intelligence will transform just about everything, but technologists should stop fretting that it’s going to destroy the world like Skynet.
The odds that artificial intelligence will enslave or eliminate humankind within the next decade or so are thankfully slim. So concludes a major report from Stanford University on the social and economic implications of artificial intelligence.
At the same time, however, the report concludes that AI looks certain to upend huge aspects of everyday life, from employment and education to transportation and entertainment. More than 20 leaders in the fields of AI, computer science, and robotics coauthored the report. The analysis is significant because the public alarm over the impact of AI threatens to shape public policy and corporate decisions.
It predicts that automated trucks, flying vehicles, and personal robots will be commonplace by 2030, but cautions that remaining technical obstacles will limit such technologies to certain niches. It also warns that the social and ethical implications of advances in AI, such as the potential for unemployment in certain areas and likely erosions of privacy driven by new forms of surveillance and data mining, will need to be open to discussion and debate.
The study, part of a project intended to last 100 years, was commissioned in response to often dizzying advancements in computer science that have made machines more capable of learning and behaving in intelligent ways. But it is also something of a rebuttal to some of the alarmist pronouncements that have been made about this progress. “No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future,” the report says.
“I really see this as a coming-of-age moment for the field,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, an independent research institute in Seattle, and a coauthor of the report. “The extreme positive hype is wrong, and the fearmongering is not based on any data.”
The report identifies the most promising areas for future AI research, and Etzioni says key among these is research on ways for humans and AI systems to collaborate and cooperate effectively.
The Stanford Hundred Year Study will report findings every five years. The first report focuses on several key areas in which AI will have a significant impact, including transportation, health care, education, and employment.
While dismissing doomsday scenarios at the outset, the report warns that AI will have an impact on employment, eliminating jobs in certain sectors, such as transportation, and augmenting jobs—by taking over key tasks—in many other areas. The report goes so far as to suggest that some consideration should be given to new social safety nets aimed at assisting those who are displaced. “It’s not too soon for social debate on how the economic fruits of AI technologies should be shared,” it states.
Some European countries and a handful of U.S. entrepreneurs have raised the idea that disruption to the world of work could necessitate some sort of basic income, although the idea remains controversial with many economists (see “Basic Income: A Sellout of the American Dream”).
The study was welcomed by other AI experts, some of whom have grown tired of the sci-fi narratives that often seem to surround public discussion of AI. “It is encouraging to hear less fearmongering and a more sober assessment of what is likely to come,” says Ronald Arkin, a professor at Georgia Tech who studies the implications of autonomous weapons. “The report is a refreshing change from the constant barrage from the so-called experts of doom portending the evils of AI and the demise of humanity.”
Much of the more controversial discussion about AI can be traced to the ideas of Nick Bostrom, a philosopher at the University of Oxford and the author of an influential book, Superintelligence: Paths, Dangers, Strategies (see “AI Doomsayer Says His Ideas Are Catching On”). Bostrom’s ideas have evidently persuaded others, including the influential entrepreneur and technologist Elon Musk and the theoretical physicist Stephen Hawking, to speak out about the dangers of AI.
Max Tegmark, a professor of physics at MIT and the president of the Future of Life Institute, an organization created to study technologies that might pose an existential risk to humanity, agrees with the report’s conclusions about the opportunities for AI. But he suggests that the report may have been too hasty in dismissing the risks posed by unforeseen technological shifts. “Its claim that superhuman robots are probably impossible is controversial and viewed by some AI researchers as too pessimistic,” he says.
Stuart Russell, a professor of computer science at the University of California, Berkeley, who leads a new foundation announced this week, the Center for Human-Compatible Artificial Intelligence, also questions whether the report is overly rosy in its outlook. “I agree with the report’s assertion that there is ‘no cause for concern that AI is an imminent threat to humankind,’” Russell wrote by e-mail. “But I do not agree with the conclusion, implied by the report’s dismissal of those who have raised long-term concerns, that there is no long-term risk.”
Etzioni, of the Allen Institute for AI, says he hopes the report will inform the public, but he doesn’t expect it to put a stop to outlandish claims. “The extremists will ignore it,” he says. “The hope is that thoughtful people will be influenced. But, you know, Hollywood will not change its perspective.”