Artificial intelligence will reshape the world of finance over the next decade or so by automating investing and other services—but it could also introduce troubling systemic weaknesses and risks, according to a new report from the World Economic Forum (WEF).
Compiled through interviews with dozens of leading financial experts and industry leaders, the report concludes that artificial intelligence will disrupt the industry by allowing early adopters to outmaneuver competitors. It also suggests that the technology will create more convenient products for consumers, such as sophisticated tools for managing personal finances and investments.
But most notably, the report points to the potential for big financial institutions to build machine-learning-based services that live in the cloud and are accessed by other institutions.
“The dynamics of machine learning create a strong incentive to network the back office,” says the report’s main author, Jesse McWaters, who leads the AI in Financial Services Project at the World Economic Forum. “A more networked world is more vulnerable to cybersecurity risks, and it also creates concentration risks.”
In other words, financial systems that incorporate machine learning and are accessed through the cloud by many different institutions could present a juicy target for hackers and a single point of systemic failure.
Wall Street is already rapidly adopting machine learning, the technology at the center of the artificial-intelligence boom. Finance firms generally have lots of data and plenty of incentive to innovate. Hedge funds and banks are hiring AI researchers as quickly as they can, and the financial industry is experimenting with back-office automation in a big way. The automation of high-frequency trading has already created systemic risks, as highlighted by several runaway trading events, or “flash crashes,” in recent years.
Andrew Lo, a professor at MIT’s Sloan School of Management, researches the issue of systemic risk in the financial system, and he has previously warned that the system as a whole may be vulnerable because of its sheer complexity.
The WEF report raises other issues as well. It says that big tech companies will have an opportunity to get into finance, often through tie-ins with financial firms, because of their expertise in AI as well as their access to consumer data.
And McWaters says that as AI becomes more widely used in finance, it will be important to consider issues like biased algorithms, which can discriminate against certain groups of people. Financial companies should not be too eager to simply replace staff either, he says. As the study suggests, human skills will remain important even as automation becomes more widespread.