Artificial intelligence will reshape the world of finance over the next decade or so by automating investing and other services—but it could also introduce troubling systemic weaknesses and risks, according to a new report from the World Economic Forum (WEF).
Compiled through interviews with dozens of leading financial experts and industry leaders, the report concludes that artificial intelligence will disrupt the industry by allowing early adopters to outmaneuver competitors. It also suggests that the technology will create more convenient products for consumers, such as sophisticated tools for managing personal finances and investments.
But most notably, the report points to the potential for big financial institutions to build machine-learning-based services that live in the cloud and are accessed by other institutions.
“The dynamics of machine learning create a strong incentive to network the back office,” says the report’s main author, Jesse McWaters, who leads the AI in Financial Services Project at the World Economic Forum. “A more networked world is more vulnerable to cybersecurity risks, and it also creates concentration risks.”
In other words, financial systems that incorporate machine learning and are accessed through the cloud by many different institutions could present a juicy target for hackers and a single point of systemic failure.
Wall Street is already rapidly adopting machine learning, the technology at the center of the artificial-intelligence boom. Finance firms generally have lots of data and plenty of incentive to innovate. Hedge funds and banks are hiring AI researchers as quickly as they can, and the financial industry is experimenting with back-office automation in a big way. The automation of high-frequency trading has already created systemic risks, as highlighted by several runaway trading events, or “flash crashes,” in recent years.
Andrew Lo, a professor at MIT’s Sloan School of Management, researches the issue of systemic risk in the financial system, and he has previously warned that the system as a whole may be vulnerable because of its sheer complexity.
The WEF report raises other issues as well. It says that big tech companies will have an opportunity to get into finance, often through tie-ins with financial firms, because of their expertise in AI as well as their access to consumer data.
And McWaters says that as AI becomes more widely used in finance, it will be important to consider issues like biased algorithms, which can discriminate against certain groups of people. Financial companies should not be too eager to simply replace staff either, he says. As the study suggests, human skills will remain important even as automation becomes more widespread.