Hedge Funds Are Increasingly Turning to AI—and That Might Be a Problem
Financial firms have generally been slow to embrace artificially intelligent stock pickers. Collectively, they have already invested billions in quantitative analysts, who crunch enormous amounts of data and develop powerful non-AI algorithms. Some argue that there isn’t much easy money left on the table for AI to pick up.
Even so, hedge funds are now starting to turn to AI to give them an edge. Hedge fund managers, with their high fees (typically 20 percent of profits and 2 percent off the top of whatever an investor puts in), need to have healthy returns to justify the costs. And this year, two top-performing hedge funds have used machine learning to bump up their profits, according to the Wall Street Journal. One firm said that AI accounts for more than 50 percent of its gains so far.
Not all firms are just getting into AI. One, called Voleon, began experimenting with machine learning for investing in 2008. It lost money until it built a second-generation platform that started turning a profit in 2011 and kept doing so through 2015. Voleon lost money in 2016, though, a sign that AI isn’t a surefire way to beat the markets.
But it’s in San Francisco, where new technology is always heartily embraced, that AI-powered investing might properly come of age. The Economist has just profiled two firms based there that are going big on using AI for investing, including a hedge fund called Numerai that encrypts all its data in a way meant to ensure that no bias can sneak into its systems.
What does that mean for the future of investing? There is a chance AI could make markets more volatile, right when they can least afford it. A new report by the Financial Stability Board warns that AI systems have little data from past financial crises to learn from, and might behave unpredictably if another Great Recession or similar crash were to happen. What’s more, if hedge funds continue to bring AI into the mix, markets could tighten and become much more susceptible to big shocks.