Various economists argue that the efficiency of a market ought to be clearly evident in the returns it produces: the more efficient the market, the more random its returns, and a perfectly efficient market should produce returns that are completely random.

That would appear to support the widespread belief that humans are unable to tell the difference between financial market returns and, say, a sequence of coin tosses. A number of experiments seem to back up this belief, showing, for example, that people studying randomly generated data quickly identify ‘trends’ in it and develop hypotheses about them.

To find out whether humans can reliably distinguish between real and random market data, Jasmina Hasanhodzic at AlphaSimplex, an investment strategy company in Cambridge, Massachusetts; Andrew Lo at MIT’s Sloan School of Management, who founded AlphaSimplex; and Emanuele Viola at Northeastern University have devised a simple experiment.

They have created a computer game in which a player is shown two time series of data. One is real data from a financial market, such as the US Dollar Index or the spot price of gold. The other is the same data randomly rearranged. The player has to guess which is the real series and is immediately told whether the guess is right or wrong.
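The mechanics of the game are easy to reproduce. Here is a minimal sketch in Python of a single round; the article does not say exactly how the data is rearranged, so this version assumes the series’ period-to-period returns are shuffled and recompounded into a new price path (a common choice, since it preserves the distribution of moves while destroying their order). The function names are mine, and loading real market data is left out.

```python
import random

def shuffled_counterpart(prices):
    """Build the fake series by permuting the real series' returns.

    Assumption: 'randomly rearranged' means shuffling the
    period-to-period returns and recompounding them into a price path,
    which keeps the same set of moves but scrambles their order.
    """
    returns = [b / a for a, b in zip(prices, prices[1:])]
    random.shuffle(returns)
    fake = [prices[0]]
    for r in returns:
        fake.append(fake[-1] * r)
    return fake

def play_round(real_prices):
    """One round: show both series in random order, ask which is real."""
    candidates = [("real", real_prices),
                  ("fake", shuffled_counterpart(real_prices))]
    random.shuffle(candidates)
    for i, (_, series) in enumerate(candidates, start=1):
        print(f"Series {i} starts: {[round(p, 2) for p in series[:8]]} ...")
    guess = int(input("Which is the real market, 1 or 2? "))
    correct = candidates[guess - 1][0] == "real"
    print("Right!" if correct else "Wrong.")  # immediate feedback, as in the game
    return correct
```

In the real test the two series are shown as charts rather than printed numbers, but the key ingredient is the same: immediate right-or-wrong feedback after every guess, which is what gives players something to learn from.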

Hasanhodzic and co call this a financial Turing test, and anybody can sign up and take it on their website.

In their experiment, 78 people took the test, with each contest lasting two weeks.

The results show that humans are actually rather good at this game. After a few guesses, most people quickly learn to distinguish the real data from the random stuff. “The results provide overwhelming statistical evidence (p-values of at most 0.5%) that humans can quickly learn to distinguish actual price series from randomly generated ones,” say Hasanhodzic and co.
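The article doesn’t say which statistical test produced that figure, but the most natural sanity check is a one-sided binomial test: under the null hypothesis that a player is simply guessing, each answer is right with probability 0.5, and we ask how surprising the observed hit rate would be. A sketch using scipy, with made-up numbers for illustration (the paper’s actual data and test may differ):

```python
from scipy.stats import binomtest

# Hypothetical player: 18 correct out of 25 guesses (not the paper's data).
n_correct, n_trials = 18, 25

# Null: guessing at random (p = 0.5). Alternative: better than chance.
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"hit rate = {n_correct / n_trials:.2f}, one-sided p = {result.pvalue:.4f}")
# Prints p ≈ 0.0216 -- suggestive for one player; pooling many players
# is what would drive p-values down to the 0.5% level the paper reports.
```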

It’s not hard to see why. In feedback sessions, players said that for a given market the real data was consistently smoother than the randomised data, or consistently jumpier, and that these patterns were easy to spot after a few goes.
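One candidate statistic behind that impression of smoothness is volatility clustering: in real markets, large moves tend to follow large moves and calm spells follow calm spells, and random rearrangement destroys exactly that structure. The article doesn’t identify the cue players actually used, so treat this as a plausible guess; the sketch below measures it as the lag-1 autocorrelation of absolute returns, which should be positive for a real series and near zero for its shuffled counterpart.

```python
def abs_return_autocorr(prices):
    """Lag-1 autocorrelation of absolute returns.

    Positive values indicate volatility clustering: turbulent and calm
    stretches, rather than the uniform jitter of a shuffled series.
    """
    rets = [abs(b / a - 1.0) for a, b in zip(prices, prices[1:])]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / n
    if var == 0.0:
        return 0.0
    cov = sum((rets[i] - mean) * (rets[i + 1] - mean)
              for i in range(n - 1)) / n
    return cov / var
```

Comparing this number for a real series and for the output of shuffled_counterpart above makes the players’ “it just looks different” intuition concrete: the ordering of moves carries information even when the moves themselves are identical.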

That’s an intriguing result, but what to make of it? First, let’s look at what the study does not address. It says nothing about predictability: a truly random market is entirely unpredictable, by definition. There is good evidence that real markets are not random and that their behaviour can be described by fairly simple principles. That doesn’t make them predictable, however (although we have previously looked at evidence that certain kinds of bubble markets might be predictable).

Neither does the study address whether humans are good at making predictions, that is, whether they are better at predicting the future performance of a market than, say, a coin toss.

So what does it show? It shows that humans are good at pattern recognition. Nothing more and nothing less.

Ref: arxiv.org/abs/1002.4592: Is It Real, or Is It Randomized?: A Financial Turing Test
