Computationally Efficient Markets

Four centuries ago, telescopes were turned to the sky for the first time – and what they saw ultimately launched much of modern science. Over the past twenty years I have begun to explore a new universe – the computational universe – made visible not by telescopes but by computers.
— Stephen Wolfram ("A New Kind of Science", 2002)

So here are some questions of interest to quantitative finance: What do we expect to see when peering into the computational universe? What do "market imperfections" or "market inefficiencies" look like, exactly, from a computational viewpoint? More practically, how do we go about finding them?

The classical view of market efficiency, which can be traced to the work of Eugene Fama and Paul Samuelson in the 1960s, centers on how well market prices reflect all available information. Specifically, in an informationally efficient market, price changes must be unforecastable if they fully incorporate the information and expectations of all market participants. In other words, one can’t routinely profit from information that is already out there. Keep in mind, though, that the process of impounding information into market prices might not be instantaneous. The resulting proposition, called the Efficient Market Hypothesis (or EMH), is “disarmingly simple to state, has far-reaching consequences for academic theories and business practice, and yet is surprisingly resilient to empirical proof or refutation,” according to Andrew Lo. We won’t be joining the ongoing academic debate here. Instead, we look to discover what is deep and enduringly true about the markets, but from a practical, computational point of view.

For perspective, let's recall two interesting bits of history. The first dates to 1850, when the telegraph network between Berlin and Paris still had a gap, and Paul Reuter started a financial information service using carrier pigeons to carry messages between Aachen and Brussels. That was the missing link connecting Berlin and Paris. The carrier pigeons were much faster than the post train, giving Reuter faster access to stock news from the Paris stock exchange. The carrier pigeons were a brilliant idea: a superior means of transport that got information from point A to point B, making markets more efficient in the process. The carrier pigeon remained an advantageous technology until the gap in the telegraph line was closed.

The second bit of history occurred in 1960, when the Ford Foundation provided a grant to the University of Chicago to create the Center for Research in Security Prices (CRSP) tapes. These were Univac computer tapes that held stock price data from the stock exchanges, and they were given away at cost to anybody who cared to study them. This was a historic turning point because, before then, nobody had ready access to comprehensive, machine-readable stock price data. The CRSP tapes contained stock price data all the way back to 1926, and the database was extensively analyzed. As recounted by Robert Shiller, the CRSP tapes were “a breakthrough of science, of computers, of someone getting the data organized, and getting it available.”

The carrier pigeons and the CRSP tapes: both are technologies of their times. They are the underlying computational resources, for communicating and storing data, that enable market data to be processed into knowledge. The view of the markets as seen by the carrier pigeons (if only they could read the tiny strips of paper tied to their feet) is likely very different from that of any man on the street in Paris or Berlin. Similarly, amidst the vast, featureless landscape of seemingly random fluctuations in stock prices on the CRSP tapes, remnants of statistical memory persist as market anomalies when, and only when, the data are processed through a FORTRAN program running on a computer. It is as though we cannot simply talk about markets being informationally efficient, or inefficient, without also considering the bounds of the underlying computational resources required to establish the truth of such a statement.

Such a computational view of market efficiency was proposed in 2009 by Hasanhodzic, Lo, and Viola. They recognized that markets may be efficient for some investors, but not for others, based solely upon differences in their computational capabilities. In particular, it is plausible that a high-memory strategy may “feed off” low-memory ones. In other words, it is precisely the presence of the low-memory strategies that creates opportunities, not present initially, for high-memory strategies to exploit. Our understanding here is that high/low can refer either to memory span (long or short) or to data resolution (high or low). This is certainly a piece of news worth knowing. We now realize that not only should the MVP be built with copious working memory and generous data storage, it must also learn to use them well so as not to get picked off easily by others.
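
To make the intuition concrete, here is a minimal toy sketch in Python (not the model in the Hasanhodzic, Lo, and Viola paper; the sign process and all parameters are hypothetical). Return signs follow a simple two-lag pattern, so a strategy that remembers only one lag (a naive momentum bet) loses on average, while a strategy that remembers two lags profits from the very pattern its short-memory counterpart cannot see.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical return-sign process with two lags of memory: after two
# same-sign moves, the next move reverses with probability 0.7;
# otherwise it is a fair coin flip.
def simulate_signs(n=100_000, p_reverse=0.7):
    s = [1, 1]
    for _ in range(n):
        if s[-1] == s[-2]:
            nxt = -s[-1] if rng.random() < p_reverse else s[-1]
        else:
            nxt = rng.choice([-1, 1])
        s.append(nxt)
    return np.array(s)

signs = simulate_signs()
prev2, prev1, actual = signs[:-2], signs[1:-1], signs[2:]

# Low-memory strategy: remembers one lag and bets the last sign repeats.
pred_low = prev1
# High-memory strategy: remembers two lags and fades two same-sign moves.
pred_high = np.where(prev1 == prev2, -prev1, prev1)

print("low-memory avg payoff per bet: ", np.mean(pred_low * actual))   # negative
print("high-memory avg payoff per bet:", np.mean(pred_high * actual))  # positive
```

In the richer setting of the paper, the twist is that the low-memory traders' own predictable behavior is what creates such patterns in the first place; the toy above only illustrates the payoff to extra memory once a pattern exists.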

More recently, a surprising connection was identified that links market efficiency in finance to computational efficiency in computer science. Philip Maymin showed that markets are efficient if and only if P=NP, which is shorthand for the claim that “any computational problem whose solution can be verified quickly (namely, in polynomial time) can also be solved quickly.” While the question of whether P=NP remains an open problem (with a million-dollar reward for anyone who can resolve it in this new millennium), the overwhelming consensus among computer scientists today appears to be that P ≠ NP. Therefore, following Maymin’s result, we may expect to observe greater inefficiency in the market as the amount of data grows, given the exponentially increasing cost of checking all possible strategies. One cannot help but wonder whether the observed anomalies of value, growth, size, and momentum, among others, are just expressions of this computational phenomenon. After all, everything else being equal, more data should lead to more anomalies, or otherwise make existing anomalies more profitable. That would be good news for traders, if true.
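
To get a feel for why “checking all possible strategies” blows up, consider the simplest conceivable strategy space: deterministic long/short rules that look only at the signs of the last k price moves. There are 2^k possible histories and therefore 2^(2^k) possible rules. The Python sketch below (a hypothetical toy, not Maymin’s construction) counts the rules and brute-forces the small cases on pure noise; tellingly, the best rule found in-sample looks “profitable” even though the series is random, which is exactly the data-mining trap that an exploding search space sets.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
signs = rng.choice([-1, 1], size=5_000)   # a purely random (i.i.d.) sign series

def best_rule_payoff(signs, k):
    """Brute-force every lookup-table rule mapping the last k signs to a position."""
    # Encode each k-sign history as an integer index in [0, 2**k).
    hist = np.zeros(len(signs) - k, dtype=int)
    for j in range(k):
        hist = hist * 2 + (signs[j:len(signs) - k + j] > 0)
    future = signs[k:]
    best = -np.inf
    for table in itertools.product([-1, 1], repeat=2 ** k):   # 2**(2**k) rules in total
        best = max(best, np.mean(np.take(table, hist) * future))
    return best

for k in range(1, 4):
    n_rules = 2 ** 2 ** k
    print(f"lookback {k}: {n_rules:>4} rules, best in-sample payoff {best_rule_payoff(signs, k):+.4f}")

# The rule count is doubly exponential: a lookback of 5 already means 2**32 rules,
# and a lookback of 10 means 2**1024 -- far beyond any brute-force search.
```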

In the final analysis, one might simply argue that perfectly efficient markets are impossible. If markets were perfectly efficient, both informationally and computationally, then there would be no profit to gathering information or conducting searches. As a result, there would no longer be a reason to trade, and markets would eventually collapse. This is known as the Grossman-Stiglitz paradox. Therefore, a degree of market inefficiency is necessary; it determines the amount of effort traders are willing to expend to gather, process, and trade on information using the computational resources available to them. In other words, traders need to be compensated for their efforts by sufficient profit opportunities, i.e., market inefficiencies. Put differently, a trader’s livelihood depends upon market imperfections, just as a market’s normal functioning depends upon traders plying their trade.

Consider how markets impound information into prices, as described in the paper “How Markets Slowly Digest Changes in Supply and Demand”: “Because the outstanding liquidity of markets is always very small, trading is inherently an incremental process, and prices cannot be instantaneously in equilibrium and cannot instantaneously reflect all available information. There is nearly always a substantial offset between latent offer and latent demand, which only slowly gets incorporated in prices.” Under this view, market impact is a mechanical, or rather statistical, phenomenon: the net impact of trades is not only much the same across trades, but also history-dependent. It is surprising to us that long memory in order flow can be compatible with the unpredictability of asset returns; they seem an incongruous mix. We propose to apply machine learning techniques to filter out market noise and to trade on the resulting signals in a statistical manner. While Mr. Market may appear characteristically unhurried these days, his mood swings have become more frequent and more erratic. Only a machine can figure him out.
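
As a minimal sketch of how persistent order flow can coexist with unpredictable returns, consider the Python toy below. It is not the model of the paper: real order-sign series exhibit power-law long memory, whereas here a persistent Markov chain stands in for them, and all parameters are hypothetical. The point it illustrates is that if liquidity absorbs the predictable part of the flow, only the surprise in each order sign moves the price, so order signs are strongly autocorrelated while returns are not.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p_repeat, impact = 200_000, 0.8, 1.0   # hypothetical parameters

# Persistent order signs (a short-memory stand-in for long-memory order flow):
# repeat the previous sign with probability 0.8.
signs = np.empty(n)
signs[0] = 1
for t in range(1, n):
    signs[t] = signs[t - 1] if rng.random() < p_repeat else -signs[t - 1]

# Liquidity absorbs the predictable part of order flow: only the *surprise*
# in the sign moves the price, plus noise.
expected = (2 * p_repeat - 1) * signs[:-1]          # E[s_t | s_{t-1}]
returns = impact * (signs[1:] - expected) + 0.5 * rng.standard_normal(n - 1)

def lag1_autocorr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print("lag-1 autocorrelation of order signs:", round(lag1_autocorr(signs), 3))   # ~0.6
print("lag-1 autocorrelation of returns:   ", round(lag1_autocorr(returns), 3))  # ~0.0
```

The toy is constructed so that nothing is left to predict; the statistical search problem in real markets is to find whatever residual predictability such absorption leaves behind.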

Therefore, we think that having the right architecture for Big Data statistical search baked into the MVP is essential for success in finding both types of inefficiencies, informational and computational. A basic source of information comprising only past prices, coupled with a superior statistical package, might still reveal a few tradable patterns. A richer source of information, e.g., economic news or government statistics in addition to past prices, should provide many more tradable patterns, especially if generous computational resources are available to process and refine them. However, a retail-level trading setup with just historical pricing data and entry-level computing capabilities (e.g., basic technical analysis) is unlikely to survive today’s super-competitive and hyper-efficient markets, and may end up as fodder for faster or higher-memory strategies.

It is important that the MVP be defined right from the get-go to be able to fully explore the opportunity frontier of market imperfections, both informationally and computationally. Remember: Big Data is key, so computation cannot be all about speed; memory and storage matter quite a bit, too. In fact, lots of bits.

From carrier pigeons, the telegraph, the ticker tape, the telephone, and teletypes to modern-day fiber optics and microwave links, these are all but computational resources for transmitting raw bits. In our view, computational resources for transforming bits into statistical trading insights are equally, if not more, important. They define the essence of trading, which many career pigeons today sadly don’t get.

But this pigeon is grounded...

What we observe is not Nature itself but Nature exposed to our method of questioning.
— Werner Heisenberg (1962)

References:

  1. Patterson, Scott (2010). The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It (First Edition). Crown Business.
  2. Fama, Eugene F. (1970, May). Efficient Capital Markets: A Review of Theory and Empirical Work. Journal of Finance. Vol. 25, Issue 2, pp. 383-417. Retrieved from: http://www.e-m-h.org/Fama70.pdf
  3. Lo, Andrew (2007). Efficient Markets Hypothesis. In: The New Palgrave: A Dictionary of Economics (2nd Edition). Retrieved from: http://web.mit.edu/alo/www/Papers/EMH_Final.pdf
  4. Hasanhodzic, Jasmina and Lo, Andrew W. and Viola, Emanuele (2009, August 31). A Computational View of Market Efficiency. Retrieved from: http://lfe.mit.edu/wp-content/uploads/2014/03/Computational-View-of-Market-Efficiency.pdf
  5. Maymin, Philip Z. (2013). A New Kind of Finance. In: Irreducibility and Computational Equivalence: 10 Years After Wolfram’s A New Kind of Science. pp. 89-99. Hector Zenil, ed., Springer Verlag. Retrieved from: http://arxiv.org/pdf/1210.1588.pdf
  6. Maymin, Philip Z. (2011, February 28). Markets are Efficient If and Only If P=NP. Algorithmic Finance 1:1, 1-11. Retrieved from: http://ssrn.com/abstract=1773169
  7. Grossman, Sanford J. and Stiglitz, Joseph E. (1980, June). On the Impossibility of Informationally Efficient Markets. The American Economic Review, 70, pp. 393-408. Retrieved from: https://www.aeaweb.org/aer/top20/70.3.393-408.pdf
  8. Bouchaud, Jean-Philippe and Farmer, J. Doyne and Lillo, Fabrizio (2008, September 11). How Markets Slowly Digest Changes in Supply and Demand. Available at SSRN: http://ssrn.com/abstract=1266681 or http://dx.doi.org/10.2139/ssrn.1266681