Causal Entropic Forces

You have to die. Every day.
Until the Omega is destroyed.
— “Edge of Tomorrow: Live, Die, Repeat.” (2014)

Omega has the ability to control time. “Whenever an Alpha is killed, the Omega starts the day over again, but you see, this time it can remember what's going to happen,” explained Rita. And an enemy that knows the future can't lose.

Rita Vrataski (Emily Blunt) and Dr. Carter (Noah Taylor) are telling Bill Cage (Tom Cruise) his combat mission.

Alex Wissner-Gross and Cameron Freer recently proposed “a causal generalization of entropic forces” that they showed can induce certain patterns of behavior with some very striking characteristics. One would not guess those outcomes by looking purely at the constraint that produces them. Underlying this set of intriguing behaviors is simply the computational capability to integrate over all possible futures so as to maximize the rate of entropy production over an entire trajectory. The observed behavior is considered to emerge from such a computation, without explicit goal-driven programming. In our view, these behaviors bear a striking resemblance to examples we have seen in Swarm Intelligence (e.g., ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling) or Artificial Life (aka A-Life or “life-as-it-might-be”). A key technical difference, it seems, lies in their underlying optimization algorithms. For instance, the causal entropic force F is given by F = T ∇Sτ, where T is the reservoir temperature and Sτ is the entropy associated with the possible future paths over the time horizon τ. Further development of interesting applications based on causal entropic forces has been undertaken by Sergio Hernandez, who has also produced an interesting collection of videos along with code samples to demonstrate how it could be made to work.

“Physical agents driven by causal entropic forces might be viewed from a Darwinian perspective as competing to consume future histories,” according to the interpretation of Wissner-Gross and Freer. “In practice, such agents might estimate causal entropic forces through internal Monte Carlo sampling of future histories generated from learned models of their world. Such behavior would then ensure their uniform aptitude for adaptiveness to future change due to interactions with the environment, conferring a potential survival advantage.” This is powerful stuff, indeed. Certainly something we should look into more closely before releasing MVP into the trading jungle.

This may be a good time to examine, by way of an analogy with modern physics, how our approach to arbitrage trading based on speculated models might work. In 2009, while stranded in the south of France towards the end of his summer vacation, Erik Verlinde found time to put forth a heuristic argument starting from first principles that shows Newton's law of gravitation naturally arises in a theory in which space emerges through a holographic scenario. The key idea is that gravity is essentially a statistical effect, and a manifestation of entropy in the universe. More specifically, Verlinde uses the holographic principle to consider what is happening to a small mass at a certain distance from a bigger mass, say a star or a planet. Moving the small mass a little means changing the information content (or entropy) of a hypothetical holographic surface between both masses. This change of information is linked to a change in the energy of the system. Then, using statistics to consider all possible movements of the small mass and the energy changes involved, Verlinde finds movements toward the bigger mass are thermodynamically more likely than others. This effect can be seen as a net force pulling both masses together. This is called an entropic force, as it originates from the most likely changes in information content. This means that, in order to understand motion, one must keep track of the amount of information.
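The chain of steps above can be compressed into a short back-of-the-envelope derivation (standard symbols; a paraphrase of Verlinde's heuristic argument, not his full treatment):

```latex
% Entropy change when mass m moves a Compton wavelength toward the screen:
\Delta S = 2\pi k_B \,\frac{mc}{\hbar}\,\Delta x
% A holographic screen of area A = 4\pi r^2 carries N bits:
N = \frac{A c^3}{G \hbar}
% Equipartition of the screen's energy E = Mc^2 over those bits:
Mc^2 = \tfrac{1}{2}\, N k_B T
\;\;\Rightarrow\;\;
k_B T = \frac{G \hbar M}{2\pi c\, r^2}
% The entropic force relation F\,\Delta x = T\,\Delta S then yields
F = T\,\frac{\Delta S}{\Delta x}
  = 2\pi k_B \frac{mc}{\hbar}\cdot\frac{G \hbar M}{2\pi k_B c\, r^2}
  = \frac{G M m}{r^2}.
```

Newton's inverse-square law drops out with no mention of a gravitational field, only of information on the screen.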

The central notion needed to derive gravity is information, measured in terms of entropy. Information causes motion. Gravity is thus an emergent phenomenon, and not one of the fundamental forces as previously thought, which is perhaps why gravity has been so hard to unify with the other three fundamental forces. Verlinde’s conceptual model has held up so far and has demonstrated considerable explanatory power in a holographic context in which space is emerging:

  1. it can derive the familiar Newton’s law of gravitation (which nobody has been able to do for 300 years);
  2. it can explain inertia (which nobody had thought needed explaining);
  3. it recovers Newton’s second law of motion (i.e., the familiar F = ma); and
  4. it can generalize to a relativistic situation and derive Einstein’s equations.

Newton’s Law of Gravitation (accepted without explanation for 300 years).

Following Verlinde’s line of reasoning, other physicists have been able to carry out immediate follow-on work in cosmology. For example, the need for mysterious dark energy may be obviated by an alternative interpretation of the accelerated expansion of the universe as resulting from an entropic force naturally arising from information storage on the holographic horizon surface screen. We can think of Verlinde’s heuristic argument as representing a speculated model for the “origin of gravity.” After all, it is based on the postulated holographic principle and has not been rigorously tested in experiments. However, by simply adhering to the direct implications of the speculated model, one can deduce certain immediate “stylized facts” about the universe: some of which we have earlier observed empirically (e.g., Newton's law of gravitation), but others for which we had not even imagined a connection existed (e.g., dark energy). The analogy carries over effortlessly to the financial universe. Arbitrage trading based on a speculated model is possible if multiple sets of empirical observations, both old and new, can be nicely tied together this way and help calibrate the internal parameters of the speculated model for added precision.

In 1946, as Stanislaw Ulam, a mathematician then working at Los Alamos, was convalescing from an illness and playing solitaire, he pondered an interesting question: What are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, Ulam wondered whether a more practical method than “abstract thinking” might not be to lay it out, say, one hundred times and simply observe and count the number of successful plays. Thus began the first attempts to practice what Nicholas Metropolis later dubbed the “Monte Carlo method,” after John von Neumann learned of the approach from Ulam and began developing it. The use of Monte Carlo methods requires large quantities of random numbers, which spurred the development of pseudorandom number generators that can produce sequences “random enough” in a certain sense to be useful.
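Ulam's recipe is easy to reproduce today. Canfield's rules are intricate, so as a stand-in the sketch below applies the same lay-it-out-and-count idea to a card question with a known answer: the probability that a shuffled deck is a derangement (no card in its original position), which for 52 cards is essentially 1/e ≈ 0.3679. The helper names are our own.

```python
import random

def is_derangement(deck):
    """True if no card sits in its original position."""
    return all(card != i for i, card in enumerate(deck))

def estimate(trials=100_000, n_cards=52, seed=42):
    """Ulam's recipe: deal many times and count the successes."""
    rng = random.Random(seed)
    deck = list(range(n_cards))
    wins = 0
    for _ in range(trials):
        rng.shuffle(deck)
        wins += is_derangement(deck)
    return wins / trials

p = estimate()
print(p)  # close to 1/e ≈ 0.3679
```

A hundred thousand "deals" take a fraction of a second on a laptop, where the exact combinatorial calculation that stumped Ulam for Canfield remains genuinely hard.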

A Kindred Spirit: The Busy Count of Monte Carlo.

Back at Los Alamos, they began to plan actual calculations on the first electronic computer, the ENIAC. In von Neumann’s formulation of the neutron diffusion problem, each neutron history is analogous to a single game of solitaire, and the use of random numbers to make the choices along the way is analogous to the random turn of the cards. But after a series of “games” has been played, how does one extract meaningful information? For each of the thousands of neutrons, the variables describing the chain of events are stored, and this collection constitutes a numerical model of the process being studied. The collection of variables is analyzed using statistical methods identical to those used to analyze experimental observations of physical processes. One can thus extract information about any variable that was accounted for in the process.
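A grossly simplified sketch of that scheme, with invented parameters (a one-dimensional slab, exponential flight lengths, a fixed absorption probability per collision): each call follows one neutron history, the per-history variables are stored, and the stored collection is then analyzed statistically, just like experimental data.

```python
import random
import statistics

def neutron_history(rng, slab=10.0, mean_free_path=1.0, absorb_p=0.3):
    """Follow one neutron through the slab: fly a random distance, then
    scatter or be absorbed. Returns (fate, depth, collisions) - the
    per-history variables we store."""
    x, direction, collisions = 0.0, 1.0, 0
    while True:
        x += direction * rng.expovariate(1.0 / mean_free_path)
        if x < 0.0:
            return ("reflected", x, collisions)
        if x > slab:
            return ("transmitted", x, collisions)
        collisions += 1
        if rng.random() < absorb_p:
            return ("absorbed", x, collisions)
        direction = rng.choice((-1.0, 1.0))  # isotropic scatter, 1D style

rng = random.Random(7)
histories = [neutron_history(rng) for _ in range(10_000)]

# The stored collection is analyzed like experimental observations.
fates = [h[0] for h in histories]
transmission = fates.count("transmitted") / len(histories)
mean_collisions = statistics.mean(h[2] for h in histories)
print(transmission, mean_collisions)
```

Any variable recorded along the way (depth of absorption, collision counts, final fate) can be tabulated afterward, which is exactly the point of storing the full chain of events per history.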

Richard Feynman, who had worked at Los Alamos administering the computation group during the war years, was developing his “sum over histories” version of quantum mechanics at Cornell after the war. In Feynman’s formulation, a probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time. Feynman’s path integral method is mathematically equivalent to Schrödinger’s wavefunction and Heisenberg’s matrix formulations. As Feynman wrote in 1948, “there is a pleasure in recognizing old things from a new point of view,” and “there is always the hope that the new point of view will inspire an idea for the modification of present theories.” Although the path integral is often much harder to compute, it provides an intuitive way to think about particle interactions even without actually evaluating the integral.

Feynman: The Man with a Life of Many Paths.

The process of sampling alternative paths reveals essential features of quantum mechanics, one of which is the inclination of electrons to “explore all paths”. Freeman Dyson recalled Feynman describing his new method this way: “the electron does anything it likes; it just goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wavefunction.” The electron is a free spirit indeed. In fact, the electron is so free-spirited that it refuses to choose which path to follow – so it tries them all.
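The “try them all” picture can be made literal on a tiny lattice. The sketch below, our own toy in natural units (ħ = m = Δt = 1) in the spirit of teaching treatments of the sum over paths, enumerates every discretized path between fixed endpoints and adds up exp(iS/ħ) for the free-particle action:

```python
import itertools
import cmath

def amplitude(x_start=0.0, x_end=0.0, n_slices=4, grid=(-2, -1, 0, 1, 2),
              mass=1.0, dt=1.0, hbar=1.0):
    """Sum exp(i*S/hbar) over every lattice path with fixed endpoints.
    Free-particle action: S = sum over segments of (m/2)*((dx/dt)**2)*dt."""
    total = 0.0 + 0.0j
    n_paths = 0
    for middle in itertools.product(grid, repeat=n_slices - 1):
        path = (x_start, *middle, x_end)
        action = sum(0.5 * mass * ((path[k + 1] - path[k]) / dt) ** 2 * dt
                     for k in range(n_slices))
        total += cmath.exp(1j * action / hbar)  # each path contributes a phase
        n_paths += 1
    return total, n_paths

amp, count = amplitude()
print(abs(amp), count)  # all 125 lattice paths contribute
```

Every path, however wild, carries the same-magnitude phase; paths far from the classical straight line mostly cancel one another, while near-classical paths add up coherently, which is the intuition behind the stationary-phase dominance of the classical trajectory.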

Path Integral: Summing over all possibilities. (Image Credit: Einstein Online).

In a simulated universe, one has the luxury of trying every single path, i.e., mind-numbingly and painstakingly exploring every unique permutation of events, in order to realize optimal coordination and control. All such efforts can be viewed as part of a giant search process, bounded in space by maximal causal entropy and in time by minimum coordination latency. To the outside world, what really matters, from a macroscopic viewpoint, is the net sum of all such internal microscopic efforts, mostly unseen, unknown, and unappreciated. But they all add up to generate a final observable difference. That bit of difference is all that really matters in the end. What would you do in this world if you had but one life to live?

Two roads diverged in a wood, and I –
I took the one less traveled by, and that has made all the difference.
— Robert Frost (“The Road Not Taken”, 1920)


  1. Wissner-Gross, Alex and Freer, Cameron (2013). Causal Entropic Forces. Physical Review Letters, 110, 168702, pp. 1-5.
  2. Metropolis, Nicholas and Ulam, Stanislaw (1949). The Monte Carlo Method. Journal of the American Statistical Association, Vol. 44, No. 247, pp. 335-341.
  3. Metropolis, Nicholas (1987). The Beginning of the Monte Carlo Method. Los Alamos Science (Special Issue), pp. 125-130.
  4. Keim, Brandon (2009, March 10). Humans No Match for Go Bot Overlords. Wired.
  5. Feynman, Richard P. (2006). QED: The Strange Theory of Light and Matter. Princeton University Press. Available as the Douglas Robb Memorial Lectures from the University of Auckland.
  6. Feynman, Richard P. (1948). Space-Time Approach to Non-Relativistic Quantum Mechanics. Rev. Mod. Phys., 20 (2), pp. 367-387.
  7. Taylor, Edwin F., Vokos, Stamatis, O’Meara, John M., and Thornber, Nora S. (1998). Teaching Feynman’s Sum-Over-Paths Quantum Theory. Computers in Physics, Vol. 12, No. 2, pp. 190-199.
  8. Verlinde, Erik P. (2011). On the Origin of Gravity and the Laws of Newton. JHEP, 04, 029.