The ergodicity problem in economics (wonkish)

6 Dec, 2019 at 15:59 | Posted in Economics | 9 Comments

A surprising reframing of economic theory follows directly from asking the core ergodicity question: is the time average of an observable equal to its expectation value?

At a crucial place in the foundations of economics, it is assumed that the answer is always yes — a pernicious error. To make economic decisions, I often want to know how fast my personal fortune grows under different scenarios. This requires determining what happens over time in some model of wealth. But by wrongly assuming ergodicity, wealth is often replaced with its expectation value before growth is computed. Because wealth is not ergodic, nonsensical predictions arise. After all, the expectation value effectively averages over an ensemble of copies of myself that cannot be accessed.

This key error is patched up with psychological arguments about human behaviour. The consequences are numerous, but over the centuries their root cause has become invisible in the growing formalism. Observed behaviour deviates starkly from model predictions. Paired with a firm belief in its models, this has led to a narrative of human irrationality in large parts of economics. Scientifically, this deserves some reflection: the models were exonerated by declaring the object of study irrational.

Ole Peters / Nature Physics

Paul Samuelson once famously claimed that the ‘ergodic hypothesis’ is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume — as Samuelson and most other mainstream economists do — that ergodicity is essential to economics?

Ole Peters’ article shows why ergodicity is such an important concept for understanding the deep fundamental flaws of mainstream economics:

Ergodicity is sometimes mistaken for stationarity. But although all ergodic processes are stationary, the two properties are not equivalent.

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single realisation of the stationary process need not converge to the expectation of the corresponding random variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability of 1/2 of coming up heads, and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — [H1 + … + Hn]/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 x 1/2 + 1/2 x 1/4 = 3/8, which obviously equals neither 1/2 nor 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
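A short simulation (a hypothetical sketch, not taken from Peters’ paper) makes the distinction concrete: each realisation’s time average settles on its own coin’s bias, never on the ensemble value 3/8.

```python
import random

random.seed(1)  # reproducible illustration

def time_average(p_heads, n=100_000):
    """Long-run fraction of heads along one trajectory of a single coin."""
    return sum(random.random() < p_heads for _ in range(n)) / n

# Once a coin is picked, its time average converges to that coin's own bias...
avg_A = time_average(0.5)    # coin A: converges to 1/2
avg_B = time_average(0.25)   # coin B: converges to 1/4

# ...while the ensemble (expectational) average mixes over both coins.
ensemble = 0.5 * 0.5 + 0.5 * 0.25   # = 3/8

print(avg_A, avg_B, ensemble)
```

No trajectory ever produces 3/8; that number only exists for the ensemble of both coins.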

Instead of arbitrarily assuming that people have a certain type of utility function — as in mainstream theory — time average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When our assets are gone, they are gone. The fact that in some parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage — and risks — creates extensive and recurrent systemic crises.
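The divergence between the two averages can be illustrated with a multiplicative gamble of the kind Peters uses (the +50%/−40% multipliers below are a standard illustration, not figures from this post): the ensemble average of wealth grows every round, yet the time-average growth rate of any single trajectory is negative, so someone who keeps taking the bet is almost surely ruined.

```python
import math
import random

random.seed(2)  # reproducible illustration

UP, DOWN = 1.5, 0.6   # wealth multiplier on heads / tails, fair coin

# Ensemble view: expected wealth grows 5% per round.
expected_multiplier = 0.5 * UP + 0.5 * DOWN   # = 1.05

# Time view: per-round growth of one trajectory is the mean log-multiplier,
# 0.5*log(1.5) + 0.5*log(0.6) = 0.5*log(0.9), which is negative.
time_growth = 0.5 * math.log(UP) + 0.5 * math.log(DOWN)

# One long trajectory: wealth shrinks towards zero even though its
# expectation value grows without bound.
w = 1.0
for _ in range(10_000):
    w *= UP if random.random() < 0.5 else DOWN

print(expected_multiplier, time_growth, w)
```

Replacing wealth with its expectation value before computing growth — the ergodicity error in the quoted passage — would report the cheerful 1.05 and miss the ruin entirely.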


  1. From
    “such a lottery has no downside risk. I’m guaranteed a positive net profit, the only uncertain element is how much better off I will be after the game”
Financial instruments such as derivatives, options, and futures give you the tools to turn lotteries into such risk-free lotteries.
    From a comment:
    “You’re proposing to compute what happens over time in a system, whereas the expectation values of wealth that were the motivating puzzle for utility theory compute what happens across an ensemble of parallel universes.”
Financial index derivatives compute what happens across an ensemble of parallel universes. Often index-fund returns beat the hedge funds.

2. Contrary to Prof. Syll and Davidson, it is not true that Samuelson claimed that ergodicity is essential to economics.
    – Samuelson and Davidson on ergodicity – Álvarez & Ehnts JPKE 2016
    It is strange that Prof Syll is impressed by Ole Peters’ econophysics.
– No real-world empirical evidence is proffered, only an artificial exercise.
    – This is a classroom semi-Randomised Controlled Trial which, according to Prof. Syll on 29 March 2019, is “a method in search of ontological foundations”.
    – Peters uses Bayesian techniques which, according to Prof. Syll on 18 March 2016, “is not a recipe for scientific akribi and progress.”
Several features of Peters’ approach are obscure, and there is no strong evidence that his theories perform any better than the von Neumann-Morgenstern theory.

• Yours truly is not overly impressed by either Bayesianism or econophysics. That’s true. But that in no way implies that everything people applying those theories/approaches produce is worthless. Compared with mainstream theory re expected utility and ergodicity issues, people like Kelly and Peters come up with more plausible explanations than those working within the vNM tradition.

• “Axiom 1 (Completeness) For any lotteries L, M, exactly one of the following holds: L < M, M < L, or L ~ M (either M is preferred, L is preferred, or the individual is indifferent.)” – Wikipedia, vNM utility theorem page
      Finance allows you to bet on M and on L, paying option premiums that are typically much less than potential payoffs. If M pays off you lose only a trivial amount on the option premium you paid for betting on L at the same time. Thus by remaining indifferent, that is by betting on both sides, you are guaranteed to profit (there is counterparty risk but you can insure against a counterparty default, too).
      I don't think the authors of the theorem allowed for financial innovations to profit from betting on both sides. I believe utility theory's mathematical conclusions about efficient prices resulting from rational preferences are nullified by such financial tricks which let you profit by taking both sides of a bet.

  3. I should probably note in passing that Ole Peters is engaging here in as purely analytic reasoning as one is likely to encounter. And, he is doing so with admirable scholarly self-awareness, placing his idea in the history of ideas as well as noting the epistemic status of his “null model”. This is analysis done as analysis should be done, paring back to the essential.
    I have no doubt that mainstream economists will reject the argument out of hand, clinging to the ad hoc psychology of “rationality” rolled up in the hairball of utility maximization.
    For myself, I would like to see more on the implications of the agency of the serial processor.

  4. The financial world has invented Exchange Traded Funds, which allow an individual investor to tie their time-average to the ensemble average. You can buy shares in an S&P 500 ETF and benefit from the index’s performance, rather than picking individual stocks.
    Finance has created tools to make a non-ergodic process ergodic.
    In the coin example, financiers would sell you a derivative representing both coins and you would receive the ensemble average as a return.

• Ole Peters writes:
      “Newton had to differentiate twice to get a constant.”
      “Newton was looking at positions, x, and how they behave over time. His genius allowed him to transform positions until he found something stable about them.”
The VVIX, which is a tradable volatility index of a volatility derivative, is something like a fourth-order differential. If you keep differentiating an index, will you eventually eliminate all uncertainty (financial risk)?
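The pooling idea in this comment can be sketched in a few lines (a hypothetical illustration reusing the two-coin example from the post): a security that pays the average over many independent investors, each stuck with one randomly chosen coin, converges to the ensemble value 3/8 even though no single investor’s time average does.

```python
import random

random.seed(3)  # reproducible illustration

def one_investor(n_tosses=2_000):
    """Time average of heads for an investor stuck with one random coin."""
    p = random.choice([0.5, 0.25])   # coin A or coin B, picked once, kept forever
    return sum(random.random() < p for _ in range(n_tosses)) / n_tosses

# A hypothetical "index" pools many independent investors and pays the
# ensemble average, which converges to 1/2 * 1/2 + 1/2 * 1/4 = 3/8.
index_payout = sum(one_investor() for _ in range(1_000)) / 1_000

print(index_payout)
```

Note the caveat: the pooled payout is ergodic only for the buyer of the index; each underlying investor still experiences a non-ergodic trajectory.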

5. Very good post. I first encountered these arguments in Yves Smith’s “Econned” and in “Debunking Economics”.

As for your last observation, over the past 40 years we’ve seen that the central banks are very good at providing a time-machine for the financial world. 😉

  6. I am pretty sure $2000 per month is more than $12,000 a year.

