## Ergodicity and randomness in economics (wonkish)

16 Sep, 2012 at 16:42 | Posted in Economics, Statistics & Econometrics | 4 Comments

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks, Causality in Economics, 1979:121

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori in any relevant sense timeless – is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

Ergodicity is a difficult concept that many students of economics have problems understanding. In the very instructive video below, Ole Peters – from the Department of Mathematics at Imperial College London – gives an admirably simple and pedagogical exposition of what it means for probability structures of stationary processes and ensembles to be ergodic. Using a progression of simulated coin flips, his example shows the all-important difference between time averages and ensemble averages for this kind of process:
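The kind of simulation Peters runs can be sketched in a few lines – assuming, as in his usual example, a gamble that multiplies wealth by 1.5 on heads and by 0.6 on tails (the exact numbers in the video may differ; the qualitative point does not):

```python
import random

random.seed(0)

def final_wealth(n_flips):
    """One trajectory: start with wealth 1, multiply by 1.5 on heads
    and by 0.6 on tails."""
    w = 1.0
    for _ in range(n_flips):
        w *= 1.5 if random.random() < 0.5 else 0.6
    return w

n_flips, n_players = 20, 100_000
wealths = [final_wealth(n_flips) for _ in range(n_players)]

# Ensemble average: mean over many parallel players. The expected
# multiplier per flip is 0.5*1.5 + 0.5*0.6 = 1.05 > 1, so this grows.
ensemble_avg = sum(wealths) / n_players

# Typical (time-average) outcome: the per-flip growth factor of a
# single long trajectory is the geometric mean sqrt(1.5*0.6) ≈ 0.95,
# so the median player loses money.
median_wealth = sorted(wealths)[n_players // 2]

print(ensemble_avg)   # near 1.05**20 ≈ 2.65
print(median_wealth)  # near 0.9**10 ≈ 0.35
```

The ensemble average grows even though almost every individual trajectory shrinks – exactly the gap between the two averages that Peters's example turns on.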

Why is the difference between ensemble and time averages of such importance? Well, basically, because when you assume the processes to be ergodic, ensemble and time averages are identical. Let me give an example even simpler than the one Peters gives:

Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100 € – because here we envision two parallel universes (markets): in one universe (market) the asset price falls by 50% to 50 €, and in the other it rises by 50% to 150 €, giving an average of 100 € ((150+50)/2). The time average for this asset would be 75 € – because here we envision one universe (market) where the asset price first rises by 50% to 150 € and then falls by 50% to 75 € (0.5·150).
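The arithmetic of this exact example can be checked in a few lines (a minimal sketch, no assumptions beyond the numbers above):

```python
# An asset at 100 € either rises 50% or falls 50%.
price = 100.0

# Ensemble average: two parallel markets, one move each.
up = price * 1.5     # 150 €
down = price * 0.5   # 50 €
ensemble_average = (up + down) / 2

# Time average: one market, both moves in sequence.
time_path = price * 1.5 * 0.5

print(ensemble_average)  # 100.0
print(time_path)         # 75.0
```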

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.

Just in case you think this is just an academic quibble without repercussion to our real lives, let me quote from an article of Peters in the Santa Fe Institute Bulletin from 2009 – On Time and Risk – that makes it perfectly clear that the flaw in thinking about uncertainty in terms of “rational expectations” and ensemble averages has had real repercussions on the functioning of the financial system:

In an investment context, the difference between ensemble averages and time averages is often small. It becomes important, however, when risks increase, when correlation hinders diversification, when leverage pumps up fluctuations, when money is made cheap, when capital requirements are relaxed. If reward structures—such as bonuses that reward gains but don’t punish losses, and also certain commission schemes—provide incentives for excessive risk, problems arise. This is especially true if the only limits to risk-taking derive from utility functions that express risk preference, instead of the objective argument of time irreversibility. In other words, using the ensemble average without sufficiently restrictive utility functions will lead to excessive risk-taking and eventual collapse. Sound familiar?

Still having problems understanding the ergodicity concept? Let me cite one last example that hopefully will make the concept more accessible on an intuitive level:

Why are election polls often inaccurate? Why is racism wrong? Why are your assumptions often mistaken? The answers to all these questions and to many others have a lot to do with the non-ergodicity of human ensembles. Many scientists agree that ergodicity is one of the most important concepts in statistics. So, what is it?

Suppose you are concerned with determining what the most visited parks in a city are. One idea is to take a momentary snapshot: to see how many people are at this moment in park A, how many are in park B and so on. Another idea is to look at one individual (or a few of them) and to follow them for a certain period of time, e.g. a year. Then, you observe how often the individual goes to park A, how often to park B and so on.

Thus, you obtain two different results: one statistical analysis over the entire ensemble of people at a certain moment in time, and one statistical analysis for one person over a certain period of time. The first one may not be representative of a longer period of time, while the second one may not be representative of all the people. The idea is that an ensemble is ergodic if the two types of statistics give the same result. Many ensembles, like human populations, are not ergodic.
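The park example can be sketched in code – a hypothetical setup assuming, for illustration, the extreme case where each person always visits a single favourite park, which makes the ensemble maximally non-ergodic:

```python
import random

random.seed(42)
parks = ["A", "B", "C"]

# Each of 1000 people has a fixed favourite park they always visit.
favourites = [random.choice(parks) for _ in range(1000)]

# Ensemble statistic: a snapshot of where everyone is right now.
snapshot = {p: favourites.count(p) / len(favourites) for p in parks}

# Time statistic: follow person 0 for 365 days.
person = favourites[0]
visits = [person for _ in range(365)]  # always the same park
time_freq = {p: visits.count(p) / len(visits) for p in parks}

print(snapshot)   # roughly one third for each park
print(time_freq)  # 1.0 for one park, 0.0 for the others
```

The snapshot spreads visits evenly over the parks, while following one individual yields a completely different distribution – the two statistics disagree, so the ensemble is not ergodic.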

1. Lars, where could I read a bit more about it in a layman format? I have a practical issue with some time-series modelling of averages and would be curious whether such arguments can have important implications for our process set-up. Tnx.

• Sorry, but apart from the links I gave in the post, I really don’t know of any other “layman” introductions to ergodicity (if any of you out there knows of any, please feel free to come with suggestions). It’s a rather advanced topic that usually doesn’t even enter into mathematical statistics courses until the second or third semester.

2. I know I’m late to the party, but. . . .

Looking at Peters’s coin toss example, I don’t see how he gets the ensemble average, unless he is making the elementary error of taking the arithmetic mean instead of the geometric mean. Gosh, when I was a kid, the first book on statistics that I read, a popular text, made that point. All of this stuff about time travel strikes me as mystification.

• OK, I think I get it.

Assume repeated bets. In one case, you always pay $1 and half the time get back $1.50 and half the time get back $0.60. Then, assuming you don’t go bust or end up with less than enough to bet, the arithmetic mean makes sense. In the other case, you put your whole remaining stake up at the same odds. Then the geometric mean makes sense.

I think that the second case is what Peters used for his time average, since he did not talk about going bust or not having enough money to make the bet. But in that case he switched between arithmetic and geometric means, which is confusing.
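The commenter's distinction can be made concrete with a short simulation, using the commenter's own numbers (pay $1, receive $1.50 or $0.60 with equal probability):

```python
import random

random.seed(1)
n = 10_000

# Case 1: additive bets — always stake $1, get $1.50 or $0.60 back.
wealth_add = 0.0
for _ in range(n):
    wealth_add += (1.50 if random.random() < 0.5 else 0.60) - 1.0
# The average change per round approaches the arithmetic mean of the
# payoffs minus the stake: 0.5*1.5 + 0.5*0.6 - 1 = +0.05.

# Case 2: multiplicative bets — stake the whole remaining wealth.
wealth_mul = 1.0
for _ in range(n):
    wealth_mul *= 1.50 if random.random() < 0.5 else 0.60
# The per-round growth factor approaches the geometric mean
# sqrt(1.5 * 0.6) ≈ 0.95, so wealth collapses toward zero despite
# the positive arithmetic mean.

print(wealth_add / n)  # near +0.05 per round
print(wealth_mul)      # vanishingly small
```

The additive bettor gains on average while the multiplicative bettor is all but ruined at the same odds – which is why the arithmetic mean fits the first case and the geometric mean the second.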
