## What is ergodicity?

23 November, 2016 at 10:27 | Posted in Economics | 2 Comments

Why are election polls often inaccurate? Why is racism wrong? Why are your assumptions often mistaken? The answers to all these questions, and to many others, have a lot to do with the non-ergodicity of human ensembles. Many scientists agree that ergodicity is one of the most important concepts in statistics. So, what is it?

Suppose you want to determine which parks in a city are the most visited. One idea is to take a momentary snapshot: to see how many people are in park A at this moment, how many are in park B, and so on. Another idea is to look at one individual (or a few individuals) and follow him for a certain period of time, e.g. a year. Then you observe how often the individual goes to park A, how often he goes to park B, and so on.

Thus, you obtain two different results: one statistical analysis over the entire ensemble of people at a certain moment in time, and one statistical analysis for one person over a certain period of time. The first may not be representative of a longer period of time, while the second may not be representative of all the people.

The idea is that an ensemble is ergodic if the two types of statistics give the same result. Many ensembles, like human populations, are not ergodic.

The importance of ergodicity becomes manifest when you think about how we all infer various things, how we draw conclusions about something while having information about something else. For example, someone goes to a restaurant once and likes the fish, and the next time he goes to the same restaurant he orders the chicken, confident that it will be good. Why is he confident? Or one observes that a newspaper has printed some inaccurate information at one point in time and infers that the newspaper will publish inaccurate information in the future. Why are these inferences OK, while others, such as “more crimes are committed by black persons than by white persons, therefore each individual black person is not to be trusted,” are not?

The answer is that the ensemble of articles published in a newspaper is more or less ergodic, while the ensemble of black people is not at all ergodic. If one counts how many mistakes appear in an entire issue of a newspaper, and then counts how many mistakes one news editor makes over time, one finds the two results almost identical (not exactly, but approximately equal). However, if one takes the number of crimes committed by black people on a certain day divided by the total number of black people, and then follows one randomly picked black individual over his life, one will not find that, e.g. each month, this individual commits crimes at the same rate as the crime rate determined over the entire ensemble. Thus, one cannot use ensemble statistics to properly infer what a certain individual is and is not likely to do.

Or take an even clearer example: in an election, party A gets a% of the votes, party B gets b%, and so on. However, this does not mean that, over the course of their lives, each individual votes for party A in a% of elections, for party B in b%, and so on …

A similar problem is faced by scientists in general when they try to infer some general statement from various particular experiments. When is a generalization correct and when isn’t it? The answer involves ergodicity. If the generalization is made about an ergodic ensemble, it has a good chance of being correct.

Paul Samuelson once famously claimed that the “ergodic hypothesis” is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume, as Samuelson and most other mainstream economists do, that ergodicity is essential to economics?

In this video Ole Peters shows why ergodicity is such an important concept for understanding the deep fundamental flaws of mainstream economics:

Sometimes ergodicity is mistaken for stationarity. But although all ergodic processes are stationary, the two properties are not equivalent.

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average. Say we have two coins, where coin A has a probability of 1/2 of coming up heads, and coin B has a probability of 1/4 of coming up heads. We pick either of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero according as the coin comes up heads or tails. This process is obviously stationary, but the time average — [H1 + … + Hn]/n — converges to 1/2 if coin A is chosen, and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, and so their expectational average is 1/2 × 1/2 + 1/2 × 1/4 = 3/8, which obviously equals neither 1/2 nor 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
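The two-coin example can be checked directly with a small simulation (an illustrative sketch, not from the original post; the function name and seed are my own):

```python
import random

def time_average_heads(p_heads, n_tosses, rng):
    """Long-run fraction of heads when one particular coin is tossed repeatedly."""
    return sum(rng.random() < p_heads for _ in range(n_tosses)) / n_tosses

rng = random.Random(42)
n = 100_000

# One realization per coin: once a coin is chosen, it is tossed forever.
avg_a = time_average_heads(0.5, n, rng)    # time average if coin A was picked
avg_b = time_average_heads(0.25, n, rng)   # time average if coin B was picked

# The probabilistic (expectational) average mixes both coins:
ensemble_avg = 0.5 * 0.5 + 0.5 * 0.25     # = 0.375

print(avg_a, avg_b, ensemble_avg)
```

The simulated time averages settle near 1/2 and 1/4 respectively, while the expectational average is 3/8 — no single realization of the process ever produces it.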

In an ergodic system time is irrelevant and has no direction. Nothing changes in any significant way; at most you will see some short-lived fluctuations. An ergodic system is indifferent to its initial conditions: if you re-start it, after a little while it always falls into the same equilibrium behavior.

For example, say I gave 1,000 people one die each, had them roll their die once, added all the points rolled, and divided by 1,000. That would be a finite-sample average, approaching the ensemble average as I include more and more people.

Now say I rolled a die 1,000 times in a row, added all the points rolled and divided by 1,000. That would be a finite-time average, approaching the time average as I keep rolling that die.

One implication of ergodicity is that ensemble averages will be the same as time averages. In the first case, it is the size of the sample that eventually removes the randomness from the system. In the second case, it is the time that I’m devoting to rolling that removes randomness. But both methods give the same answer, within errors. In this sense, rolling dice is an ergodic system.
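The two dice procedures above can be sketched in a few lines (again an illustrative simulation with an arbitrary seed, not code from the post):

```python
import random

rng = random.Random(7)

# Ensemble average: 1,000 people each roll one die once.
ensemble = sum(rng.randint(1, 6) for _ in range(1000)) / 1000

# Time average: one person rolls a single die 1,000 times.
time_avg = sum(rng.randint(1, 6) for _ in range(1000)) / 1000

# Both finite averages approach the same value, the fair-die expectation 3.5.
print(ensemble, time_avg)
```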

I say “in this sense” because if we bet on the results of rolling a die, wealth does not follow an ergodic process under typical betting rules. If I go bankrupt, I’ll stay bankrupt. So the time average of my wealth will approach zero as time passes, even though the ensemble average of my wealth may increase.

A precondition for ergodicity is stationarity, so there can be no growth in an ergodic system. Ergodic systems are zero-sum games: things slosh around from here to there and back, but nothing is ever added, invented, created or lost. No branching occurs in an ergodic system, no decision has any consequences because sooner or later we’ll end up in the same situation again and can reconsider. The key is that most systems of interest to us, including finance, are non-ergodic.

## 2 Comments


Many thanks – I’d been confused about ergodicity and this has really helped.

Comment by Peter Baker — 23 November, 2016

Dear Professor,

I have a couple of remarks I’d like to share with you to make sure I understood the logic of the time average vs. aggregate average.

As to the game Peters proposes in his lecture:

1. Is the single individual losing over time because of the tail probability of one guy always getting the right answer and thus making millions?

2. Is the effect of the very lucky guy more than compensating for the effect of the very unlucky guy because of the asymmetric payoff of the game (if you win the payoff is +50% and if you lose it is -40%)?

3. And lastly, are these results due to the dependency of the game?

Thank you,

regards

Andrea

Comment by Andrea — 24 November, 2016