## The arrow of time and the importance of time averages and non-ergodicity (wonkish)

31 October, 2012 at 21:39 | Posted in Economics, Statistics & Econometrics | 10 Comments

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic – and, *a fortiori*, in any relevant sense timeless – and concentrate on ensemble averages is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

Ergodicity and the all-important difference between time averages and ensemble averages are difficult concepts that many students of economics have problems understanding. So let me try to explain the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and pay me €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’re taught is the only thing consistent with being **rational**. You would arrest the arrow of time by imagining six different “parallel universes” where the independent outcomes are the numbers from one to six, and then weight them using their probability distribution. Calculating the expected value of the gamble – **the ensemble average** – by averaging over all these weighted outcomes, you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 1/6·€10 billion – 5/6·€1 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of going bankrupt is not a very rosy prospect for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs more heavily than the 17% chance of becoming enormously rich. By computing **the time average** – imagining one real universe where the six different but dependent outcomes occur consecutively – we would soon be aware of our assets disappearing, and, *a fortiori*, that it would be **irrational** to accept the gamble.
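A minimal simulation can make the two perspectives concrete. The sketch below (illustrative only; the starting wealth of €2 billion and the number of rounds are my own assumptions, not from the post) contrasts the ensemble average over many independent one-shot players with the fate of a single player who must survive the sequence of rolls:

```python
import random

random.seed(42)

def play_round():
    """One roll of a fair die: win €10bn on a six, pay €1bn otherwise."""
    return 10 if random.randint(1, 6) == 6 else -1  # amounts in € billions

# Ensemble average: many independent one-shot players, averaged across them.
n = 100_000
ensemble_avg = sum(play_round() for _ in range(n)) / n
print(f"ensemble average per round: {ensemble_avg:+.2f} (theory: +0.83)")

# Time average perspective: ONE player with finite wealth playing repeatedly.
wealth = 2  # assumed starting wealth of €2bn
rounds_survived = 0
for _ in range(100):
    wealth += play_round()
    if wealth <= 0:          # bankrupt: the arrow of time ends the game
        break
    rounds_survived += 1
print(f"rounds survived before ruin: {rounds_survived}")
```

The ensemble average is comfortably positive, yet the single player along a time path typically goes bust within a handful of rolls, which is exactly the asymmetry the post is pointing at.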

[From a mathematical point of view you can (somewhat non-rigorously) describe the difference between ensemble averages and time averages as the difference between **arithmetic averages** and **geometric averages**. Tossing a fair coin and gaining 20% on the stake (S) if winning (heads) and paying 20% on the stake (S) if losing (tails), the arithmetic average return on the stake, assuming the outcomes of the coin-tosses are independent, would be [(0.5·1.2S + 0.5·0.8S) – S]/S = 0%. Considering the two outcomes of the toss as successive rather than independent, the relevant time average would be the geometric average return, √(1.2S·0.8S)/S – 1 ≈ –2%.]
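The bracketed arithmetic can be checked in a couple of lines, a sketch of the ±20% coin toss from the paragraph above:

```python
# Arithmetic (ensemble) vs geometric (time) average for the +-20% coin toss.
up, down = 1.2, 0.8

arithmetic = (up + down) / 2 - 1      # ensemble-average return: exactly 0
geometric = (up * down) ** 0.5 - 1    # time-average return: sqrt(0.96) - 1

print(f"arithmetic (ensemble) average return: {arithmetic:+.2%}")
print(f"geometric (time) average return:      {geometric:+.2%}")  # about -2%
```

The geometric mean is below the arithmetic mean whenever the outcomes fluctuate at all, which is why repeated multiplicative gambles erode wealth even when the one-shot expectation is zero.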

Why is the difference between ensemble and time averages of such importance in economics? Well, basically, because when the processes are assumed to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The *ensemble average* for this asset would be €100 – because here we envision two parallel universes (markets): in one the asset price falls by 50% to €50, and in the other it rises by 50% to €150, giving an average of €100 ((150 + 50)/2). The *time average* for this asset would be €75 – because here we envision one universe (market) where the asset price first rises by 50% to €150, and then falls by 50% to €75 (0.5·150).
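The same €100 example, written out as a sketch:

```python
price = 100.0
up, down = 1.5, 0.5  # a +50% move and a -50% move

# Ensemble average: two parallel markets, one taking each move.
ensemble = (price * up + price * down) / 2   # (150 + 50) / 2

# Time average: one market taking both moves in sequence.
time_path = price * up * down                # 100 * 1.5 * 0.5

print(f"ensemble average: {ensemble}")   # 100.0
print(f"time path value:  {time_path}")  # 75.0
```

Note that the order of the moves does not matter for the time path (multiplication commutes); what matters is that both moves happen to the *same* wealth.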

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.

Just in case you think this is merely an academic quibble without repercussions for our real lives, let me quote from an article by Ole Peters in the *Santa Fe Institute Bulletin* from 2009 – On Time and Risk – which makes it perfectly clear that the flaw in thinking about “rational” decisions in terms of ensemble averages has had real repercussions on the functioning of the financial system:

> In an investment context, the difference between ensemble averages and time averages is often small. It becomes important, however, when risks increase, when correlation hinders diversification, when leverage pumps up fluctuations, when money is made cheap, when capital requirements are relaxed. If reward structures—such as bonuses that reward gains but don’t punish losses, and also certain commission schemes—provide incentives for excessive risk, problems arise. This is especially true if the only limits to risk-taking derive from utility functions that express risk preference, instead of the objective argument of time irreversibility. In other words, using the ensemble average without sufficiently restrictive utility functions will lead to excessive risk-taking and eventual collapse. Sound familiar?

On a more economic-theoretical level, the difference between ensemble and time averages also highlights the problems concerning the neoclassical theory of expected utility that I have raised before (e.g. in *Why expected utility theory is wrong*).

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks: what is the expected return of an investment, calculated as an average over these “parallel universes”? In our coin-tossing example, it is as if one supposes that various “I”s are tossing a coin, and that the losses of many of them will be offset by the huge profits one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

The time average gives a more realistic answer: one thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as, e.g., investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time-average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by simply assuming that time is irreversible. When your assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction of his assets should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the fraction, the greater the leverage – but also the greater the risk. Letting p be the probability that his investment valuation is correct and (1 – p) the probability that the market’s valuation is correct, he optimizes the rate of growth of his investments by investing a fraction of his assets equal to the difference between the probability that he will “win” and the probability that he will “lose”. This means that at each investment opportunity he should (according to the so-called Kelly criterion) invest the fraction 0.6 – (1 – 0.6), i.e. 20% of his assets (and the optimal average growth rate of investment can be shown to be about 2% per bet: 0.6 ln(1.2) + 0.4 ln(0.8) ≈ 0.02).
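The Kelly numbers in the paragraph above can be sketched directly (for an even-money bet, which is the case implicitly assumed in the example):

```python
import math

def kelly_fraction(p):
    """Kelly fraction for an even-money bet won with probability p."""
    return p - (1 - p)

def growth_rate(p, f):
    """Expected log-growth per bet when staking fraction f of wealth."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.6
f = kelly_fraction(p)   # 0.6 - 0.4 = 0.2 -> bet 20% of assets
g = growth_rate(p, f)   # 0.6*ln(1.2) + 0.4*ln(0.8), about 0.02

print(f"Kelly fraction: {f:.0%}, growth rate per bet: {g:.2%}")
```

Betting more than the Kelly fraction would raise the ensemble-average gain per bet, but it *lowers* the time-average growth rate – another face of the same ensemble/time distinction.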

Time-average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risk – creates extensive and recurrent systemic crises. Keeping risk-taking at an appropriate level is a necessary ingredient of any policy aiming to curb it.

## 10 Comments

Very informative and timely. Some of your points overlap with what I am reading in Y. Varoufakis, J. Halevi and N. J. Theocarakis’ *Modern Political Economics: Making sense of the post-2009 world*.

Comment by Dwayne Woods— 31 October, 2012 #

“Would you accept the gamble? If you’re an economics students you probably would, because that’s what you’re taught to be the only thing consistent with being rational.”

No, actually not. Expected utility theory says that you are rational if you maximize expected utility of outcomes, not utility of the expected outcome – in math, max E[U(x)], not U(E[x]). For a risk-averse individual, U() will of course be concave, and thus it may be entirely rational to reject the bet even if it promises a positive expected return. From the article you link to, it seems as if looking at time averages is meant to be an alternative to introducing “ad hoc” utility. But expected utility can be derived from underlying axioms describing preferences over lotteries, and given that it does the job (in the sense that here it describes reasonable behavior), I’m not sure I understand why it’s supposed to be wrong.

Comment by ivansml— 1 November, 2012 #

Daniel Bernoulli already in 1738 showed that one way of resolving the St Petersburg paradox was to introduce a logarithmic utility function. The problem, as I see it, is that this cannot be derived from fundamental considerations, but rather from ad hoc axioms of preferences. And, further, expected utility theory is from a descriptive point of view totally inadequate to handle real choices and decisions (I elaborate on this, via Matthew Rabin, here). One of the great advantages of a time-average approach is that it (contrary to the expected utility/ensemble average approach) transparently shows – in the one universe in which we live and where the arrow of time is operating – that the large probability of losing all we’ve got isn’t compensated by a small probability of becoming enormously rich.

Comment by Lars P Syll— 1 November, 2012 #

What’s fundamental and what’s ad hoc is probably somewhat subjective. But still, the time-average approach is problematic:

1) unlike expected utility, it’s by definition applicable only in repeated gambles, not for single static decisions.

2) focusing on the time average makes sense only when the stochastic process under consideration is actually nonergodic. This holds for your first example (as once you lose all, you’re stuck at zero forever), but it’s not clear how it applies to more standard processes.

3) Kelly criterion, which arises as “optimal” in time average approach, is actually equivalent to maximizing log utility of final wealth. This seems just as ad-hoc as other forms of utility functions.

Comment by ivansml— 1 November, 2012 #

To illustrate, consider the so-called Bargaining Problem, which was what Nash talked about at Cowles. Imagine two or more people bargaining over how to divide some notional pie (an asset, a resource or simply a sum of money). If they come to an agreement, each collects the agreed portion. If not, no one benefits. This problem is central to economics since all trade involves potential gains which, depending on the agreed price, are distributed differently between buyer and seller. From the above-mentioned book:

Von Neumann studied carefully the Bargaining Problem but concluded that it cannot be ‘solved’, that it was indeterminate. He left that project behind, convinced that mathematical analysis cannot recommend to a bargainer how to negotiate with a view to maximising his/her portion.

Comment by Dwayne Woods— 1 November, 2012 #

“Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because here we envision two parallel universes (markets): in one the asset price falls by 50% to €50, and in the other it rises by 50% to €150, giving an average of €100 ((150 + 50)/2). The time average for this asset would be €75 – because here we envision one universe (market) where the asset price first rises by 50% to €150, and then falls by 50% to €75 (0.5·150).”

I think this is wrong! Especially the last part (“The time average for this asset would be 75 €”) because (150+75)/2=112.5 not 75.

Comment by marc— 5 November, 2012 #

Sorry, but you would have failed the exam. Think twice :)

Comment by Lars P Syll— 5 November, 2012 #

Well, I feel special (in a socialistic sense) to be left behind!

Comment by marc— 5 November, 2012 #

He is not 100% convincing!

Comment by marc— 12 November, 2012 #

well I give you that Ole Peters is an interesting chap! I would love to hear him talk about Boltzmann’s 1870s probability theory in economic settings.

However, what I don’t like is the use of “the time average”, since it is not an average at all! It just confuses things! I must say I resent you a little bit for framing the problem that way (even though it is not entirely your fault he he). I would have preferred the win-50%-or-lose-40% coin toss example, since it illustrates the problem in a very clear light.

Comment by marc— 5 November, 2012 #