## Hicks on the inapplicability of probability calculus

24 February, 2014 at 18:48 | Posted in Economics, Statistics & Econometrics | 11 Comments

To understand real-world "non-routine" decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks, *Causality in Economics*, 1979:121

To simply assume that economic processes are ergodic — and *a fortiori* in any relevant sense timeless — is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

**Added 25 February:** Commenting on this article, Paul Davidson writes:

After reading my article on the fallacy of rational expectations, Hicks wrote to me in a letter dated 12 February 1983 in which he said “I have just been reading your RE [rational expectations] paper … I do like it very much … You have now rationalized my suspicions and shown me that I missed a chance of labeling my own point of view as nonergodic. One needs a name like that to ram a point home.”

## 11 Comments


IMO uncertainty is not the primary issue. If you have a fat-tailed distribution with a rare catastrophic event every 100 years, that may justify the risk of a specific policy. But if the rare and catastrophic events are more frequent than we think, and we do not know the underlying distribution, then the risk is too high to be justifiable. The issue is whether we know the distribution, not the uncertainty itself. And the distribution is far from known; in fact, it may not exist at all.

http://robertvienneau.blogspot.gr/2010/05/nonergodic-stationary-random-process.html

Comment by Digital Cosmology (@DCosmology)— 24 February, 2014 #

If you think of 2005–2009, then the probabilities that mattered were those of changes in 'phase', for which we had no statistical data. I agree that the concept of a 'distribution' seems inappropriate, but Hicks's argument is that even if we think there is a distribution, we cannot use conventional probabilistic or statistical methods.

This seems to be a part of what is meant by ‘uncertainty’.

Comment by Dave Marsay— 25 February, 2014 #

“but Hicks’s argument is that even if we think there is a distribution, we cannot use conventional probabilistic or statistical methods”

Why not? If there is a distribution, the risks are known and you can design your policy. Uncertainty is a fact of random processes. The outcome of the next toss of a coin is uncertain but the probability is known. If there is certainty, there is nothing to talk about.

If the uncertainty is about the distribution itself then I agree. But that is a different issue.

For example, we have one instance of quantitative easing in the US and no statistical data. Next time the outcome may be different. So inferring any causation between QE and its effects is a fallacy of composition. See this for example: http://www.digitalcosmology.com/Blog/2013/11/03/justifying-quantitative-easing-with-counterfactuals-is-naive-thinking/

Comment by Digital Cosmology (@DCosmology)— 25 February, 2014 #

Suppose you have a real coin with P(Heads) = p unknown. Then you can toss the coin many times to estimate p. But Hicks's point seems to be that we only have the one economy, analogous to one toss of the coin.
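The contrast between many tosses and a single one can be made concrete with a toy simulation (my own sketch; the true p of 0.3 is an assumed value, not a figure from the discussion):

```python
import random

# Toy illustration: the relative frequency over many independent tosses
# estimates p well, while a single observation tells us almost nothing.
random.seed(1)
p_true = 0.3  # assumed for the simulation
tosses = [1 if random.random() < p_true else 0 for _ in range(10_000)]

p_hat = sum(tosses) / len(tosses)  # frequency estimate from 10,000 tosses
one_toss = tosses[0]               # the "one economy" case: a single 0 or 1

print(p_hat)     # close to 0.3
print(one_toss)  # just a 0 or a 1, no basis for estimating p
```

With one economy, as with one toss, the frequency argument that justifies the estimate never gets off the ground.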

Comment by Dave Marsay— 25 February, 2014 #

Hicks's point is about serial correlation, not uncertainty: in the past some decisions were based on other decisions, which may not be the case now. As for your example, the fact that there is a coin, even a biased one, is enough information in many cases. Suppose you are asked to play a game of tossing a biased coin with unknown p, where you lose L when tails comes up and gain G when heads comes up. In many cases this is enough information to play the game regardless of the bias.

Example 1: G = 10, L = 1. The expectation is 11p − 1. If you suspect the bias is reasonably small, you play without knowing p.

Example 2: G = 2, L = 1. You do not play if you think the coin is substantially biased.

Thus, probability calculus gives us enough information (in this case, the expectation function) to decide what to do even though there is uncertainty about the uncertainty (p). In more complicated cases it provides bounds and equilibrium points and paths. I think people who believe otherwise may not understand probability theory.
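The two examples can be checked in a few lines (a minimal sketch; the helper name `expected_payoff` is mine, not part of the comment):

```python
def expected_payoff(p, gain, loss):
    """Expected value of one play: win `gain` on heads (prob p), lose `loss` on tails."""
    return gain * p - loss * (1 - p)

# Example 1: G = 10, L = 1, expectation 11p - 1; positive whenever p > 1/11,
# so even a substantial bias against heads leaves the game worth playing.
print(expected_payoff(0.5, 10, 1))    # 4.5
print(expected_payoff(0.125, 10, 1))  # 0.375, still positive

# Example 2: G = 2, L = 1, expectation 3p - 1; positive only for p > 1/3,
# so a substantial bias makes the game a losing one.
print(expected_payoff(0.25, 2, 1))    # -0.25
```

The shape of the expectation function, not the exact value of p, is what drives the decision in each case.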

Comment by Digital Cosmology (@DCosmology)— 26 February, 2014 #

Your notion of the application of probability seems much more credible than the mainstream practice that I suppose Hicks to have been criticising. Can you recommend a source?

Comment by Dave Marsay— 26 February, 2014 #

My probability theory bible is the book by Papoulis, A., *Probability, Random Variables and Stochastic Processes*. Old but still good, with a brief discussion of the philosophical issues surrounding the four definitions of probability. But one huge problem of the educational establishment is that it has failed to embrace common sense, because in many cases it is afraid of it and also, in a few cases, common sense does not work. Thus they avoid it, and education lacks a common-sense perspective.

BTW, Hicks is not right, because the probability calculus can deal with dependency. Think of an experiment in which a second coin is tossed only when the first coin comes up heads. He is right that "simple" approaches do not apply. For example, some stock price series are serially correlated. You cannot bootstrap in this case and generate a normal distribution of means, because the serial correlation is destroyed. But you could model this as white noise plus a deterministic trend, I guess.
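The bootstrap point can be illustrated with a small simulation (my own sketch, assuming an AR(1)-style series with coefficient 0.9; none of the specifics are from the comment): an i.i.d. resample of a serially correlated series loses the correlation.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

random.seed(0)
# A serially correlated series: each value leans heavily on the previous one.
series = [0.0]
for _ in range(5000):
    series.append(0.9 * series[-1] + random.gauss(0, 1))

# Naive i.i.d. bootstrap: resample the values with replacement,
# which scrambles their temporal order.
resample = random.choices(series, k=len(series))

print(round(lag1_autocorr(series), 2))    # large, near 0.9
print(round(lag1_autocorr(resample), 2))  # near 0: the dependence is gone
```

This is why dependent data call for methods that preserve the ordering (e.g. block resampling) rather than the plain bootstrap.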

Comment by Digital Cosmology (@DCosmology)— 27 February, 2014 #

“Thus, probability calculus gives us enough information, in this case the expectation function, to decide what to do even though there is uncertainty about the uncertainty (p).”

No, it does not. What does "give us enough information" are the statements "you suspect the bias is reasonably small" in the first example and "you think the coin is biased substantially" in the second, which have nothing to do with probability calculus. In fact, they are so vague that your numeric values of G and L are completely irrelevant as long as "you believe the bias is reasonably large/small".

Calculus has nothing to do with this.

Comment by pontus— 26 February, 2014 #

IMO, you should never conceive of math apart from common sense, unless you believe in Platonic forms. Math is a tool. Common sense prevails above all. Use no common sense, and no math, no matter how advanced, will ever help you.

To completely debunk your view, consider Euclidean geometry, which applies in everyday life. The parallel postulate is a common-sense assumption. It works, except when one must use curved geometry. The statement "I suspect parallel lines never meet" is quite reasonable for all practical "local" problems. Ask any architect and he will tell you so.

You are correct that what "gives us enough information" are statements like "you suspect the bias is reasonably small", but the form of the expectation function is also valuable information, unless you believe that information must exclude math. That is an interesting view but flat wrong, because if I am told my function is y = ax + b, I know it is linear as opposed to y = a/x, and this is already enough information without any belief about the values of a.

Comment by Digital Cosmology (@DCosmology)— 27 February, 2014 #

One mistake is to suppose that probability is everything. But is it nothing?

Comment by Dave Marsay— 24 February, 2014 #

[…] Lars Syll recently provided an interesting quote from John Hicks’ 1979 book Causality in Economics. I thought that what Hicks said made an awful lot of sense, so I got my hands on a copy of the book. I have only so far scanned the book but I think that it is something of a masterpiece and I hope that someone suggests reissuing it; it could easily be a standard textbook for Post-Keynesian methodology. […]

Pingback by John Hicks’ Book on Non-Ergodicity: A Forgotten Post-Keynesian Classic | Fixing the Economists— 26 February, 2014 #