## Probability and economics

17 January, 2013 at 14:12 | Posted in Economics, Statistics & Econometrics

Modern neoclassical economics relies to a large degree on the notion of probability.

To be amenable to applied economic analysis at all, economic observations allegedly have to be conceived of as random events that are analyzable within a probabilistic framework.

But is it really necessary to model the economic system as one in which randomness can be analyzed and understood only on the basis of an *a priori* notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, neoclassical economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and *a fortiori* is only a fact of a probability-generating (nomological) machine or a well-constructed experimental arrangement or “chance set-up”.

**Just as there is no such thing as a “free lunch,” there is no such thing as a “free probability.”** To be able to talk about probabilities at all, you have to specify a model. In statistics, any process you observe or measure is called an *experiment* (e.g. rolling a die), and the results obtained are the *outcomes* or *events* of that experiment (e.g. rolling a 3 or a 5). If there is no chance set-up or model that generates these probabilistic outcomes or events, then, strictly speaking, there is no event at all.
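
The die example can be made concrete with a small simulation. The sketch below is illustrative only: the fair-die model with its 1/6 probabilities is the explicitly specified “chance set-up”, and talk of a probability such as P(rolling a 3) = 1/6 only makes sense relative to that specified model:

```python
import random
from collections import Counter

# The "chance set-up" made explicit: a model assigning probability 1/6
# to each face of a fair die. Probabilities exist relative to this model.
FACES = [1, 2, 3, 4, 5, 6]
MODEL_PROB = {face: 1 / 6 for face in FACES}

def run_experiment(n_rolls, seed=42):
    """Roll the die n_rolls times; return the empirical frequency of each face."""
    rng = random.Random(seed)
    counts = Counter(rng.choice(FACES) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in FACES}

freqs = run_experiment(60_000)
for face in FACES:
    print(f"face {face}: model {MODEL_PROB[face]:.3f}, observed {freqs[face]:.3f}")
```

Without the model lines at the top, the observed frequencies are just counts; it is the specified chance set-up that licenses calling them estimates of probabilities.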

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And to be of any empirical scientific value, it then has to be *shown* to coincide with (or at least converge to) real data-generating processes or structures – something seldom or never done!

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!

From a realistic point of view we really have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in economics – are not amenable to analysis in terms of probabilities, simply because in the real-world open systems that the social sciences – including economics – analyze, there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And so it cannot really be maintained that it even should be mandatory to treat observations and data – whether cross-section, time-series or panel data – as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette wheels. Data-generating processes – at least outside of nomological machines like dice and roulette wheels – are not self-evidently best modeled with probability measures.

**If we agree on this, we also have to admit that much of modern neoclassical economics lacks a sound justification.** I would even go further and argue that there really is no justifiable rationale at all for this belief that all economically relevant data can be adequately captured by a probability measure. In most real world contexts one has to *argue* and *justify* one’s case. And that is obviously something seldom or never done by practitioners of neoclassical economics.

As **David Salsburg** (2001:146) notes on probability theory:

> [W]e assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify [this] abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.

Like e.g. **John Maynard Keynes** (1921) and **Nicholas Georgescu-Roegen** (1971), Salsburg (2001:301f) is very critical of the way social scientists – including economists and econometricians – uncritically and without argument have come to simply assume that one can apply probability distributions from statistical theory to their own areas of research:

> Probability is a measure of sets in an abstract space of events. All the mathematical properties of probability can be derived from this definition. When we wish to apply probability to real life, we need to identify that abstract space of events for the particular problem at hand … It is not well established when statistical methods are used for observational studies … If we cannot identify the space of events that generate the probabilities being calculated, then one model is no more valid than another … As statistical models are used more and more for observational studies to assist in social decisions by government and advocacy groups, this fundamental failure to be able to derive probabilities without ambiguity will cast doubt on the usefulness of these methods.

Or as the great British mathematician **John Edensor Littlewood** writes in his *A Mathematician’s Miscellany*:

> Mathematics (by which I shall mean pure mathematics) has no grip on the real world; if probability is to deal with the real world it must contain elements outside mathematics; the meaning of ‘probability’ must relate to the real world, and there must be one or more ‘primitive’ propositions about the real world, from which we can then proceed deductively (i.e. mathematically). We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the ‘probability axiom’, and we will call it A for short. Although it has got to be true, A is by the nature of the case incapable of deductive proof, for the sufficient reason that it is about the real world …

> We will begin with the … school which I will call philosophical. This attacks directly the ‘real’ probability problem; what are the axiom A and the meaning of ‘probability’ to be, and how can we justify A? It will be instructive to consider the attempt called the ‘frequency theory’. It is natural to believe that if (with the natural reservations) an act like throwing a die is repeated n times the proportion of 6’s will, with certainty, tend to a limit, p say, as n goes to infinity … If we take this proposition as ‘A’ we can at least settle off-hand the other problem, of the meaning of probability; we define its measure for the event in question to be the number p. But for the rest this A takes us nowhere. Suppose we throw 1000 times and wish to know what to expect. Is 1000 large enough for the convergence to have got under way, and how far? A does not say. We have, then, to add to it something about the rate of convergence. Now an A cannot assert a certainty about a particular number n of throws, such as ‘the proportion of 6’s will certainly be within p ± e for large enough n (the largeness depending on e)’. It can only say ‘the proportion will lie between p ± e with at least such and such probability (depending on e and n*) whenever n > n*’. The vicious circle is apparent. We have not merely failed to justify a workable A; we have failed even to state one which would work if its truth were granted. It is generally agreed that the frequency theory won’t work. But whatever the theory it is clear that the vicious circle is very deep-seated: certainty being impossible, whatever A is made to state can only be in terms of ‘probability’.
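
Littlewood’s point about the rate of convergence can be illustrated by simulation (an illustrative sketch, not part of the quoted text; the number of throws and runs are arbitrary choices). The frequency axiom alone says nothing about how close the proportion of 6’s will be to 1/6 after 1000 throws; any answer we compute below is itself another probability statement, which is exactly the vicious circle:

```python
import random

# Illustrative sketch of Littlewood's vicious circle: after 1000 throws,
# how close is the observed proportion of 6's to p = 1/6? The frequency
# axiom alone is silent; the spread below is itself estimated as a
# probability (a fraction of simulated runs), restating the circle.
def proportion_of_sixes(n_throws, rng):
    return sum(rng.randint(1, 6) == 6 for _ in range(n_throws)) / n_throws

rng = random.Random(0)
proportions = [proportion_of_sixes(1000, rng) for _ in range(2000)]
p = 1 / 6
within = sum(abs(q - p) <= 0.02 for q in proportions) / len(proportions)
print(f"runs with |proportion - 1/6| <= 0.02: {within:.1%}")
```

Note that the answer printed is not a certainty about 1000 throws but yet another relative frequency, as Littlewood’s argument predicts.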

**This importantly also means that if you cannot show that data satisfies all the conditions of the probabilistic nomological machine, then the statistical inferences used – and a fortiori neoclassical economics – lack sound foundations!**

*References*

Georgescu-Roegen, Nicholas (1971), *The Entropy Law and the Economic Process*. Harvard University Press.

Keynes, John Maynard (1973 (1921)), *A Treatise on Probability*. Volume VIII of *The Collected Writings of John Maynard Keynes*, London: Macmillan.

Littlewood, John Edensor (1953), *A Mathematician’s Miscellany*, London: Methuen & Co.

Salsburg, David (2001), *The Lady Tasting Tea*. Henry Holt.

## 7 Comments


Let’s just take the example of medical research, where highly controlled experiments are conducted. It is very difficult to get agreement among medical professionals about just what has been shown. Many studies are not replicated with the same result, for instance. Different related studies seem to be in opposition on certain points, if not outright contradictory. And these are tightly controlled experiments on very limited issues, in comparison with studying the larger issues of an economy, for instance. Even in these limited studies, professionals say that a study “suggests” a probability rather than proves a causal connection (which, for some reason, works sometimes but not all the time wrt the same syndrome).

Even in medicine there is suspicion, sometimes found to be borne out by investigation, that a study was intentionally biased, or overstated, and contrary research was suppressed. And this is a tightly regulated field. What of fields that are not as tightly regulated and do not even allow controlled experiments that can be replicated? While the money stakes are high in medicine, which can explain at least occasional cheating, what about a field like economics, where the stakes in terms of money and power are enormous and far-reaching?

In this light, many policy formulations justified on the basis of economic theory make claims that go way beyond what is warranted, and subsequent experience then bears this out in policy failure.

Comment by Tom Hickey— 17 January, 2013 #

I couldn’t agree more. Wiener was similarly sceptical, in a famous passage from the beginning of his book Cybernetics, which I don’t have with me … the point was that for time-series analysis, you need at least 200 data points with no regime changes, and the validity of the specification of the variables, too (even more basic than model specification), unchanging, and that just doesn’t happen much. And don’t even get me started about variables that «proxy» for the more fundamental but unobserved variables of the model …

Comment by andrebourbaki— 7 March, 2013 #
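
The regime-change worry in the comment above can be sketched with a toy series (a hypothetical illustration; the break point, regime means, and noise level are arbitrary assumptions): a full-sample estimate pooled across a structural break describes neither regime.

```python
import random

# Toy illustration: a mean shift ("regime change") midway through a
# 200-point series. The full-sample mean describes neither regime.
def simulate_series(n=200, shift_at=100, mu1=0.0, mu2=3.0, seed=1):
    rng = random.Random(seed)
    return [(mu1 if t < shift_at else mu2) + rng.gauss(0, 1) for t in range(n)]

series = simulate_series()
full_mean = sum(series) / len(series)
mean_1 = sum(series[:100]) / 100   # mean under regime 1
mean_2 = sum(series[100:]) / 100   # mean under regime 2
print(f"full sample {full_mean:.2f}, regime 1 {mean_1:.2f}, regime 2 {mean_2:.2f}")
```

Any probabilistic inference from the pooled 200 points implicitly assumes one unchanging data-generating machine, which is precisely what the regime change denies.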

Very nice notes

Comment by Abdi— 10 July, 2013 #

Very interesting blog, I will be back again.

Comment by Lawyer Post— 24 September, 2013 #

Probability theory is also justified in terms of decision theory. But Littlewood’s approach would apply to that too, as it does to any attempt to justify by logic what could only be tested experimentally. The financial experiment, which supposed that one could act as if events were purely probabilistic, seems ample proof that non-probabilistic uncertainty matters.

Comment by Dave Marsay— 25 September, 2013 #

[…] Modern economics, neoclassical economics in particular, often relies on probability. Review this article for more information on the connection between probability and […]

Pingback by Do you think that probability should always be used when making economic decisions or forecasting? | Uni Essay Help— 15 August, 2015 #

I wonder why probability theory is being considered a part of the neo-classical framework. If probability theory has been misused, it has been misused most by the neo-classical school. Their sole focus on estimation (quantifying a pre-conceived theory) has destroyed econometrics in particular and probability theory in general. The neo-classical school totally fails to pass Popper’s criteria by limiting itself to estimation. No wonder economics is a dismal science! Fisher’s interpretation of the p-value already solved the confusions about probability 100 years back.

Comment by Niraj— 7 December, 2016 #