Probability and economics

21 Sep, 2021 at 18:44 | Posted in Economics | 2 Comments

Modern mainstream economics relies to a large degree on the notion of probability.

To be amenable at all to applied economic analysis, economic observations allegedly have to be conceived as random events analyzable within a probabilistic framework.

But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an a priori notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, neoclassical economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches (if at all) to the world only via intellectually constructed models, and so is a fact only of a probability-generating (nomological) machine, a well-constructed experimental arrangement, or a ‘chance set-up.’

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’

To be able to talk about probabilities at all, you have to specify a model. In statistics, any process you observe or measure is referred to as an experiment (rolling a die), and the results obtained are the outcomes or events of the experiment (the number of points rolled, e.g. 3 or 5). If there is no chance set-up or model that generates the probabilistic outcomes or events, then strictly speaking there is no event at all.
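To make the idea of a ‘chance set-up’ concrete, here is a minimal sketch of the die-rolling experiment just described (the function name `die_experiment` is mine, purely for illustration). The point is that the probability model – uniform over the six faces – is specified up front, and only then can observed relative frequencies be compared with it:

```python
import random
from collections import Counter

def die_experiment(n_rolls, seed=42):
    """Simulate a well-specified chance set-up: a fair six-sided die.

    The probability model (uniform over {1, ..., 6}) is specified in
    advance; the 'events' are the outcomes of repeated trials.
    Returns the relative frequency of each face.
    """
    rng = random.Random(seed)
    outcomes = [rng.randint(1, 6) for _ in range(n_rolls)]
    counts = Counter(outcomes)
    return {face: counts[face] / n_rolls for face in range(1, 7)}

# Under the specified model, each relative frequency should converge
# to 1/6 as the number of rolls grows.
for face, freq in sorted(die_experiment(60_000).items()):
    print(f"face {face}: {freq:.3f}  (model says {1/6:.3f})")
```

Note that the comparison with 1/6 only makes sense because the model was given first – which is precisely the point being argued: without a specified chance set-up there is nothing for the frequencies to converge to.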

Probability is a relational element. It must always come with a specification of the model from which it is calculated. And to be of any empirical scientific value, it then has to be shown to coincide with (or at least converge to) real data-generating processes or structures – something seldom or never done.

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions.

We simply have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in economics – are not amenable to analysis in terms of probabilities, simply because in real-world open systems there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot simply be assumed always to be adequately captured by a probability measure. And so it cannot be maintained that it should even be mandatory to treat observations and data – whether cross-section, time-series or panel data – as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette wheels. Data-generating processes – at least outside of nomological machines like dice and roulette wheels – are not self-evidently best modeled with probability measures.

If we agree on this, we also have to admit that much of modern mainstream economics lacks sound foundations.

When economists and econometricians – often uncritically and without argument – simply assume that they can apply probability distributions from statistical theory to their own area of research, they are really skating on thin ice.

Importantly, this also means that if you cannot show that the data satisfy all the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack sound foundations.

2 Comments

  1. 》 Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’

    Do you require a free probability to establish that free lunches have zero probability?

    Haven’t scientists like Guth called the universe itself a free lunch, as well as dark energy, which grows faster than space?

    Is there a cynical allusion to Milton Friedman in the appeal to no-free-lunch theorems?

    Is the no-free-lunch argument more about emotions than science?

  2. What is the rationale for the probabilistic analysis of empirical data in applied economics and other sciences? This is an important question.

    Instead of suggesting any constructive ideas, Prof. Syll argues:
    “You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions.”
    However, philosophical concepts such as “socio-economic structures” and “data generating processes” are superfluous and perhaps even meaningless in this context.
    The rationale for the probabilistic analysis of empirical data does not involve any such concepts borrowed from Marxism and critical realist philosophy.
    ———
    The following are “some really good arguments” which Prof. Syll overlooks or fails to appreciate:

    “What we want are theories that, without involving us in direct logical contradictions, state that the observations will as a rule cluster in a limited subset of the set of all conceivable observations, while it is still consistent with the theory that an observation falls outside this subset ‘now and then’.
    As far as is known, the scheme of probability and random variables is, at least for the time being, the only scheme suitable for formulating such theories.”
    – Haavelmo 1944: The Probability Approach in Econometrics, p. 40
    https://booksc.org/book/29883315/d9406e

    “Purely empirical investigations have taught us that certain things in the real world happen only very rarely, they are ‘miracles’ while others are ‘usual events’. The probability calculus has developed out of a desire to have a formal logical apparatus for dealing with such phenomena of real life.
    The question is not whether probabilities exist or not, but whether – if we proceed as if they existed – we are able to make statements about real phenomena that are ‘correct for practical purposes’.”
    – Haavelmo 1944, p. 43

    “Haavelmo argued that if a theory were treated as pregnable – rather than as an unchallengeable truth – it could be treated as an hypothesis about a probability distribution, and the non-experimentally obtained data could be considered a sample from this distribution.
    This formalisation of the problem allowed applied economists to be more flexible in their attitude to theory, since any chosen hypothesis might be incorrect and an alternative model correct.
    By laying out a framework in which decisions could be made about which theories are supported by data and which are not, Haavelmo provided an adequate experimental method for economics.”
    – Mary Morgan – The History of Econometric Ideas 1990, page 258
    https://b-ok.cc/book/915327/d834da

