## The limits of probabilistic reasoning

23 Feb, 2019 at 08:45 | Posted in Statistics & Econometrics | 2 Comments

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find statistics books that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

The standard view in statistics — and the axiomatic probability theory underlying it — is to a large extent based on the rather simplistic idea that more is better. But as Keynes argued, more of the same is not what is important when making inductive inferences. It is rather a question of “more but different.”

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) does not make w irrelevant. Knowing that the probability is unchanged when w is added gives p(x|y & w) a greater evidential weight than p(x|y). Running 10 replicative experiments does not make you as sure of your inductions as running 10 000 varied experiments, even if the probability values happen to be the same.
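The point can be illustrated with a small sketch. The normal-approximation confidence interval used here is a modern stand-in for Keynes’s notion of weight, not anything found in the Treatise: two bodies of evidence can yield the same probability value while one supports it far more strongly than the other.

```python
import math

def estimate_with_weight(successes, trials):
    """Point estimate of a probability plus a rough 95% interval.

    The point estimate uses only the ratio; the interval width
    reflects the amount of evidence behind it -- a crude proxy
    for Keynes's evidential weight.
    """
    p = successes / trials
    se = math.sqrt(p * (1 - p) / trials)  # normal-approximation standard error
    return p, (p - 1.96 * se, p + 1.96 * se)

# Same probability value, very different weight of evidence:
p_small, ci_small = estimate_with_weight(7, 10)
p_large, ci_large = estimate_with_weight(7_000, 10_000)

print(p_small, p_large)            # both 0.7
print(ci_small[1] - ci_small[0])   # wide interval: weak evidence
print(ci_large[1] - ci_large[0])   # narrow interval: strong evidence
```

Both samples give the estimate 0.7, but the interval from 10 trials is roughly thirty times wider than the one from 10 000 trials: the probability is the same, the weight is not.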

According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes instead held that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled in modern social science. And often we “simply do not know.”

Science, according to Keynes, should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history on to the future. Consequently, we cannot presuppose that what has worked before will continue to do so in the future. That statistical models can get hold of correlations between variables is not enough. If they cannot get at the causal structure that generated the data, they are not really ‘identified.’

How strange that economists and other social scientists, as a rule, do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical probability. In the quest for quantities one turns a blind eye to qualities and looks the other way — but Keynes’s ideas keep creeping out from under the statistics carpet.

The validity of the inferential models we as scientists use ultimately depends on the assumptions we make about the entities to which we apply them. Applying the traditional calculus of probability presupposes far-reaching ontological presuppositions. If we are prepared to assume that societies and economies are like urns filled with coloured balls in fixed proportions, then fine. But — really — who could earnestly believe in such an utterly ridiculous analogy?

In a real world full of ‘unknown unknowns’ and genuine non-ergodic uncertainty, urns are of little avail.
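What non-ergodicity means in practice can be seen in a standard multiplicative-gamble example (the multipliers below are illustrative numbers, not anything from Keynes): the average over many parallel worlds can look favourable while the average over time, the one any single history actually experiences, is ruinous.

```python
import math

# Per-round wealth multipliers for a 50/50 gamble (hypothetical values)
up, down = 1.5, 0.6

# Ensemble average: expected growth factor per round, averaged
# across many parallel realisations
ensemble_growth = 0.5 * up + 0.5 * down  # 1.05 -> looks favourable

# Time average: growth factor experienced along a single history,
# i.e. the geometric mean of the multipliers
time_growth = math.sqrt(up * down)       # about 0.949 -> shrinks over time

print(ensemble_growth)  # greater than 1
print(time_growth)      # less than 1
```

Because the two averages disagree, no amount of ensemble-style urn reasoning recovers what happens along the single path through time that a person or an economy actually lives.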

> Human decisions affecting the future, whether personal or political or economic, cannot depend on strict mathematical expectation, since the basis for making such calculations does not exist; and that it is our innate urge to activity which makes the wheels go round, our rational selves choosing between the alternatives as best we are able, calculating where we can, but often falling back for our motive on whim or sentiment or chance.

J M Keynes

Added: Tom Hickey — as always — has some interesting comments on this post here.