## The limits to probabilistic reasoning

19 March, 2013 at 17:40 | Posted in Statistics & Econometrics, Theory of Science & Methodology | Comments Off on The limits to probabilistic reasoning

Probabilistic reasoning in science – especially Bayesianism – reduces questions of rationality to questions of the internal consistency (coherence) of beliefs. But even granting this questionable reductionism, it is not self-evident that rational agents really have to be probabilistically consistent. There is no strong warrant for believing so. Rather, there is strong evidence that we run into serious problems if we let probabilistic reasoning become the dominant method for doing research in the social sciences on problems that involve risk and uncertainty.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any meaningful way, to represent an individual's beliefs in a single probability measure.

Say you have come to learn (from your own experience and tons of data) that the probability of becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience and no data), you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that, if you are rational, you have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1. That is, in this case – and based on symmetry – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities derived from information from symmetry-based probabilities derived from an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
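
The point can be sketched numerically. In the following sketch (with hypothetical figures of my own choosing, not from the text), two belief states are summarised as Beta distributions: one backed by a thousand observations, one a flat prior expressing pure ignorance. Both yield exactly the same point probability of 50%, which is why a single Bayesian probability number cannot separate them; the spread of the distribution is what differs, and it is something like this spread that Keynes's notion of evidential “weight” tries to capture.

```python
# Two belief states with the same point probability but very different
# evidential backing, each summarised as a Beta(a, b) distribution,
# where a + b grows with the amount of evidence behind the belief.
import math

def beta_mean(a, b):
    # mean of a Beta(a, b) distribution
    return a / (a + b)

def beta_sd(a, b):
    # standard deviation of a Beta(a, b) distribution
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return math.sqrt(var)

# 1,000 observations, half of them "unemployed": an evidence-based 50%
informed = (500, 500)
# a flat Beta(1, 1) prior: 50% from the principle of indifference alone
ignorant = (1, 1)

print(beta_mean(*informed), beta_sd(*informed))  # mean 0.5, narrow spread
print(beta_mean(*ignorant), beta_sd(*ignorant))  # mean 0.5, wide spread
```

A report of the point probability alone (0.5 in both cases) throws away precisely the distinction the text argues matters.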

I think this critique of Bayesianism is in accordance with the views of **Keynes**’ *A Treatise on Probability* (1921) and *General Theory* (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by probabilistically reasoning Bayesian economists.

In an interesting article on his blog, **John Kay** shows that these strictures on probabilistic-reductionist reasoning apply not only to everyday life and science, but also to the law:

> English law recognises two principal standards of proof. The criminal test is that a charge must be “beyond reasonable doubt”, while civil cases are decided on “the balance of probabilities”.
>
> The meaning of these terms would seem obvious to anyone trained in basic statistics. Scientists think in terms of confidence intervals – they are inclined to accept a hypothesis if the probability that it is true exceeds 95 per cent. “Beyond reasonable doubt” appears to be a claim that there is a high probability that the hypothesis – the defendant’s guilt – is true. Perhaps criminal conviction requires a higher standard than the scientific norm – 99 per cent or even 99.9 per cent confidence is required to throw you in jail. “On the balance of probabilities” must surely mean that the probability the claim is well founded exceeds 50 per cent.
>
> And yet a brief conversation with experienced lawyers establishes that they do not interpret the terms in these ways. One famous illustration supposes you are knocked down by a bus, which you did not see (that is why it knocked you down). Say Company A operates more than half the buses in the town. Absent other evidence, the probability that your injuries were caused by a bus belonging to Company A is more than one half. But no court would determine that Company A was liable on that basis.
>
> A court approaches the issue in a different way. You must tell a story about yourself and the bus. Legal reasoning uses a narrative rather than a probabilistic approach, and when the courts are faced with probabilistic reasoning the result is often a damaging muddle …
>
> When I have raised these issues with people with scientific training, they tend to reply that lawyers are mostly innumerate and with better education would learn to think in the same way as statisticians. Probabilistic reasoning has become the dominant method of structured thinking about problems involving risk and uncertainty – to such an extent that people who do not think this way are derided as incompetent and irrational …
>
> It is possible – common, even – to believe something is true without being confident in that belief. Or to be sure that, say, a housing bubble will burst without being able to attach a high probability to any specific event, such as “house prices will fall 20 per cent in the next year”. A court is concerned to establish the degree of confidence in a narrative, not to measure a probability in a model.
>
> Such narrative reasoning is the most effective means humans have developed of handling complex and ill-defined problems … Probabilistic thinking … often fails when we try to apply it to idiosyncratic events and open-ended problems. We cope with these situations by telling stories, and we base decisions on their persuasiveness. Not because we are stupid, but because experience has told us it is the best way to cope. That is why novels sell better than statistics texts.
