## Bayesianism — confusing degree of confirmation with probability

18 March, 2016 at 09:59 | Posted in Theory of Science & Methodology | 5 Comments

If we identify degree of corroboration or confirmation with probability, we should be forced to adopt a number of highly paradoxical views, among them the following clearly self-contradictory assertion:

“There are cases in which x is strongly supported by z and y is strongly undermined by z while, at the same time, x is confirmed by z to a lesser degree than is y.”

Consider the next throw with a homogeneous die. Let x be the statement ‘six will turn up’; let y be its negation, that is to say, let y = ¬x; and let z be the information ‘an even number will turn up’.

We have the following absolute probabilities:

p(x) = 1/6; p(y) = 5/6; p(z) = 1/2.

Moreover, we have the following relative probabilities:

p(x, z) = 1/3; p(y, z) = 2/3.

We see that x is supported by the information z, for z raises the probability of x from 1/6 to 2/6 = 1/3. We also see that y is undermined by z, for z lowers the probability of y by the same amount from 5/6 to 4/6 = 2/3. Nevertheless, we have p(x, z) < p(y, z) …
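Popper's die example can be checked mechanically. The sketch below is my illustration, not part of the original text: it enumerates the six equiprobable faces and reproduces the probabilities above, showing that z raises the probability of x and lowers that of y, while still leaving p(x, z) < p(y, z).

```python
from fractions import Fraction

# Sample space: the six faces of a homogeneous (fair) die.
outcomes = range(1, 7)

def p(event):
    """Absolute probability of an event (a set of faces) under the uniform measure."""
    return Fraction(len([o for o in outcomes if o in event]), 6)

def p_given(event, given):
    """Relative (conditional) probability p(event, given) = p(event | given)."""
    both = [o for o in outcomes if o in event and o in given]
    return Fraction(len(both), len([o for o in outcomes if o in given]))

x = {6}              # 'six will turn up'
y = {1, 2, 3, 4, 5}  # the negation of x
z = {2, 4, 6}        # 'an even number will turn up'

print(p(x), p(y), p(z))              # 1/6 5/6 1/2
print(p_given(x, z), p_given(y, z))  # 1/3 2/3

# z *supports* x (raises its probability from 1/6 to 1/3) and *undermines* y
# (lowers it from 5/6 to 2/3), yet p(x, z) < p(y, z): degree of support and
# posterior probability come apart, which is Popper's point.
```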

A report of the result of testing a theory can be summed up by an appraisal. This can take the form of assigning some degree of corroboration to the theory. But it can never take the form of assigning to it a degree of probability; for

the probability of a statement (given some test statements) simply does not express an appraisal of the severity of the tests a theory has passed, or of the manner in which it has passed these tests. The main reason for this is that the content of a theory — which is the same as its improbability — determines its testability and its corroborability.

Although Bayesians think otherwise, to me there’s nothing magical about Bayes’ theorem. The important thing in science is for you to have strong evidence. If your evidence is strong, then applying Bayesian probability calculus is rather unproblematic. Otherwise — garbage in, garbage out. Applying Bayesian probability calculus to subjective beliefs founded on weak evidence is not a recipe for scientific akribi and progress.

Neoclassical economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem.
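That neoclassical recipe — a subjective prior, updated by Bayes' theorem, feeding an expected-utility maximization — can be sketched in a few lines. Everything below (the two-state 'boom'/'bust' example, the signal model, the utility numbers) is a hypothetical illustration of mine, not anything from the literature being discussed.

```python
def bayes_update(prior, likelihood, observation):
    """Posterior(s) proportional to likelihood(observation | s) * prior(s)."""
    unnorm = {s: likelihood(observation, s) * p for s, p in prior.items()}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

def best_action(posterior, actions, utility):
    """The Bayesian agent picks the action maximizing expected utility."""
    return max(actions,
               key=lambda a: sum(p * utility(a, s) for s, p in posterior.items()))

def likelihood(obs, state):
    # P(observation | state): a made-up signal model.
    table = {("good_news", "boom"): 0.8, ("good_news", "bust"): 0.3,
             ("bad_news", "boom"): 0.2, ("bad_news", "bust"): 0.7}
    return table[(obs, state)]

def utility(action, state):
    # Made-up payoffs for two actions in two states.
    return {("invest", "boom"): 2.0, ("invest", "bust"): -1.0,
            ("hold", "boom"): 0.0, ("hold", "bust"): 0.0}[(action, state)]

prior = {"boom": 0.5, "bust": 0.5}
posterior = bayes_update(prior, likelihood, "good_news")
print(posterior)  # boom: 8/11 ≈ 0.727, bust: 3/11 ≈ 0.273
print(best_action(posterior, ["invest", "hold"], utility))  # invest
```

Note how mechanical the recipe is: the whole question raised in the post is whether the prior (here an arbitrary 50/50) is ever grounded in enough information to make this machinery meaningful.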

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing repeatedly over the years, there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

So why, then, are so many scientists nowadays so fond of Bayesianism? I guess one strong reason is that Bayes’ theorem gives them a seemingly fast, simple and rigorous answer to their problems and hypotheses. But, as Popper already showed back in the 1950s, the Bayesian probability (likelihood) version of confirmation theory is “absurd on both formal and intuitive grounds: it leads to self-contradiction.”

## 5 Comments


As Popper points out, it would be an error to confuse degree of confirmation with probability. I have also come across some people (not just economists) who appear to confuse the two. But I have yet to meet a mathematician or statistician, Bayesian or otherwise, who confuses the two.

Many of us have been told that ‘we’ have a certain probability of having a medical problem following some tests. Mathematically this is nonsense, and there are much better approaches available. But I am not clear that it would be practical to change the approach. On the other hand, in economics I do think that we need to change. To me the problem is not that Bayesianism is ‘wrong’, but that it is sometimes misapplied, with serious consequences. For medical tests, I find that a probability is often technically wrong but practically informative, and hence much better than nothing! The key is to be able to tell when probabilities inform and when they mislead.

Comment by Dave Marsay — 18 March, 2016

Prof. Syll proposes that “in face of genuine uncertainty” we should admit that we “simply do not know” or that we feel ambiguous and undecided.

Unfortunately, this lacks what Prof. Syll calls “akribi”.

(For the benefit of readers who like myself don’t know Swedish, according to several sources, “akribi” is a very nice Swedish word which roughly means “precision” or “accuracy” or “meticulousness”).

Prof. Syll’s proposal lacks akribi because it neglects the “well-established linguistic usage” regarding “estimates”, which are “a third type of probability judgment”. These are described in Frank Knight’s great book “Risk, Uncertainty and Profit”, published in 1921.

“The business man himself not merely forms the best estimate he can of the outcome of his actions, but he is likely also to estimate the probability that his estimate is correct. The “degree” of certainty or of confidence felt in the conclusion after it is reached cannot be ignored, for it is of the greatest practical significance. The action which follows upon an opinion depends as much upon the amount of confidence in that opinion as it does upon the favorableness of the opinion itself. The ultimate logic, or psychology, of these deliberations is obscure, a part of the scientifically unfathomable mystery of life and mind. We must simply fall back upon a “capacity” in the intelligent animal to form more or less correct judgments about things, an intuitive sense of values. We are so built that what seems to us reasonable is likely to be confirmed by experience, or we could not live in the world at all.”

…

“an estimate has the same form as a probability judgment; it is a ratio, expressed by a proper fraction. But in fact it appears to be meaningless and fatally misleading to speak of the probability, in an objective sense, that a judgment is correct.”

http://www.econlib.org/library/Knight/knRUP6.html#Pt.III,Ch.VII

NB. The last sentence quoted here does NOT say that it is meaningless to speak of the probability of estimates. It merely says that this third type of probability statement is not objective. Isn’t this what Bayesian prior probabilities are all about?

Comment by Kingsley Lewis — 19 March, 2016

“There are cases in which x is strongly supported by z and y is strongly undermined by z while, at the same time, x is confirmed by z to a lesser degree than is y.”

The problem here lies in the vernacular, not in the math. If I originally think that the odds are 5:1 against the die coming up a 6, and I learn that it has come up 2, 4, or 6, so I now think that the odds are 3:1 against the die having come up a 6, how has the die not coming up 6 been confirmed more than the die coming up 6?

—-

Why has Bayesianism made a comeback? I dunno. Perhaps because of Cox’s theorem, perhaps because of the ad hoc nature of non-Bayesian statistics, perhaps because of the success of Bayesianism in machine learning.

Comment by Min — 19 March, 2016

Oops! OC, that should be 2:1, not 3:1

Comment by Min — 20 March, 2016

First, I think it is worth presenting the complete version of the Popper quote in the last paragraph:

“Thus we have proved that the identification of degree of corroboration or confirmation with probability (and even with likelihood) is absurd on both formal and intuitive grounds: it leads to self-contradiction.”

So his claim (i) concerns both the Bayesian and the likelihood versions of confirmation theory, and (ii) is made from a falsificationist standpoint. This claim is, of course, a bit dated (it was made more than 60 years ago). It is a bit unfair to present such an old argument without accounting for recent and modern discussions of the Bayesian approach to testing hypotheses and summarizing evidence (one could also present the pre-Socratic argument that the Earth is flat and stop there). For this, I recommend the following reference and discussion:

http://projecteuclid.org/euclid.ss/1089808272

http://andrewgelman.com/2004/10/14/bayes_and_poppe/

Comment by Alonso Quijano — 7 June, 2016