## Bayesianism — an unacceptable mode of scientific reasoning

26 February, 2016 at 16:14 | Posted in Theory of Science & Methodology

> A major, and notorious, problem with this approach, at least in the domain of science, concerns how to ascribe objective prior probabilities to hypotheses. What seems to be necessary is that we list all the possible hypotheses in some domain and distribute probabilities among them, perhaps ascribing the same probability to each by employing the principle of indifference. But where is such a list to come from? It might well be thought that the number of possible hypotheses in any domain is infinite, which would yield zero for the probability of each, and the Bayesian game cannot get started. All theories have zero probability and Popper wins the day. How is some finite list of hypotheses, enabling some objective distribution of nonzero prior probabilities, to be arrived at? My own view is that this problem is insuperable, and I also get the impression from the current literature that most Bayesians are themselves coming around to this point of view.

Chalmers is absolutely right here in his critique of ‘objective’ Bayesianism, but I think it could actually be extended to also encompass its ‘subjective’ variety.

A classic example — borrowed from Bertrand Russell — may perhaps be allowed to illustrate the main point of the critique:

Assume you’re a Bayesian turkey and hold a nonzero prior belief in the hypothesis H that “people are nice vegetarians who do not eat turkeys.” For every day you survive, you update your belief according to Bayes’ theorem

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Since there exist hypotheses other than H under which being eaten is possible, P(e) is less than 1, and hence P(H|e) is greater than P(H). Every day you survive thus increases your degree of belief that you will not be eaten. This is perfectly rational according to the Bayesian definition of rationality. Unfortunately, for every day that goes by, the traditional Christmas dinner also gets closer and closer …
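The turkey’s daily update can be sketched numerically. The prior P(H) and the likelihood of surviving under the alternative hypothesis are assumed here purely for illustration:

```python
# A sketch of the Bayesian turkey's daily update. Assumed numbers:
# prior P(H) = 0.5, P(e|H) = 1 (if people are nice vegetarians, the
# turkey certainly survives), and P(e|not-H) = 0.9 (survival on any
# given day is likely even if H is false).
p_h = 0.5              # prior belief in H
p_e_given_h = 1.0      # probability of "not being eaten" given H
p_e_given_not_h = 0.9  # assumed probability of surviving given not-H

for day in range(100):
    # law of total probability, then Bayes' theorem
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h = p_e_given_h * p_h / p_e

print(p_h)  # after 100 surviving days, belief in H is very close to 1
```

Each survived day multiplies the odds on H by 10/9, so the turkey’s confidence climbs inexorably toward certainty, right up until Christmas.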

The nodal point here is — of course — that although Bayes’ theorem is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions.

Bayesian probability calculus is far from the automatic inference engine that its protagonists maintain it is. Where do the priors come from? Wouldn’t it be better in science, when we are uncertain, to do some scientific experimentation and observation rather than to start calculating on the basis of people’s often vague and subjective personal beliefs? Is it, from an epistemological point of view, really credible to think that the Bayesian probability calculus makes it possible to somehow fully assess people’s subjective beliefs? And are — as Bayesians maintain — all scientific controversies and disagreements really explicable in terms of differences in prior probabilities? I’ll be dipped!

## 5 Comments


Good for Chalmers, who is a proper Popperian. Bayesianism only works in closed worlds where one might imagine exhausting all possible hypotheses. Science is open and requires testing to find flaws in conjectures. Further, the only truly objective priors (not what goes by that name these days) need to be based on frequencies, and except in very special cases, frequencies of hypotheses being true are unknown.

Comment by Mayo — 26 February, 2016

Isn’t the response to Chalmers’ technical point that in the case of infinitely many hypotheses, the prior is a probability density rather than a distribution? The probability mass of any single hypothesis may be zero, but the density could be non-zero. This criticism seems empty.

Comment by Tom Dietterich — 26 February, 2016

I agree, it seems like a modern day Zeno’s paradox. Integral calculus solves the technical aspect of the criticism.

Comment by pontus — 27 February, 2016
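The density point made in the two comments above can be sketched with an assumed coin-bias example: the hypothesis space is the continuum of bias values θ in [0, 1], each single value has zero probability mass, yet the prior density is nonzero everywhere and Bayesian updating proceeds without trouble:

```python
import numpy as np

# Assumed illustration: hypotheses are coin biases theta in [0, 1],
# represented on a fine grid. Each point hypothesis has zero mass,
# but the density is nonzero and updates by Bayes' theorem.
theta = np.linspace(0.0, 1.0, 1001)   # grid over the hypothesis space
prior = np.ones_like(theta)           # uniform density on [0, 1]

heads, tails = 7, 3                   # assumed data: 10 coin flips
likelihood = theta**heads * (1 - theta)**tails
posterior = prior * likelihood
# normalise so the density integrates to 1 over the grid
posterior /= posterior.sum() * (theta[1] - theta[0])

print(theta[np.argmax(posterior)])    # posterior mode ~ 0.7
```

The posterior concentrates around θ = 0.7 even though P(θ = 0.7) is exactly zero, which is the sense in which integral calculus answers the zero-probability version of the objection.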

Chalmers’ problem is not how you assign probabilities (or densities) to some given list (or space) of hypotheses, but where the set of hypotheses comes from in the first place. Given two definite hypotheses one can often determine a likelihood ratio, and this can be fairly non-controversial. As I understand it, in science one should only ever claim that a hypothesis is much more likely than all the others that have been thought of. The notion of probability seems redundant and potentially misleading.

I assume that most ‘accepted’ empirical theories are more likely than the alternatives that have been studied. But in what sense does this make them ‘probable’? If they are, presumably we should stop trying to develop the science further.

Comment by Dave Marsay — 28 February, 2016
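The likelihood-ratio comparison in the comment above can be sketched with assumed numbers: two definite hypotheses about a coin’s heads-probability, compared on the same data, without any prior probabilities:

```python
from math import comb

# Assumed illustration: H1 says the coin's heads-probability is 0.5,
# H2 says it is 0.8; the observed data are 8 heads in 10 flips.
def binom_lik(p, heads, n):
    """Binomial likelihood of `heads` successes in `n` flips at bias p."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

lr = binom_lik(0.8, 8, 10) / binom_lik(0.5, 8, 10)
print(round(lr, 1))  # prints 6.9: H2 fits the data ~6.9x better than H1
```

The ratio says only that H2 fits these data much better than H1; it assigns no probability to either hypothesis and says nothing about hypotheses nobody has thought of, which is exactly the comment’s point.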

I agree with the issue of where the set of hypotheses should come from (though I believe this is not unique to Bayesianism; it applies to frequentism too — the turkey’s frequentist assessment of its probability of being slaughtered on Christmas day is still pretty damn low). However, Chalmers explicitly writes “It might well be thought that the number of possible hypotheses in any domain is infinite, which would yield zero for the probability of each and the Bayesian game cannot get started. All theories have zero probability and Popper wins the day.”


I don’t think that conclusion is valid.

Comment by pontus — 29 February, 2016