Beyond Bayesian probabilism

13 Jan, 2022 at 23:44 | Posted in Theory of Science & Methodology | 7 Comments

Although Bayes’ theorem is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions. Bayesian statistics is one thing, and Bayesian epistemology something else. Science is not reducible to betting, and scientific inference is not a branch of probability theory. It always transcends mathematics. The unfulfilled dream of constructing an inductive logic of probabilism — the Bayesian Holy Grail — will always remain unfulfilled.

Bayesian probability calculus is far from the automatic inference engine that its protagonists maintain it is. That probabilities may work for expressing uncertainty when we pick balls from an urn does not automatically make them relevant for making inferences in science. Where do the priors come from? Wouldn't it be better in science, when we are uncertain, to do some scientific experimentation and observation rather than start making calculations based on often vague and subjective personal beliefs? People have a lot of beliefs, and when those beliefs are plainly wrong, we should not do any calculations on them at all. We simply reject them. Is it, from an epistemological point of view, really credible to think that the Bayesian probability calculus makes it possible to somehow fully assess people's subjective beliefs? And are all scientific controversies and disagreements really explicable, as many Bayesians maintain, in terms of differences in prior probabilities? I strongly doubt it.
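To make the worry about priors concrete, here is a minimal sketch (the data and the two priors are invented for illustration): two analysts update on exactly the same evidence and yet, starting from different personal priors, report quite different conclusions.

```python
# Conjugate Beta-binomial updating: a Beta(a, b) prior on a success
# probability p, then the same observed data for both analysts.
successes, failures = 7, 3  # illustrative data: 7 successes in 10 trials

def posterior_mean(a, b):
    # Beta(a, b) prior + binomial data -> Beta(a + successes, b + failures),
    # whose mean is (a + successes) / (a + b + successes + failures).
    return (a + successes) / (a + b + successes + failures)

# Same data, different subjective priors, different posteriors:
optimist = posterior_mean(8, 2)  # prior mean 0.8 -> posterior mean 0.75
skeptic = posterior_mean(2, 8)   # prior mean 0.2 -> posterior mean 0.45
print(optimist, skeptic)
```

With only ten observations, the prior dominates the answer, and nothing in the calculus itself says which prior was the right one to start from; that is precisely the point at issue.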

I want to know what my personal probability ought to be, partly because I want to behave sensibly and much more importantly because I am involved in the writing of a report which wants to be generally convincing. I come to the conclusion that my personal probability is of little interest to me and of no interest whatever to anyone else unless it is based on serious and so far as feasible explicit information. For example, how often have very broadly comparable laboratory studies been misleading as regards human health? How distant are the laboratory studies from a direct process affecting health? The issue is not to elicit how much weight I actually put on such considerations but how much I ought to put. Now of course in the personalistic [Bayesian] approach having (good) information is better than having none but the point is that in my view the personalistic probability is virtually worthless for reasoned discussion unless it is based on information, often directly or indirectly of a broadly frequentist kind. The personalistic approach as usually presented is in danger of putting the cart before the horse.

David Cox

[Added 21.15: Those interested in these questions, do read Sander Greenland’s insightful comment.]


  1. Science is about causal epistemology.
    The idea that beliefs can be characterized probabilistically is simply a mishandling of certain idiosyncrasies of the English language.

    • Not only science but applied algorithmic statistics depends on causal epistemology [1].
      Still, there are some beliefs that can be quite usefully encoded or modeled probabilistically, e.g., rational bets in games of chance, and the many physical engineering systems that behave like those games in important respects. I think our main point is not to overextend that sort of probabilism to all of inference, which is what some statistical philosophy seems to strive for in a sort of probabilistic imperialism of thought.
      [1] Greenland, S. (2022). The causal foundations of applied probability and statistics. Ch. 31 in: Dechter, R., Halpern, J., and Geffner, H., eds. Probabilistic and Causal Inference: The Works of Judea Pearl. ACM Books, in press.

      • The crowning achievement of probability theory is the notion of independence. Statistics uses it to operationalize ‘association’ as the lack of independence.

        In scientific discovery, we see events that tend to change together, i.e., covariation. We reason that both causation and covariation manifest themselves as an association.

        And here is the causal epistemology in a nutshell: we see association, we rule out covariation, we declare causation. Covariation is ruled out by blocking all influences on the putative cause, in a real or a thought experiment.
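The nutshell above can be sketched in a toy simulation (the variables are entirely hypothetical): a common influence z makes x and y change together even though x has no causal effect on y, and randomizing x (blocking all influences on the putative cause) makes the association disappear.

```python
import random

random.seed(0)
n = 20_000

def correlation(xs, ys):
    # Plain Pearson correlation, computed from scratch.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

# Observational world: a common influence z drives both x and y;
# x has no causal effect on y, yet x and y are associated.
z = [random.gauss(0, 1) for _ in range(n)]
x_obs = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

# Experimental world: x is set by randomization, blocking all
# influences on the putative cause; the association vanishes.
x_rand = [random.gauss(0, 1) for _ in range(n)]

print(correlation(x_obs, y))   # substantial association (about 0.5)
print(correlation(x_rand, y))  # near zero once x is randomized
```

The first correlation reflects covariation through z, not causation; only the experiment, real or thought, licenses the causal reading.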

  2. Superb quote from Cox; just to be clear though I’d add “explicit” before “information” to get
    “personalistic probability is virtually worthless for reasoned discussion unless it is based on explicit information, often directly or indirectly of a broadly frequentist kind.”
    Why? So-called “objective Bayes” methods include utterly personalistic judgements, including spikes of 50% probability on the null based on no explicit information derived from the actual topic at hand. Sometimes we get hand-waving appeals to “indifference” when in reality such spikes represent a bias favoring the null over similarly plausible alternatives. If there are data to justify this bias, they need to be shown or at least cited.
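The influence of such a spike can be put in numbers with a toy calculation (the Bayes factor of 1/3 is made up for illustration): given the same evidence, the posterior probability of the null is driven by the size of the prior spike, which is itself a personalistic judgement.

```python
def posterior_prob_null(prior_null, bf_null):
    # Posterior odds = prior odds * Bayes factor (null vs. alternative),
    # then convert odds back to a probability.
    odds = (prior_null / (1.0 - prior_null)) * bf_null
    return odds / (1.0 + odds)

bf = 1.0 / 3.0  # illustrative evidence favoring the alternative 3:1
print(posterior_prob_null(0.5, bf))  # 0.25 with the conventional 50% spike
print(posterior_prob_null(0.1, bf))  # about 0.036 with a 10% spike
```

The same data leave the null looking seven times more probable under one "objective" spike than under the other; the choice of spike did the work, not the evidence.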

    I’d also add that the excesses of Bayesian philosophy should not be used to impugn Bayesian statistical methods, which, when properly and explicitly informed, can supply some of the best-performing algorithmic procedures under both Bayesian and frequentist criteria.

    • Sander, I do write “Bayesian statistics is one thing, and Bayesian epistemology something else”, so here I think we are in agreement. When it comes to applying Bayesian statistics to epistemological and inferential questions beyond the formalism of ‘urn models’ and into real-world genuine uncertainties and attempts to reduce scientific confirmation and explanation to ‘predictive success’, I — as I make clear in my earlier post on Miller’s critique of Bayesianism — am not at all convinced the Bayesian approach takes us very far.

      • We do agree. My emphasis on explication of distinctions arises from seeing writers fuse algorithms to epistemologies, in the naive belief that using an algorithm for data reduction (whether one labeled “Bayesian” or “frequentist” or…) commits one to an interpretation or even a philosophy with the same label. I think it fair to say that the bulk of 20th-century statistics literature operated on this false belief, and users followed suit. Things have improved little since then (e.g., [1] discusses an example of the problem).

        I think this confusion of algorithms with philosophy was in part fueled by an obsession with mathematics over empirical activity, and a desire to render all thought into programmable algorithmic form – in modern terms, a compulsion to establish a form of digital AI as a master program for science. You have written much on the damage that did to economics (via econometrics), and I would add that it has been as damaging in other “soft sciences” including medicine.

        In my view we should strive to avoid commitments to algorithms, interpretations, and philosophies, and instead have on hand a diverse and even mutually inconsistent variety of each, selecting among them as seems best to suit the needs of the real-world questions at hand. This multi-perspective view may lead to use of several different approaches and interpretations; apparent inconsistencies between their conclusions suggest there are gaps in our understanding, not that one view is “correct” and the other “incorrect”.

        [1] Greenland, S., and Hofman, A. (2019). Multiple comparisons controversies are about context and costs, not frequentism vs. Bayesianism. European Journal of Epidemiology, 34(9), 801-808. DOI 10.1007/s10654-019-00552-z (open access).
