## Bayesianism — a dangerous religion that harms science

22 Jul, 2014 at 19:57 | Posted in Theory of Science & Methodology

One of my favourite bloggers — Noah Smith — has a nice post up today on Bayesianism:

Consider Proposition H: “God is watching out for me, and has a special purpose for me and me alone. Therefore, God will not let me die. No matter how dangerous a threat seems, it cannot possibly kill me, because God is looking out for me – and only me – at all times.”

Suppose that you believe that there is a nonzero probability that H is true. And suppose you are a Bayesian – you update your beliefs according to Bayes’ Rule. As you survive longer and longer – as more and more threats fail to kill you – your belief about the probability that H is true must increase and increase. It’s just mechanical application of Bayes’ Rule:

P(H|E) = (P(E|H)P(H))/P(E)

Here, E is “not being killed,” P(E|H)=1, and P(H) is assumed not to be zero. P(E) is less than 1, since under a number of alternative hypotheses you might get killed (if you have a philosophical problem with this due to the fact that anyone who observes any evidence must not be dead, just slightly tweak H so that it’s possible to receive a “mortal wound”).

So P(H|E) is greater than P(H) – every moment that you fail to die increases your subjective probability that you are an invincible superman, the chosen of God. This is totally and completely rational, at least by the Bayesian definition of rationality.
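Smith's mechanical update can be sketched numerically. The prior and the survival probability under the alternative hypotheses below are illustrative assumptions, not figures from his post:

```python
# Noah Smith's Proposition H example: a Bayesian agent updates its belief
# in H ("God protects me, so I cannot die") each time it survives a day.
# The numbers are hypothetical, chosen only to show the direction of the update.

def bayes_update(prior, p_e_given_h=1.0, p_e_given_not_h=0.99):
    """One application of Bayes' Rule: P(H|E) = P(E|H)P(H) / P(E),
    where P(E) = P(E|H)P(H) + P(E|not H)P(not H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

belief = 1e-6  # a tiny but nonzero prior that H is true
for day in range(100_000):  # each survived day is one more observation E
    belief = bayes_update(belief)

print(belief)  # the posterior is strictly greater than the prior
```

Because P(E|H) = 1 while P(E) < 1, each update multiplies the odds of H by a factor greater than one, so any nonzero prior is eventually driven toward certainty, exactly the mechanical behaviour Smith describes.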

The nodal point here is — of course — that although Bayes’ Rule is *mathematically* unquestionable, that doesn’t qualify it as indisputably applicable to *scientific* questions. As another of my favourite bloggers — statistician Andrew Gelman — puts it:

The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings … Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.

Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statistics to be ignorant of experimental design and analysis of variance, instead becoming experts on the convergence of the Gibbs sampler. In the short-term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill in the gap …

Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence …

As Brad Efron wrote in 1986, Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16 …

## 9 Comments


[…] she might not have known) How long was the average Roman foot, and what was their shoe size? Bayesianism — a dangerous religion that harms science Sequenced in the U.S.A.: A Desperate Town Hands Over Its DNA Conducting a Microbiome […]

Pingback by Links 7/27/14 | Mike the Mad Biologist— 27 Jul, 2014 #

I commented on Noah’s blog that his example has a technical flaw, which just goes to support Gelman’s quote of Efron. As a mathematician I see two versions of Bayesianism: the first regards the axioms of Bayesian probability as mathematical axioms, the other as dogma. Sometimes, the difference makes a difference.

Comment by Dave Marsay— 27 Jul, 2014 #

I finally followed that Gelman link. Since you described Gelman as one of your favorite “bloggers,” I initially assumed that you were quoting another blog entry, analogous to Smith’s. But the link took me to an academic article in the journal BAYESIAN ANALYSIS. Further, I learned there that Gelman wasn’t offering these critical comments on Bayesianism in his own voice, but hypothetically.

The article in question describes itself as “a Bayesian’s attempt to see the other side.” If Gelman’s insights are sufficient for you to cite him as an authority, then you presumably should also consider why they aren’t sufficient for him to switch sides. He is still a “Bayesian,” so he presumably believes the “other side” does not prevail. Why not?

Comment by Chris Faille (@ccfaille)— 23 Jul, 2014 #

Andrew is one of the few statisticians I have on my blogroll.

Although he is sort of a Bayesian, he is also, as the article you refer to clearly shows, capable of seeing the limits of the Bayesian approach, and he is definitely not an advocate of Bayesianism as a general induction strategy or scientific paradigm. And his blog is always a good read, even though I don’t always agree with him!

Comment by Lars Syll— 23 Jul, 2014 #

I’m familiar from my own work with the use of the two leading theories of probability in the world of finance, and comment from that PoV here. http://allaboutalpha.com/blog/2014/07/23/a-challenge-to-bayesian-probability/ I’ll certainly be coming back to these issues. BTW, the Australian finance scholar I reference there is D.J. Johnstone of the University of Sydney Business School. Here is a link to one of his discussions: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2231801&download=yes

Comment by Chris Faille (@ccfaille)— 24 Jul, 2014 #

Somehow I think that Gelman, who has cowritten a popular textbook about Bayesian statistics and uses Bayesian methods extensively in his research, would find the notion that he’s complicit in propagating, as you put it, “dangerous religion that harms science” rather questionable.

Comment by ivansml— 22 Jul, 2014 #

I’m not sure that Noah has really removed the survivorship bias in his argument. He admits that it is a problem, then waves his hands around it a bit.

Comment by Christopher Faille— 22 Jul, 2014 #

Read page 152 of Keynes’s THE GENERAL THEORY to see that Keynes argued that Bayesian “equal probabilities” based on a state of ignorance “leads to absurdities.”

Comment by paul davidson— 22 Jul, 2014 #

One area where Bayesian updating seems applicable is prediction. If an analyst makes a prediction that does not occur, and continues making a similar prediction even though events never actually occur that correspond to it, then the analyst should (rationally?) update their predictions with the data of actual events.

Bruce Bueno de Mesquita’s ‘The Predictioneer’s Game’ touches on this issue of why analysts fail to update their predictions after a pattern or rate of failed ones. Others have addressed the same problem in Jeffrey Friedman’s ‘Critical Review’ (several past issues). The problem is that prediction is notoriously poor and that some predictors fail to address the data or to change their failing patterns by updating. Even the best predictors have a low rate of success!

Comment by Fredrick Welfare— 22 Jul, 2014 #