Objections to Bayesian statistics

11 Sep, 2013 at 09:46 | Posted in Statistics & Econometrics | 6 Comments

The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings … Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.

Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statisticians to be ignorant of experimental design and analysis of variance, instead becoming experts on the convergence of the Gibbs sampler. In the short term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill the gap.

Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence …

As Brad Efron wrote in 1986, Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16 …

The mathematics of Bayesian decision theory lead inexorably to the idea that random sampling and random treatment allocation are inefficient, that the best designs are deterministic. I have no quarrel with the mathematics here; the mistake lies deeper, in the philosophical foundations, the idea that the goal of statistics is to make an optimal decision. A Bayes estimator is a statistical estimator that minimizes the average risk, but when we do statistics, we’re not trying to “minimize the average risk,” we’re trying to do estimation and hypothesis testing. If the Bayesian philosophy of axiomatic reasoning implies that we shouldn’t be doing random sampling, then that’s a strike against the theory right there. Bayesians also believe in the irrelevance of stopping times; that is, if you stop an experiment based on the data, it doesn’t change your inference. Unfortunately for the Bayesian theory, the p-value does change when you alter the stopping rule, and no amount of philosophical reasoning will get you around that point.

Andrew Gelman
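The complaint about noninformative priors in the quote above can be made concrete with a toy example (my own sketch, not taken from Gelman’s article): for a binomial proportion, a conjugate Beta(a, b) prior gives posterior mean (a + k)/(a + b + n) after k successes in n trials, and two standard “noninformative” choices disagree on small samples.

```python
# Sketch: two common "noninformative" priors for a binomial proportion
# give different answers on small samples. Beta(a, b) is conjugate:
# after k successes in n trials, the posterior is Beta(a + k, b + n - k),
# with posterior mean (a + k) / (a + b + n).

def posterior_mean(a, b, k, n):
    """Posterior mean of the success probability under a Beta(a, b) prior."""
    return (a + k) / (a + b + n)

k, n = 2, 10  # 2 successes in 10 trials

uniform = posterior_mean(1.0, 1.0, k, n)   # flat prior on p
jeffreys = posterior_mean(0.5, 0.5, k, n)  # Jeffreys prior

print(f"uniform prior:  {uniform:.4f}")    # 3/12  = 0.2500
print(f"Jeffreys prior: {jeffreys:.4f}")   # 2.5/11 ≈ 0.2273
```

Neither answer is wrong, but nothing in the data decides between them, which is exactly the hypothetical critic’s point: “noninformative” is not a single well-defined choice.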
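The stopping-rule point in the quote can also be checked by simulation (a sketch of my own, not Gelman’s code): under a true null, repeatedly peeking at a significance test and stopping at the first p < 0.05 inflates the frequentist rejection rate well above the nominal level, even though the likelihood a Bayesian conditions on is the same whichever stopping rule produced the data.

```python
# Sketch of the stopping-rule point: simulate a fair coin (the null is
# true) and test after every 20 flips, stopping as soon as p < 0.05 or
# after 200 flips. The data's likelihood is unchanged by the peeking;
# the frequentist type I error rate is not.
import math
import random

def z_pvalue(k, n):
    """Two-sided normal-approximation p-value for k heads in n fair flips."""
    z = (k - n / 2) / math.sqrt(n / 4)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def rejects(rng, peek_every=20, max_n=200, alpha=0.05):
    """Flip a fair coin, peeking periodically; True if we ever 'reject'."""
    k = 0
    for n in range(1, max_n + 1):
        k += rng.random() < 0.5
        if n % peek_every == 0 and z_pvalue(k, n) < alpha:
            return True  # stop early and declare the fair coin unfair
    return False

rng = random.Random(0)
rate = sum(rejects(rng) for _ in range(4000)) / 4000
print(f"rejection rate under the null with optional stopping: {rate:.3f}")
# well above the nominal 0.05, even though the coin is fair
```

This is the sense in which “the p-value does change when you alter the stopping rule”: the same observed flips yield a valid or invalid 5%-level test depending on how the experimenter decided when to stop.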

6 Comments

  1. You do know that you are quoting from an article where Andrew Gelman puts forward objections to Bayesian statistics from the point of view of a hypothetical non-Bayesian, right? For example, when he writes

    “Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence.”

    that’s NOT Andrew Gelman’s opinion, but rather that of a hypothetical non-Bayesian that he created to voice the criticisms before he could respond to them.

    You MUST know this, because at the start of the article, which can be found at

    badbayesmain.pdf

    Andrew Gelman says:

    “The present article is unusual in representing a Bayesian’s presentation of what he views as the strongest non-Bayesian arguments. Although this originated as an April Fool’s blog entry (Gelman, 2008), I realized that these are strong arguments to be taken seriously—and ultimately accepted in some settings and refuted in others.”

    Why did you not quote this in your blog too? Were you trying to imply that the Objections to Bayesian Statistics are being raised by Andrew Gelman himself? Because that’s exactly what it looks like. You title your post Objections to Bayesian Statistics, remove all explanations from the original article clarifying that these are objections from a hypothetical non-Bayesian, and finish it with Andrew Gelman’s name, clearly implying that these are his opinions.

    I find this to be irredeemably misleading, so I’m really interested to know your reasons for doing it.

    Most importantly, however, are you going to quote as prominently from the rejoinder article that Andrew Gelman wrote:

    badbayesresponsemain.pdf

    whose abstract states: “In the main article I presented a series of objections to Bayesian inference, written in the voice of a hypothetical anti-Bayesian statistician. Here I respond to these objections along with some other comments made by four discussants.”

    • One reason — among others — I quote from eminent statistician Andrew Gelman’s article in Bayesian Analysis (2008) is that the author — besides the other qualifications for writing this unusual piece that you so kindly referenced — “realized that these are strong arguments to be taken seriously—and ultimately accepted in some settings and refuted in others.” I agree with Andrew on this — they are indeed strong arguments! Now that you have also supplied the readers with a link to Andrew’s rejoinder article, they can make up their own minds on the pros and cons of Bayesian inference in science.

      • But why did you not say anywhere (not in a link, not in a clarification, not by inserting the relevant passages from his article) that these were not his views, but rather those of a hypothetical non-Bayesian? For example, the article says:

        “Here follows the list of objections from a hypothetical or paradigmatic non-Bayesian:
        Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications.”

        so it’s clear that “I” means “hypothetical non-Bayesian”. Instead, you removed the first sentence, so that anyone who didn’t read the article (for which you did not provide a link) would automatically (and rightly so) interpret “I” as “Andrew Gelman”.

        It’s fine to quote from eminent statisticians, but why do so in a way that implies that they are criticizing Bayesian inference when, in fact, they are defending it?

        • 1) A quote is — yes — a quote. Nothing more, nothing less.
          2) When I quote people on the blog there usually is a link (in this case “Andrew Gelman” at the bottom of the quote is blued, and clicking on it you are directed to the original text).
          3) Andrew Gelman — as probably everyone reading this blog knows — is a Bayesian of sorts (and that — it goes without saying — is also one reason why it’s interesting to quote him when he is putting forward “inside” critique). But I think he is a much more qualified and open-minded Bayesian than many others of the same ilk … In his article “Induction and Deduction in Bayesian Data Analysis” (http://www.stat.columbia.edu/~gelman/research/published/philosophy_online4.pdf), e.g., he writes:
          “But I do not make these decisions on altering, rejecting, and expanding models based on the posterior probability that a model is true. Rather, knowing ahead of time that my assumptions are false, I abandon a model when a new model allows me to incorporate new data or to fit existing data better. At a technical level, I do not trust Bayesian induction over the space of models
          because the posterior probability of a continuous-parameter model depends crucially on untestable aspects of its prior distribution. (For any parameters that are identifiable by the data, the behavior of the prior in the far tails of the distribution is irrelevant to inference within the model but can have arbitrarily large effects on the model’s marginal posterior probability.)”
          Again I can’t but concur.
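The quoted point about prior tails can be illustrated with a closed-form toy model (my notation, not Gelman’s): with one observation y ~ N(θ, 1) and prior θ ~ N(0, τ²), the within-model posterior mean barely moves once τ is moderately large, but the marginal likelihood p(y) keeps shrinking toward zero, so a model’s marginal posterior probability can be driven arbitrarily low by widening an (untestable) prior.

```python
# Sketch of the prior-tails point: y ~ N(theta, 1) with prior
# theta ~ N(0, tau^2). The posterior mean of theta stabilizes as tau
# grows, but the marginal likelihood p(y), under which y ~ N(0, 1+tau^2),
# shrinks without bound -- so Bayes factors depend on the prior scale
# even when inference within the model does not.
import math

def post_mean(y, tau):
    """Posterior mean of theta given one observation y, prior N(0, tau^2)."""
    return tau**2 * y / (1 + tau**2)

def marginal_likelihood(y, tau):
    """Density of y under the model: y ~ N(0, 1 + tau^2)."""
    var = 1 + tau**2
    return math.exp(-y**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

y = 1.0
for tau in (2.0, 10.0, 100.0):
    print(f"tau={tau:6.1f}  posterior mean={post_mean(y, tau):.4f}  "
          f"p(y)={marginal_likelihood(y, tau):.5f}")
```

The posterior mean converges to y while p(y) falls by roughly an order of magnitude at each step, which is the behavior the quote describes.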

      • “Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems.”

        Bing bing bing, we have a winner….

        Grasselli, pay attention. You may learn a thing or two. What applies in HIV tests may not apply in economics.

    • My reading of Gelman is that he doesn’t justify what I call strong Bayesianism (i.e., that the Bayesian approach is always reasonable). Rather, quite reasonably, he justifies it by its results in the type of application with which he is familiar; he justifies its assumptions by noting that the alternative approaches make similar assumptions; and he repeats the kind of general justifications that Keynes, for example, supported in some cases (‘small worlds’) and criticised in others.

      Thus it seems quite reasonable to agree with both Gelman and Binmore, as I do. Or to put it another way: whilst Gelman, as you quote, refutes the objections in some settings (arguably, many), he also accepts them in others. Both sides of this coin matter.

