## The real harm done by Bayesianism

30 November, 2017 at 09:02 | Posted in Theory of Science & Methodology

> The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance, the dominant attitude toward the sources of the black-white differential in United States unemployment rates (routinely the rates are in a two-to-one ratio) is "phenomenological." The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in what makes a good scientific explanation, Richard Miller's *Fact and Method* is a must-read. His incisive critique of Bayesianism is still unsurpassed.

One of yours truly's favourite 'problem situating lecture arguments' against Bayesianism goes something like this: Assume you are a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that "people are nice vegetarians who do not eat turkeys, and every day I survive to see the sunrise confirms my belief." For every day you survive, you update your belief according to Bayes' rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and *a fortiori* P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
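The turkey's updating can be simulated directly. This is a minimal sketch of the argument above; the prior of 0.5 and the 0.99 daily survival chance under not-H are purely illustrative assumptions, not from the post.

```python
# Bayesian turkey: H = "people never eat turkeys".
# Daily evidence e = "not eaten today", with P(e|H) = 1.

p_h = 0.5                # prior belief in H (assumed)
p_e_given_h = 1.0        # H predicts survival with certainty
p_e_given_not_h = 0.99   # hypothetical survival chance if H is false

history = [p_h]
for day in range(100):
    # P(e) = P(e|H)P(H) + P(e|not-H)P(not-H)
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    # Bayes' rule: P(H|e) = P(e|H)P(H)/P(e)
    p_h = p_e_given_h * p_h / p_e
    history.append(p_h)
```

Each surviving day pushes P(H) strictly upward, exactly as the post describes, while the update says nothing whatever about the approach of Christmas.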

For more on my own objections to Bayesianism, see my *Bayesianism — a patently absurd approach to science* and *One of the reasons I'm a Keynesian and not a Bayesian*.

## 2 Comments


I agree that Bayesianism as often used for economic predictions can be harmful, mostly because it assumes stability. But I think the turkey example is a poor one.

A hypothesis such as "people do not eat turkeys" has a complement that is compound, and it is perfectly reasonable to be a Bayesian who recognizes the problems that compound hypotheses pose for probabilities and likelihoods, and in particular for the application of Bayes' rule.

If we want to apply Bayes' rule, it makes more sense to take the hypothesis "Up to now, no one wants to 'harvest' turkeys". Bayes' rule then seems harmless. On the other hand, it no longer seems to provide people with the kind of 'logically-based prediction' that they seek.

So, while the misuse of Bayesianism is definitely harmful, one needs other examples to see the harm (if there is any) with a more purely logical Bayesianism. (I blog on this.)

Comment by Dave Marsay — 30 November, 2017
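The commenter's point about compound hypotheses can be made concrete: two hypotheses that assign the same likelihood to all the evidence seen so far keep their relative posterior weights no matter how much of that evidence accumulates. A minimal sketch, in which the hypothesis names and the day-1000 cutoff are illustrative assumptions rather than anything from the comment:

```python
# Two rival hypotheses that both predict survival before day 1000:
# H1 = "turkeys are never eaten"
# H2 = "turkeys are eaten, but only after day 1000"
# Both assign likelihood 1 to each day of survival up to day 365,
# so a whole year of evidence cannot move their relative posteriors.

posteriors = {"never_eaten": 0.5, "eaten_after_day_1000": 0.5}

for day in range(1, 366):
    likelihood = {"never_eaten": 1.0, "eaten_after_day_1000": 1.0}
    norm = sum(likelihood[h] * posteriors[h] for h in posteriors)
    posteriors = {h: likelihood[h] * posteriors[h] / norm
                  for h in posteriors}
```

The posteriors end the year exactly where they started, which is the sense in which Bayes' rule is "harmless" here: it neither confirms nor undermines the prediction the turkey actually cares about.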

I’m reminded of Professor Chris Manning’s remarks in https://see.stanford.edu/materials/ainlpcs224n/transcripts/NaturalLanguageProcessing-Lecture03.html :

> To make just one little side remark at this point, in a lot of areas of probability, everyone, especially in computer science, is very into Bayesian stuff and doing Bayesian probability models. A kind of funny thing about the state of the art of using probabilistic smoothing in NLP is that all the really good ideas like this have actually come from people scratching their heads and looking at the data and what happens in the estimation and why it goes wrong, and, by the seat of their pants, coming up with some formula that seems to capture the right properties.
>
> And then what happens after that is that, three years later, someone writes a conference paper saying how this formula can be interpreted as a Bayesian prior and does a big derivation of that, and that's been done both for Good-Turing smoothing and also just recently for Kneser-Ney smoothing. And so there's a link on the syllabus for a paper by [inaudible] on how to interpret Kneser-Ney smoothing as a Bayesian prior.
>
> But the funny thing is that none of the actual good ideas of how to come up with a better smoothing method actually seem to come from the theory. They actually seem to come from people staring at bits of data.

Comment by Robert Mitchell — 1 December, 2017
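For readers curious what the "seat of the pants" formula under discussion actually looks like, here is a minimal interpolated Kneser-Ney bigram model, sketched from the standard textbook formulation rather than from the lecture; the toy corpus and the discount d = 0.75 are illustrative assumptions.

```python
# Interpolated Kneser-Ney bigram smoothing (minimal sketch):
# P(w|u) = max(c(u,w) - d, 0)/c(u) + d * N1+(u,.)/c(u) * Pcont(w)
# where Pcont(w) = (distinct contexts preceding w) / (distinct bigram types).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()
d = 0.75  # absolute discount, subtracted from every seen bigram count

bigrams = Counter(zip(corpus, corpus[1:]))
context_counts = Counter(corpus[:-1])   # how often u occurs as a context
followers = defaultdict(set)            # distinct words seen after u
histories = defaultdict(set)            # distinct words seen before w
for u, w in bigrams:
    followers[u].add(w)
    histories[w].add(u)
total_bigram_types = len(bigrams)

def p_continuation(w):
    # how "promiscuous" w is: in how many distinct contexts it appears
    return len(histories[w]) / total_bigram_types

def p_kn(w, u):
    discounted = max(bigrams[(u, w)] - d, 0) / context_counts[u]
    backoff_weight = d * len(followers[u]) / context_counts[u]
    return discounted + backoff_weight * p_continuation(w)

vocab = set(corpus)
```

The discounting and the continuation probability are exactly the kind of hand-crafted ingredients Manning describes; the Bayesian-prior interpretation came only afterwards. A quick sanity check is that the distribution normalizes: `sum(p_kn(w, "the") for w in vocab)` comes out to 1.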