Bayesianism — a dangerous scientific cul-de-sac

24 Apr, 2015 at 18:24 | Posted in Economics | 7 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in what makes a good scientific explanation, Richard Miller’s Fact and Method is a must-read. His incisive critique of Bayesianism is still unsurpassed.


  1. Lars, As a mathematician I tend to agree that what social scientists think of as Bayesianism is very deeply misguided and implicated in many of our ills. But I am also something of a fan of Keynes, Turing and Good, and suppose that a logically sound version of Bayesianism could actually be a part of the solution. Does RWM make this distinction? Do any of his criticisms apply to ‘logical Bayesianism’, or just to the social sciences version? Thanks, and keep up the good work.

    • Thanks for your comment. Miller’s critique is applicable to all kinds of Bayesianism, including “logical Bayesianism”.
      As I wrote, the book is well worth reading.

      • Does he have an accessible quote where the approach of Keynes in his Treatise (as commended by Whitehead and Russell and developed by Turing and Good) leads to the conclusion that there is strong evidence for one theory over another where Miller has a better argument for the converse? If there is a flaw in Keynes’ logic, it would be good to know of it!

  2. It seems that Richard Miller is an ultra-Bayesian in that he has an overwhelming prior belief (i.e. prejudice) that his own “common sense” is a better guide to causal reality than any possible statistical analysis.

    “common sense DICTATES that xxx MUST play an important causal role. People do have beliefs that xxx … they are SURELY influenced by these beliefs … Thus, an OVEREMPHASIS on Bayesian success in statistical inference discourages the elaboration of a type of account of xxx that ALMOST CERTAINLY provides a LARGE part of their explanation.”

    • Without having read the book, I cannot really say, but when Miller talks about superficiality and “statistical inference as the epitome of empirical argument” he is referring to what has seemed to me to be a certain naivete in modern Bayesianism, something that I think that Keynes, Good, and Pearl avoid. Let me try to give a clear example based upon what I remember from Keynes.

      Based upon certain evidence, back in the 19th century the sun was taken to have risen a certain large number of times (I forget how many), and using Laplace’s rule of succession, the odds that the sun would rise tomorrow were calculated to be quite high. But then some German professor (IIRC) calculated that the odds that the sun would rise every day for the next 5,000 years were only 2:1, an unacceptably low number.
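      The arithmetic behind the anecdote is easy to reproduce. Under a uniform prior, Laplace’s rule of succession gives (s+1)/(n+2) for the next trial, and the per-day probabilities telescope to (n+1)/(n+m+1) for an unbroken run of m further days. A minimal sketch (the sunrise count here is a hypothetical stand-in, not Laplace’s actual figure; note that odds of about 2:1 come out when the future run is roughly half the observed span):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace's rule of succession under a uniform prior on the
    # unknown success probability: P(next success) = (s + 1) / (n + 2).
    return Fraction(successes + 1, trials + 2)

def prob_run_continues(n, m):
    # Probability that a run of n unbroken successes continues for m
    # more trials; the per-step factors telescope:
    # (n+1)/(n+2) * (n+2)/(n+3) * ... = (n + 1) / (n + m + 1).
    return Fraction(n + 1, n + m + 1)

n = 5000 * 365  # hypothetical: ~5,000 years of recorded sunrises
print(float(rule_of_succession(n, n)))       # ~0.9999995: near-certain tomorrow
print(float(prob_run_continues(n, n // 2)))  # ~0.667: odds of only about 2:1
```

      The same rule that makes tomorrow’s sunrise near-certain assigns a surprisingly modest probability to a long future run, which is the point of the anecdote.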

      Why is such a number unacceptably low? Because our belief that the sun will rise tomorrow is not based merely upon the statistical evidence that it has risen for so many days, but upon our knowledge of astronomy, the rotation of the earth, and the tidal effect of the moon. We do not regard statistical inference as the only, or even the primary, reason for believing something.

      Thus, deriving the different unemployment rates of blacks and whites in the US without addressing racism is shallow reasoning, “phenomenological”, as Miller states. Not that frequentist statistics does better, but it does not lay a trap for the unwary.

  3. As a kid I learned Fisherian statistics, but, like a lot of people, misunderstood its philosophy until I took statistics in college. I studied Bayesian statistics on my own, starting with Keynes and Good. Keynes does a good job of debunking earlier Bayesianism, which lost out to Fisher’s approach.

    The Bayesian comeback took me by surprise. I was aware of the shortcomings of what had become the standard approach, but that sort of thing is not enough to explain why a defeated rival should have a rebirth.

    FWIW, I think that a major reason for its renascence is the success of machine learning using a Bayesian approach. After all, science is essentially learning. If Bayesian statistics is a useful learning tool, why not use it?

  4. I agree with all the points made above by Min.
    In particular, I agree with Min that “Bayesian statistics is a useful learning tool”.

    Perhaps the studies which Miller criticizes have substantial unexplained residuals which could in part be due to racism.
    Perhaps an indicator of racism could be introduced into the analyses.
    Perhaps some of the “phenomenological” explanatory variables are correlated with racist attitudes, so the results may partially confirm Miller’s prejudices rather than being in conflict with his “common sense”.
    Perhaps the studies could otherwise be improved.
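    The second suggestion above is standard regression practice: add an indicator alongside the “phenomenological” controls and see whether it picks up part of the residual. A minimal sketch on fabricated data (all variable names, coefficients, and the indicator itself are hypothetical, purely to illustrate the mechanics):

```python
import numpy as np

# Fabricated data: one "phenomenological" control (education) plus a
# hypothetical binary exposure indicator, with a known true effect of 1.5.
rng = np.random.default_rng(0)
n = 500
education = rng.normal(12, 2, n)
indicator = rng.integers(0, 2, n)  # hypothetical exposure indicator
outcome = 3.0 - 0.2 * education + 1.5 * indicator + rng.normal(0, 1, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), education, indicator])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(beta)  # beta[2] estimates the indicator's effect
```

    Whether a credible indicator of racist attitudes can actually be measured is of course the hard empirical question; the point is only that the line of inquiry is analyzable rather than something to be settled by “common sense”.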

    Miller appears to dismiss such lines of inquiry. The argument “common sense dictates xxx” is not helpful.
