Statistical philosophies and idealizations

18 Jan, 2022 at 18:16 | Posted in Theory of Science & Methodology | 2 Comments

As has been long and widely emphasized in various terms … frequentism and Bayesianism are incomplete both as learning theories and as philosophies of statistics, in the pragmatic sense that each alone is insufficient for all sound applications. Notably, causal justifications are the foundation for classical frequentism, which demands that all model constraints be deduced from real mechanical constraints on the physical data-generating process. Nonetheless, it seems modeling analyses in health, medical, and social sciences rarely have such physical justification …

The deficiency of strict coherent (operational subjective) Bayesianism is its assumption that all aspects of uncertainty have been captured by the prior and likelihood, thus excluding the possibility of model misspecification. De Finetti himself was aware of this limitation:

“… everything is based on distinctions which are themselves uncertain and vague, and which we conventionally translate into terms of certainty only because of the logical formulation … In the mathematical formulation of any problem it is necessary to base oneself on some appropriate idealizations and simplification. This is, however, a disadvantage; it is a distorting factor which one should always try to keep in check, and to approach circumspectly. It is unfortunate that the reverse often happens. One loses sight of the original nature of the problem, falls in love with the idealization, and then blames reality for not conforming to it.” [de Finetti 1975, p. 279]

By asking for physically causal justifications of the data distributions employed in statistical analyses (whether those analyses are labeled frequentist or Bayesian), we may minimize the excessive certainty imposed by simply assuming a probability model and proceeding as if that idealization were a known fact.
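Greenland’s point about misspecification can be made concrete with a small sketch (none of it from the post; every number and name below is a hypothetical choice for the illustration). Data drawn from a heavy-tailed Student-t process are analysed with a conjugate Normal model whose likelihood assumes the wrong sampling variance. The posterior standard deviation the assumed model reports is narrower than the one implied by the true variance, and nothing inside the coherent-Bayesian calculus flags the mismatch:

```python
import math
import random

random.seed(0)

def draw_t3():
    """One Student-t(3 df) variate: standard normal over sqrt(chi2_3 / 3)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return z / math.sqrt(chi2 / 3.0)

# True data-generating process: heavy-tailed t_3 (true variance = 3).
n = 200
data = [draw_t3() for _ in range(n)]

# Analyst's assumed model: Normal(mu, 1) likelihood, Normal(0, 10^2) prior
# on mu, giving the standard conjugate Normal-Normal posterior for mu.
prior_var = 10.0 ** 2   # assumed prior variance (hypothetical choice)
lik_var = 1.0 ** 2      # assumed sampling variance -- wrong: true value is 3
post_var = 1.0 / (1.0 / prior_var + n / lik_var)
post_mean = post_var * sum(data) / lik_var
post_sd = math.sqrt(post_var)

# Posterior sd that the same conjugate formula would give under the
# true sampling variance of 3 -- noticeably wider.
honest_sd = math.sqrt(1.0 / (1.0 / prior_var + n / 3.0))

print(f"assumed-model posterior sd: {post_sd:.3f}")
print(f"posterior sd under the true variance: {honest_sd:.3f}")
# The reported uncertainty is an artifact of the assumed likelihood;
# the Bayesian machinery itself raises no warning about the gap.
```

The gap between the two standard deviations is exactly the “excessive certainty imposed by simply assuming a probability model” that Greenland describes: the posterior is coherent given the model, but the model was never physically justified.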

Sander Greenland


  1. “physically causal justifications of the data distributions employed in statistical analyses”
    What are the physical causes of the Fed deciding to pause interest rate cuts (extending Nanikore’s example in a recent comment), right after Lehman, in September 2008, and why does Bernanke mention “feng shui” in that particular FOMC transcript?
    What is the physical cause of the Fed tightening now? Will future Feds see this as a policy error, because we can handle nominal inflation in better ways such as indexation?
    If you properly scale a financial sector in a mainstream DSGE model as in , do you end up with the policy implications I arrive at in other ways? So is this a rare example of valuable research, as Fischer Black might say?
    Interestingly, does Roger Farmer also come to many conclusions I derive otherwise by using mainstream assumptions to formalize a multi-equilibrium model with shifts based on (pretty irrational?) beliefs? (And is that multi-equilibrium model sketched out by Black in “Noise”?)
    In other words, if you include finance in your model, does it swamp all other effects? Should we be treating inflation as a payment-systems problem, and fixing it with more liquidity for price takers?

  2. Greenland’s understanding of econometrics is “obsolete”.
    This is well explained in chapter 5 of Peter Kennedy’s excellent book, A Guide to Econometrics (6th ed., 2008):
    “At one time, econometricians tended to assume that the model provided by economic theory represented accurately the real-world mechanism generating the data, and viewed their role as one of providing “good” estimates for the key parameters of that model. If any uncertainty was expressed about the model specification, there was a tendency to think in terms of using econometrics to “find” the real-world data-generating mechanism. Both these views of econometrics are obsolete.

    In light of this recognition, econometricians have been forced to articulate more clearly what econometric models are, one view being that they “are simply rough guides to understanding” (Quah, 1995, p. 1596). There is some consensus that models are metaphors, or windows, through which researchers view the observable world, and that their adoption depends not upon whether they can be deemed “true” but rather upon whether they can be said to (1) correspond to the facts and (2) be useful.
    Econometric specification analysis therefore is a means of formalizing what is meant by “corresponding to the facts” and “being useful,” thereby defining what is meant by a “correctly specified model.””

