Searching for causality — statistics vs. history

17 Mar, 2014 at 11:37 | Posted in Theory of Science & Methodology | 4 Comments

History and statistics serve a common purpose: to understand the causal force of some phenomenon. It seems to me, moreover, that statistics is a simplifying tool for understanding causality, whereas history is a more elaborate one. And by “more elaborate” I mean that history usually attempts to take into account both more variables and fundamentally different variables in our quest to understand causality.

To make this point clear, think about what a statistical model is: a representation of some dependent variable as a function of one or more independent variables which we think, perhaps because of some theory, have a causal influence on the dependent variable in question. A historical analysis is a similar type of model. For example, a historian typically starts by acknowledging some development, say a war, and then attempts to describe, in words, the events that led to that particular development. Now, it is true that historians typically delve deeply into the details of the events predating the development – e.g., by examining written correspondence between officials, by reviewing historical news clippings to gauge the public mood, etc. – but this simply means that the historian is examining more variables than the simplifying statistician. If the statistician added more variables to his regression, he would be on his way to producing a historical analysis.
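To make that picture concrete, here is a minimal sketch of such a model – an ordinary least squares regression on simulated data, with purely hypothetical variable names rather than anything drawn from an actual study:

```python
# A minimal sketch of the statistical model described above: a dependent
# variable expressed as a function of a few precisely quantified
# independent variables, fitted by ordinary least squares.
# All names and data below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical quantified "causes" (independent variables).
trade_volume = rng.normal(size=n)
military_spending = rng.normal(size=n)

# Hypothetical "effect" (dependent variable), generated with noise.
conflict_risk = 0.8 * trade_volume - 0.5 * military_spending \
    + rng.normal(scale=0.3, size=n)

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), trade_volume, military_spending])

# Estimate the coefficients that the modeller then reads,
# rightly or wrongly, as causal influences.
coef, *_ = np.linalg.lstsq(X, conflict_risk, rcond=None)
print(dict(zip(["intercept", "trade_volume", "military_spending"],
               coef.round(2))))
```

Everything the model can “see” must enter as a number in that design matrix; anything that cannot be quantified in that way simply has no place in it.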

There is, however, one fundamental way in which the historian’s model is different from the statistician’s: namely, the statistician is limited by the fact that he can only consider precisely quantified variables in his model. The historian, in contrast, can add whatever variables he wants to his model. Indeed, the historian’s model is non-numeric …

It is my view that whether history or statistics will be the more successful tool depends on the subject area to which it is applied. In subjects where precisely quantified variables are all we need to confidently determine the causal force of some phenomenon, statistics will be preferable; in subjects where imprecisely quantified variables play an important causal role, we need to rely on history.

It seems to me, moreover, that the line dividing the subjects to which we apply our historical or statistical tools cuts along the same seam as does the line dividing the social sciences from the natural sciences. In the latter, we can ignore imprecisely quantified variables, such as human beliefs, as these variables don’t play an important causal role in the movement of natural phenomena. In the former, such imprecisely quantified variables play a central role in the construction and the stability of the laws that govern society at any given moment.

Econolosophy

4 Comments

  1. Reblogged this on robertoviera1 and commented:
    We need to demystify the statistics that are used to deceive the unwary.

  2. Economics has the same problem as statistics: it reduces facts to only quantifiable things. A historical approach like Fernand Braudel’s offers a deep analysis of economic realities that is omitted by economists.

    • Except that Braudel’s or Wallerstein’s devotion to some putative secular trend so often serves to obscure the real causes of current economic conditions; their analysis amounts to little more than thinly-veiled support for confused neo-classical claims of policy irrelevance.

  3. For those who missed it, some really good reading:

    The Cult of Statistical Significance:
    How the Standard Error Costs Us Jobs, Justice, and Lives

    Stephen T. Ziliak and Deirdre N. McCloskey

    How the most important statistical method used in many of the sciences doesn’t pass the test for basic common sense

    “The Cult of Statistical Significance shows, field by field, how “statistical significance,” a technique that dominates many sciences, has been a huge mistake. The authors find that researchers in a broad spectrum of fields, from agronomy to zoology, employ “testing” that doesn’t test and “estimating” that doesn’t estimate. The facts will startle the outside reader: how could a group of brilliant scientists wander so far from scientific magnitudes? This study will encourage scientists who want to know how to get the statistical sciences back on track and fulfill their quantitative promise. The book shows for the first time how wide the disaster is, and how bad for science, and it traces the problem to its historical, sociological, and philosophical roots.”
    http://www.press.umich.edu/script/press/186351
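    To see the kind of problem Ziliak and McCloskey are pointing at, here is a toy sketch (my own illustration, not an example from the book): with a large enough sample, an effect of no practical consequence can still come out as decisively “statistically significant”.

    ```python
    # Toy illustration (not from the book): with a million observations per
    # group, a true difference of one hundredth of a standard deviation -
    # practically nothing - still yields a vanishingly small p-value.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 1_000_000

    control = rng.normal(loc=0.0, scale=1.0, size=n)
    treated = rng.normal(loc=0.01, scale=1.0, size=n)

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"p-value: {p_value:.2e}")                          # "significant"
    print(f"effect:  {treated.mean() - control.mean():.4f}")  # ~0.01 sd
    ```

    The test answers “is the difference exactly zero?”, not “does the difference matter?” – which is the distinction the book argues whole fields have lost sight of.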

    and

    Capturing causality in economics and the limits of statistical inference

    Lars Pålsson Syll, Malmö University, Sweden

    “Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.

    Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.” – David Freedman, Statistical Models and Causal Inference

    Click to access Syll64.pdf


