Causality in econometrics

3 Feb, 2022 at 18:29 | Posted in Statistics & Econometrics | 3 Comments


A popular idea in quantitative social sciences is to think of a cause (C) as something that increases the probability of its effect or outcome (O). That is:

P(O|C) > P(O|¬C)
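For concreteness, here is a toy check of this condition in Python. All counts are invented for illustration; none of the numbers come from any real data:

```python
# Toy illustration of the probability-raising condition, with made-up numbers.
# Out of 1000 cases, split by presence of the putative cause C:
n_c, o_given_c = 400, 120           # C present: 120/400 show the outcome O
n_not_c, o_given_not_c = 600, 60    # C absent:   60/600 show the outcome O

p_o_given_c = o_given_c / n_c               # P(O | C)  = 0.30
p_o_given_not_c = o_given_not_c / n_not_c   # P(O | ¬C) = 0.10

print(p_o_given_c > p_o_given_not_c)        # True: C "raises the probability" of O
```

On this definition, C counts as a cause of O simply because the outcome is more frequent when C is present, which is exactly the feature the rest of the post problematizes.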

However, as is also well known, a correlation between two variables, say A and B, does not necessarily imply that one is a cause of the other, or the other way around, since they may both be effects of a common cause, C.

In statistics and econometrics we usually solve this “confounder” problem by “controlling for” C, i.e. by holding C fixed. This means that we actually look at different “populations” – those in which C occurs in every case, and those in which C doesn’t occur at all. Within each such population, knowing the value of A does not influence the probability of C [P(C|A) = P(C)]. So if there still exists a correlation between A and B in either of these populations, there has to be some other cause operating. But if all other possible causes have been “controlled for” too, and there is still a correlation between A and B, we may safely conclude that A is a cause of B, since by “controlling for” all other possible causes, the correlation between the putative cause A and all the other possible causes (D, E, F …) is broken.
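The confounding story above can be sketched in a small simulation, with all probabilities invented for illustration: A and B are both driven by C and by nothing else, so they correlate marginally, but the correlation vanishes once C is held fixed:

```python
# Hypothetical sketch: a common cause C drives both A and B, so A and B
# correlate even though neither causes the other. "Controlling for" C
# (holding it fixed) makes the association disappear.
import random

random.seed(1)

n = 100_000
rows = []
for _ in range(n):
    c = random.random() < 0.5                    # common cause C
    a = random.random() < (0.8 if c else 0.2)    # A depends only on C
    b = random.random() < (0.7 if c else 0.3)    # B depends only on C
    rows.append((c, a, b))

def p(event, given=None):
    """Empirical conditional probability over the simulated rows."""
    sample = [r for r in rows if given is None or given(r)]
    return sum(1 for r in sample if event(r)) / len(sample)

# Marginally, A appears to raise the probability of B ...
print(p(lambda r: r[2], lambda r: r[1]))         # P(B | A)  ≈ 0.62
print(p(lambda r: r[2], lambda r: not r[1]))     # P(B | ¬A) ≈ 0.38

# ... but holding C fixed, the association is broken:
print(p(lambda r: r[2], lambda r: r[0] and r[1]))        # P(B | C, A)  ≈ 0.70
print(p(lambda r: r[2], lambda r: r[0] and not r[1]))    # P(B | C, ¬A) ≈ 0.70
```

Within the C stratum, P(B | A) and P(B | ¬A) coincide, which is what “controlling for” the confounder is supposed to achieve. The catch, as the next paragraph notes, is that this only works for the confounders you have actually identified.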

This is, of course, a very demanding prerequisite, since we may never actually be sure of having identified all putative causes. Even in scientific experiments the number of uncontrolled causes may be innumerable. Since nothing less will do, we all understand how hard it actually is to get from correlation to causality. This also means that relying on statistics or econometrics alone is not enough to deduce causes from correlations.

Some people think that randomization may solve the empirical problem. By randomizing we get different “populations” that are homogeneous with regard to all variables except the one we think is a genuine cause. That way, we are supposedly able to do without actually knowing what all these other factors are.

If you succeed in performing an ideal randomization with different treatment and control groups, that is attainable. But it presupposes that you really have been able to establish – and not just assume – that all causes other than the putative one (A) have the same probability distribution in the treatment and control groups, and that assignment to treatment or control is independent of all other possible causal variables.

Unfortunately, real experiments and real randomizations seldom or never achieve this. So, yes, we may do without knowing all causes – but it takes ideal experiments and ideal randomizations, not real ones. That means that in practice we do have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge we can’t get new knowledge – and no causes in, no causes out.
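A toy simulation, with all numbers invented, can illustrate the contrast drawn above: with an unobserved cause U of the outcome, self-selection into treatment biases the naive difference in means, while ideal coin-flip assignment balances U across groups and recovers the true effect on average:

```python
# Hypothetical sketch: why (ideal) randomization helps. An unobserved factor U
# raises the outcome. If units self-select into treatment based on U, the naive
# difference in means is biased; with a fair coin flip, U is balanced in
# expectation across the groups and the bias disappears.
import random

random.seed(2)
TRUE_EFFECT = 1.0

def outcome(treated, u):
    # Outcome depends on treatment, the unobserved U, and noise.
    return TRUE_EFFECT * treated + 2.0 * u + random.gauss(0, 1)

def diff_in_means(assign):
    """Naive treatment-minus-control difference under an assignment rule."""
    treated, control = [], []
    for _ in range(200_000):
        u = random.random()          # unobserved cause, uniform on [0, 1]
        t = assign(u)
        (treated if t else control).append(outcome(t, u))
    return sum(treated) / len(treated) - sum(control) / len(control)

print(diff_in_means(lambda u: random.random() < u))     # self-selection: ≈ 1.67, biased
print(diff_in_means(lambda u: random.random() < 0.5))   # randomized:     ≈ 1.00
```

Note that the second estimate is only unbiased because the coin flip here is ideal by construction; in a real experiment, as argued above, nothing guarantees that assignment is actually independent of every U.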

Just so that you do not think this assertion is some idiosyncrasy of yours truly, let me back up my claim with quotes from two eminent researchers.

As I have written about earlier (e.g. here and here), John Maynard Keynes was very critical of the way statistical tools were used in the social sciences. In his criticism of the application of inferential statistics and regression analysis in the early development of econometrics, Keynes, in a critical review of the early work of Tinbergen, writes:

Am I right in thinking that the method of multiple correlation analysis essentially depends on the economist having furnished, not merely a list of the significant causes, which is correct so far as it goes, but a complete list? For example, suppose three factors are taken into account, it is not enough that these should be in fact verae causae; there must be no other significant factor. If there is a further factor, not taken account of, then the method is not able to discover the relative quantitative importance of the first three. If so, this means that the method is only applicable where the economist is able to provide beforehand a correct and indubitably complete analysis of the significant factors. The method is one neither of discovery nor of criticism. It is a means of giving quantitative precision to what, in qualitative terms, we know already as the result of a complete theoretical analysis.

This, of course, is absolutely right. Once you include all actual causes in the original (over)simple model, it may well be that the causes are no longer independent or linear, and that a fortiori the coefficients in the econometric equations are no longer identifiable. And so, since all causal factors are not included in the original econometric model, it is not an adequate representation of the real causal structure of the economy that it purports to represent.

In his Statistical Models and Causal Inference (2010) eminent mathematical statistician David Freedman writes:

Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.

Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, exchangeability, monotonicity, ignorability, Reichenbach probability principles, separability, additivity, linearity, etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures may be valid in “closed” models, but what we are usually interested in is causal evidence in the real target system we happen to live in.

Advocates of econometrics want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics.


  1. Prof Syll confuses deduction with induction when he writes:
    “Econometrics is basically a deductive method. Given the assumptions … it delivers deductive inferences”
    This should be rewritten as:
    “Econometrics is basically an INDUCTIVE method. Given the assumptions … it delivers INDUCTIVE inferences.”
    From the Oxford Dictionary:
    Inference = “a conclusion reached on the basis of evidence and reasoning”
    Induction = “the inference of a general law from particular instances”
    Induction = Reasoning from the particular to the general
    Induction = Evidence (data) + assumptions + logic –> general law
    This is what econometrics tries to do.
    In contrast, from the Oxford Dictionary:
    Deduction = “the inference of particular instances by reference to a general law or principle”
    Deduction = Reasoning from the general to the particular
    Deduction = General law + assumptions + logic –> particular instances
    This is NOT what econometrics tries to do.

    • In the context of statistical modelling it is a deductive approach, since it presumes the correctness of given (model/data) assumptions, which are hardly testable (and mostly ignored). The validity of your results (estimates of model parameters) is then taken to be “correct” given compatibility with the assumptions.

      • @ Roman
        It would be very surprising if there were a special and radically different meaning of deduction in the context of statistical modelling. Where does your claim come from?
        There is no mention of this in Wikipedia or any other reference which I know.

