Why econometrics still hasn’t delivered (wonkish)

23 December, 2012 at 12:04 | Posted in Statistics & Econometrics | 3 Comments

In the article The Scientific Model of Causality renowned econometrician and Nobel laureate James Heckman writes (emphasis added):

 A model is a set of possible counterfactual worlds constructed under some rules. The rules may be laws of physics, the consequences of utility maximization, or the rules governing social interactions … A model is in the mind. As a consequence, causality is in the mind.

Even though this is a standard view among econometricians, it’s – at least from a realist point of view – rather untenable. The reason we as scientists are interested in causality is that it’s a part of the way the world works. We represent the workings of causality in the real world by means of models, but that doesn’t mean that causality isn’t a fact pertaining to relations and structures that exist in the real world. If it was only “in the mind,” most of us couldn’t care less.

The reason behind Heckman’s, and most other econometricians’, nominalist-positivist view of science and models is the belief that science can only deal with observable regularity patterns of a more or less lawlike kind. Only data matters, and trying to (ontologically) go beyond observed data in search of the underlying real factors and relations that generate the data is not admissible. Everything has to take place in the econometric mind’s model, since the real factors and relations, according to the econometric (epistemologically based) methodology, are beyond reach: they allegedly are both unobservable and unmeasurable. This also means that instead of treating the model-based findings as interesting clues for digging deeper into real structures and mechanisms, they are treated as the end points of the investigation. Or as Asad Zaman puts it in Methodological Mistakes and Econometric Consequences:

Instead of taking it as a first step, as a clue to explore, conventional econometric methodology terminates at the discovery of a good fit … Conventional econometric methodology is a failure because it is merely an attempt to find patterns in the data, without any tools to assess whether or not the given pattern reflects some real forces which shape the data.

The critique put forward here is in line with what mathematical statistician David Freedman writes in Statistical Models and Causal Inference (2010):

In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …

Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …

Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.

Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures like regression analysis may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

Most advocates of econometrics and regression analysis want to have deductively automated answers to fundamental causal questions. Econometricians think – as David Hendry expressed it in Econometrics – alchemy or science? (1980) – they “have found their Philosophers’ Stone; it is called regression analysis and is used for transforming data into ‘significant results!'” But as David Freedman pointedly notes in Statistical Models: “Taking assumptions for granted is what makes statistical techniques into philosophers’ stones.” To apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics and regression analysis.
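Freedman’s “philosophers’ stones” remark can be made concrete with a classic textbook demonstration (a hedged sketch of my own, not from the texts quoted above): regress one random walk on another, independently generated one. The OLS machinery returns an impressive-looking t-statistic even though the two series are causally unrelated by construction – the “significance” is produced entirely by taking the model’s assumptions (stationary, independent errors) for granted.

```python
# Spurious regression sketch: two independent random walks, ordinary
# least squares fitted by hand with NumPy only.
import numpy as np

rng = np.random.default_rng(12345)
n = 500
x = np.cumsum(rng.standard_normal(n))  # random walk no. 1
y = np.cumsum(rng.standard_normal(n))  # random walk no. 2, independent of x

# OLS for y = a + b*x + e
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)            # estimated error variance
cov = s2 * np.linalg.inv(X.T @ X)       # coefficient covariance matrix
t_slope = beta[1] / np.sqrt(cov[1, 1])  # t-statistic for the slope

print(f"slope = {beta[1]:.3f}, t = {t_slope:.1f}")
# With most seeds |t| lands far above the conventional 1.96 threshold,
# yet x and y are unrelated by construction: the regression output is
# valid only inside the "closed" model, not in the world.
```

The point is not that OLS is computed incorrectly – the arithmetic is impeccable – but that without “thick” background knowledge about how the data were generated, the “significant result” tells us nothing about the real target system.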

Without requirements of depth, explanations most often do not have practical significance. Only if we search for and find fundamental structural causes can we hope to take effective measures to remedy problems like unemployment, poverty, discrimination and underdevelopment. A social science must try to establish what relations exist between different phenomena and the systematic forces that operate within the different realms of reality. If econometrics is to progress, it has to abandon its outdated nominalist-positivist view of science and the belief that science can only deal with observable regularity patterns of a more or less law-like kind. Scientific theories ought to do more than just describe event-regularities and patterns – they also have to analyze and describe the mechanisms, structures, and processes that give birth to these patterns and regularities.



  1. Every time I read something like this, it seems more and more to me that econometrics as it is usually practiced has a neo-liberal bias, for major strains of (neo-)classical economics have long tended to pride themselves on their deliberate disregard of empirical economic evidence and fixation on policies without theoretical foundation. For instance, Ludwig von Mises insisted that “New experience can force us to discard or modify inferences we have drawn from previous experience. But no kind of experience can ever force us to discard or modify a priori theorems. They are not derived from experience; they are logically prior to it and cannot be either proved by corroborative experience or disproved by experience to the contrary. We can comprehend action only by means of a priori theorems. Nothing is more clearly an inversion of the truth than the thesis of empiricism that theoretical propositions are arrived at through induction on the basis of a presuppositionless observation of ‘facts.’ It is only with the aid of a theory that we can determine what the facts are.” Epistemological Problems of Economics (New York: NY UP, 1933, 1976) http://www.mises.org/epofe/c1p2sec2.asp

    Why bother with Great Depression-level unemployment in the “peripheral” European countries if it conflicts with your a priori theorem?

  2. I got the paper and I look forward to reading it. You might want to look at this: “The Inefficient Markets Hypothesis: Why Financial Markets Do Not Work Well in the Real World” by Roger E.A. Farmer, Carine Nourry and Alain Venditti

  3. […] truly can’t but concur (having touched upon this before here), especially on the “decreasing order of importance” of the assumptions. But then, of course, […]
