When should we believe the unconfoundedness assumption?

26 Sep, 2018 at 09:38 | Posted in Statistics & Econometrics | 1 Comment

Economics may be an informative tool for research. But if its practitioners do not investigate and make an effort to justify the credibility of the assumptions on which they erect their building, it will not fulfil its task. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics — like yours truly — will continue to consider its ultimate arguments a mixture of rather unhelpful metaphors and metaphysics.

In mainstream economics, there is an excessive focus on formal modelling and statistics. The models and the statistical (econometric) machinery build on — often hidden and unargued-for — assumptions that are unsupported by data and whose veracity is highly uncertain.

Econometrics fails miserably over and over again. One reason is that the unconfoundedness assumption does not hold. Another important reason is how the error term in the regression models is treated: it is thought of as representing the effect of the variables omitted from the model — a ‘cover-all’ term that has to be included to ‘save’ the assumed deterministic relation between the other variables in the model. Error terms are usually assumed to be orthogonal (uncorrelated) to the explanatory variables. But since they are unobservable, that assumption is impossible to test empirically. And without a justification of the orthogonality assumption, there is, as a rule, nothing to ensure identifiability:
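The point is easy to demonstrate with a small simulation. In the hypothetical data-generating process below (all numbers are illustrative, not drawn from any actual study), an omitted confounder z drives both the regressor x and the outcome y. When z is left out, it is absorbed into the error term, which is then correlated with x — the orthogonality assumption fails silently, and nothing in the observed data flags the bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: z is a confounder that
# drives both the regressor x and the outcome y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)            # x is correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)  # true effect of x on y is 1.0

# Regress y on x alone: z is folded into the error term, which is
# now correlated with x, so the orthogonality assumption fails.
beta_naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Controlling for the confounder recovers the true coefficient.
X = np.column_stack([np.ones(n), x, z])
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"naive estimate:  {beta_naive:.2f}")   # biased upward (about 2)
print(f"with confounder: {beta_full[1]:.2f}")  # close to the true 1.0
```

The naive regression looks perfectly well-behaved — good fit, small standard errors — yet reports roughly double the true effect. Nothing in the residuals reveals the problem, which is precisely why the orthogonality assumption has to be argued for rather than assumed.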

With enough math, an author can be confident that most readers will never figure out where a FWUTV (facts with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.

Paul Romer

1 Comment

  1. The error term in GDP calculations is conveniently dropped; I bet it would be plus-or-minus 50% or 100% if reported in an intellectually honest and statistically rigorous way.

    Why then do economists still fetishize the GDP number and proclaim that public policy should maximize output, as measured by the holy GDP that hides a sizeable amount of its error term in imputed variables such as Capital Depreciation or rental income of persons (because homeowners are assumed to pay themselves rent on their houses)?

    GDP measures insurance company output as the difference between premiums and claims, ignoring the fact that most insurance companies make more from re-insurance and financial investments than from premiums. BEA statisticians also impute claims because they can’t rely on real claim figures in years like 2001 (because of September 11).

    Intellectually honest economists should call out the idea that public policy should seek to maximize GDP because GDP is an imputed number that does not measure progress.
