Econometrics — science built on beliefs and untestable assumptions

21 November, 2016 at 12:39 | Posted in Statistics & Econometrics
What is distinctive about structural models, in contrast to forecasting models, is that they are supposed to be — when successfully supported by observation — informative about the impact of interventions in the economy. As such, they carry causal content about the structure of the economy. Therefore, structural models do not model mere functional relations supported by correlations; their functional relations have causal content that supports counterfactuals about what would happen under certain changes or interventions.
This suggests an important question: just what is the causal content attributed to structural models in econometrics? And, from the more restricted perspective of this paper, what does this imply with respect to the interpretation of the error term? What does the error term represent causally in structural equation models in econometrics? And finally, what constraints are imposed on the error term for successful causal inference? …
I now consider briefly a key constraint that the error term may need to meet if the model is to be used for causal inference. To keep the discussion simple, I look only at the simplest model

y = αx + u

where u denotes the unobservable error term.
The obvious experiment that comes to mind is to vary x, to see by how much y changes as a result. This sounds straightforward: one changes x, y changes, and one calculates α as follows.
α = ∆y/∆x
Everything seems straightforward. However, there is a concern, since u is unobservable: how does one know that u has not also changed in changing x? Suppose that u does change, so that hidden in the change in y there is a change in u; that is, the change in y is incorrectly measured as
∆y_false = ∆y + ∆u
And thus α is falsely measured as
α_false = ∆y_false/∆x = ∆y/∆x + ∆u/∆x = α + ∆u/∆x
Therefore, in order for the experiment to give the correct measurement of α, one needs either to know that u has not also changed, or to know by how much it has changed (if it has). Since u is unobservable, it is not known by how much u has changed. This leaves as the only option the need to know that, in changing x, u has not also been unwittingly changed. Intuitively, this requires knowing that whatever cause(s) are used to change x are not also causes of any of the factors hidden in u …
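The derivation above can be sketched in a few lines of code. This is a purely illustrative calculation, with all numerical values assumed: the true model is taken to be y = αx + u with α = 2.0, and the intervention that raises x by ∆x is supposed to shift the hidden u by ∆u as well.

```python
# Assumed values for illustration only
alpha = 2.0   # true structural coefficient
dx = 1.0      # intended change in x
du = 0.5      # hidden, unwitting change in u caused by the same intervention

x0, u0 = 1.0, 0.0
y0 = alpha * x0 + u0

x1, u1 = x0 + dx, u0 + du
y1 = alpha * x1 + u1

# The experimenter sees only the total change in y, so the measured slope
# picks up the extra term du/dx, exactly as in the derivation above.
alpha_false = (y1 - y0) / (x1 - x0)
print(alpha_false)  # alpha + du/dx = 2.5, not the true 2.0
```

Since ∆u is by assumption invisible to the experimenter, nothing in the observed data distinguishes the biased 2.5 from a correct measurement.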
More generally, the example above shows a need to constrain the error term in a non-simultaneous structural equation model as follows. It requires that each right-hand-side variable have a cause that causes y but not via any factor hidden in the error term. This imposes a limit on the common causes that the factors in the error term can have with those factors explicitly modelled …
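This constraint is, in effect, the condition behind instrumental-variable estimation, and it can be made concrete in a small simulation. All names and parameter values here are assumed for illustration: z is a cause of x with no path into the error term, while c is a common cause of x and of the omitted factors collected in u.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
alpha = 2.0                       # true coefficient (assumed)

z = rng.normal(size=n)            # a cause of x that does NOT enter u
c = rng.normal(size=n)            # common cause of x and the omitted factors
x = z + c + rng.normal(size=n)
u = c + rng.normal(size=n)        # the error term shares the cause c with x
y = alpha * x + u

# Naive slope: biased, because x and u share the common cause c
alpha_naive = np.cov(x, y)[0, 1] / np.var(x)

# Slope computed through z, which satisfies the constraint in the text
alpha_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(alpha_naive)  # roughly 2.33: contaminated by the hidden common cause
print(alpha_iv)     # close to the true 2.0
```

The point of the sketch is only that a cause of x with no route into u lets the true α be recovered; whether such a cause exists for a given model is exactly the untestable part.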
Consider briefly the testability of the two key assumptions brought to light in this section: (i) that the error term denotes the net impact of a set of omitted causal factors, and (ii) that each right-hand-side variable have at least one cause that does not cause any factor hidden in the error term. Given that these assumptions directly involve the factors omitted in the error term, testing them empirically seems impossible without information about what is hidden in the error term. This places the modeller in a difficult situation: how to know that something important has not been hidden? In practice, there will always be an element of faith in the assumptions about the error term, assuming that assumptions like (i) and (ii) have been met, even if it is impossible to test them conclusively.
In econometrics textbooks it is often said that the error term in the regression models used represents the effect of the variables that were omitted from the model. The error term is somehow thought to be a ‘cover-all’ term representing omitted content in the model, necessary to include in order to ‘save’ the assumed deterministic relation between the other random variables included in the model. Error terms are usually assumed to be orthogonal to (uncorrelated with) the explanatory variables. But since error terms are unobservable, this orthogonality assumption is impossible to test empirically. And without justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability:
With enough math, an author can be confident that most readers will never figure out where a FWUTV (facts with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.
Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.