## Econometrics and the problem of unjustified assumptions

21 Aug, 2019 at 11:03 | Posted in Statistics & Econometrics | 1 Comment

There seems to be a pervasive human aversion to uncertainty, and one way to reduce feelings of uncertainty is to invest faith in deduction as a sufficient guide to truth. Unfortunately, such faith is as logically unjustified as any religious creed, since a deduction produces certainty about the real world only when its assumptions about the real world are certain …

Unfortunately, assumption uncertainty reduces the status of deductions and statistical computations to exercises in hypothetical reasoning – they provide best-case scenarios of what we could infer from specific data (which are assumed to have only specific, known problems). Even more unfortunate, however, is that this exercise is deceptive to the extent it ignores or misrepresents available information, and makes hidden assumptions that are unsupported by data …

Econometrics supplies dramatic cautionary examples in which complex modelling has failed miserably in important applications …

Yes, indeed, econometrics fails miserably over and over again.

One reason why it does is that the error term in the regression models used is thought of as representing the effect of the variables that were omitted from the models. The error term is treated as a 'catch-all' standing in for omitted content, included to 'save' the assumed deterministic relation between the other random variables in the model. Error terms are usually assumed to be orthogonal to (uncorrelated with) the explanatory variables. But since error terms are unobservable, that assumption is impossible to test empirically. And without justification for the orthogonality assumption, there is, as a rule, nothing to ensure identifiability:
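A minimal simulation (with made-up coefficients) makes the point concrete: when an omitted variable drives both the regressor and the outcome, it is swept into the error term, the orthogonality assumption fails, and the OLS estimate is biased away from the true effect.

```python
import numpy as np

# Hypothetical illustration of omitted-variable bias. A confounder z
# affects both x and y; leaving z out puts it in the error term, which
# then correlates with x, violating the orthogonality assumption.
rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                       # omitted confounder
x = 0.8 * z + rng.normal(size=n)             # regressor, correlated with z
y = 2.0 * x + 1.5 * z + rng.normal(size=n)   # true effect of x on y is 2.0

# OLS slope of y on x alone: z is absorbed into the unobserved error
beta_hat = np.cov(x, y)[0, 1] / np.var(x)
print(f"true beta: 2.0, OLS estimate: {beta_hat:.2f}")
```

With these (assumed) numbers, the estimate converges to roughly 2.73 rather than 2.0, and no amount of data fixes it: the bias comes from the unverifiable assumption, not from sampling noise.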

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, "because I say so" seems like a pretty convincing answer to any question about its properties.

Nowadays it has almost become a self-evident truism among economists that no one will take your arguments seriously unless they are based on or backed up by advanced econometric modelling. So legions of mathematical-statistical theorems are proved, and heaps of fiction are produced, masquerading as science. The rigour of the econometric modelling is frequently not matched by empirical support for the far-reaching assumptions it is built on.

Econometrics is basically a deductive method. Given the assumptions, it delivers deductive inferences. The problem, of course, is that we almost never know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics.

Econometrics doesn’t establish the truth value of facts. Never has. Never will.

## 1 Comment


Another problem with error terms is that the standard error is often not even reported. GDP should have a confidence interval. When you scale another variable such as national debt or the budget by GDP, the errors propagate and increase. The IMF class I took often scaled variables by GDP that were already scaled by GDP. The IMF spreadsheets did not report or carry error terms. The reason, of course, is that the statistics the IMF uses to tell a story about austerity would have error margins so wide, you could easily tell other, contradictory stories.

Comment by Robert S Mitchell, 22 Aug 2019
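The propagation the commenter describes can be sketched with entirely made-up figures (nothing here comes from actual IMF data): if both debt and GDP carry measurement error, the relative errors of the ratio combine roughly in quadrature, so debt/GDP is noisier than either input.

```python
import numpy as np

# Hypothetical Monte Carlo: assumed 2% relative error on GDP and 3% on
# debt. Dividing one noisy quantity by another compounds the uncertainty.
rng = np.random.default_rng(1)
n = 1_000_000

gdp  = rng.normal(20_000, 400, n)   # assumed GDP, 2% standard error
debt = rng.normal(21_000, 630, n)   # assumed debt, 3% standard error

ratio = debt / gdp
rel_se = ratio.std() / ratio.mean()

# Relative errors add approximately in quadrature:
# sqrt(0.02**2 + 0.03**2) ≈ 0.036, i.e. about 3.6%
print(f"relative SE of debt/GDP: {rel_se:.3f}")
```

Rescaling an already-scaled variable by GDP again adds another error contribution under the same rule, which is why unreported error terms can widen so quickly.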