## Econometrics — science based on whimsical assumptions

22 Apr, 2021 at 14:36 | Posted in Statistics & Econometrics | 3 Comments

It is often said that the error term in a regression equation represents the effect of the variables that were omitted from the equation. This is unsatisfactory …

There is no easy way out of the difficulty. The conventional interpretation for error terms needs to be reconsidered. At a minimum, something like this would need to be said:

The error term represents the combined effect of the omitted variables, assuming that

(i) the combined effect of the omitted variables is independent of each variable included in the equation,

(ii) the combined effect of the omitted variables is independent across subjects,

(iii) the combined effect of the omitted variables has expectation 0.

This is distinctly harder to swallow.

Yes, indeed, that *is* harder to swallow.

Those conditions on the error term actually mean that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified.

But that is actually impossible to fully manage in reality!

The theories we work with when building our econometric regression models are insufficient. No matter what we study, some variables are always missing, and we do not know the correct functional form for the relationships between the variables (usually we just *assume* linearity).
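To make the omitted-variables point concrete, here is a minimal simulation — hypothetical data and coefficients of my own choosing, plain NumPy, not from the post — in which an omitted variable `z` is correlated with the included regressor `x`, so condition (i) above fails and OLS on `x` alone is biased:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# z is a relevant variable the analyst fails to include in the model.
z = rng.normal(size=n)
# x is the included regressor, and it is correlated with z.
x = 0.8 * z + rng.normal(size=n)
# True data-generating process: the coefficient on x is 1.0.
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

# Misspecified regression of y on x alone (with intercept): z is pushed
# into the error term, which is now correlated with x.
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

print(beta_hat[1])  # well above the true value 1.0: omitted-variable bias
```

Note that more data does not help here: the gap between the estimate and the true coefficient is not sampling error but bias, and it persists no matter how large `n` becomes.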

*Every* regression model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with their own specification and ‘parameter’ estimates. No wonder the econometric Holy Grail of consistent and stable parameter values is still nothing but a dream.

In order to draw inferences from data as described by econometric texts, it is necessary to make whimsical assumptions. The professional audience consequently and properly withholds belief until an inference is shown to be adequately insensitive to the choice of assumptions. The haphazard way we individually and collectively study the fragility of inferences leaves most of us unconvinced that any inference is believable. If we are to make effective use of our scarce data resource, it is therefore important that we study fragility in a much more systematic way. If it turns out that almost all inferences from economic data are fragile, I suppose we shall have to revert to our old methods …
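Leamer's fragility point can be sketched in a few lines — hypothetical data and coefficients of my own invention, plain NumPy, not anything from the text: estimate the same coefficient under every defensible choice of controls and look at the range of estimates rather than at any single specification:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Two correlated candidate controls; the analyst cannot know which of them
# "belongs" in the model.
w1 = rng.normal(size=n)
w2 = 0.7 * w1 + rng.normal(size=n)
x = 0.6 * w1 - 0.4 * w2 + rng.normal(size=n)
# True data-generating process: the coefficient on x is 0.5.
y = 0.5 * x + 1.5 * w1 - 1.0 * w2 + rng.normal(size=n)

def ols_coef_on_x(controls):
    """OLS of y on (intercept, x, controls); return the coefficient on x."""
    X = np.column_stack([np.ones(n), x] + list(controls))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# All four specifications: no controls, w1 only, w2 only, both.
estimates = [ols_coef_on_x(sub)
             for r in range(3)
             for sub in itertools.combinations([w1, w2], r)]

print(f"coefficient on x ranges from {min(estimates):.2f} to {max(estimates):.2f}")
```

The spread of estimates across these equally "reasonable" specifications is far wider than any single specification's standard error suggests — which is exactly the fragility Leamer argues should be studied systematically rather than papered over.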

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. Parameter values estimated in specific spatio-temporal contexts are *presupposed* to be exportable to totally different contexts. To warrant this assumption, however, one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no ground other than hope itself.

Real-world social systems are not governed by stable causal mechanisms or capacities. As Keynes noticed when he launched his attack on econometrics and inferential statistics as early as the 1920s:

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of laws and relations that econom(etr)ics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most of the contemporary endeavours of economic theoretical modelling – rather useless.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals.

However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions … Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious … In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even close to being met in reality. Making outlandish statistical assumptions does not provide solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most widely used quantitative methods in the social sciences and economics today, it is still a fact that most of the inferences made from them are invalid.

## 3 Comments


I read for Freedman in the early 1970s, and he was a major influence in my development. So was Leamer; I highly recommend his 1978 book *Specification Searches*, available as a free download at https://www.anderson.ucla.edu/faculty_pages/edward.leamer/books/specification_searches.htm – while technically a bit dated, its overall views remain solid, and I highly recommend it to everyone interested in statistical modeling and its limits.

Comment by Sander Greenland— 22 Apr, 2021 #

Freedman and Leamer are two of my statistics penates. Indeed highly recommended reading!

Comment by Lars Syll— 22 Apr, 2021 #

Leamer may have taken the “con” out of econometrics; however, does the “tric” still remain?

Comment by Henry Rech— 22 Apr, 2021 #