Econometrics — science based on unwarranted assumptions

15 Jul, 2021 at 11:52 | Posted in Statistics & Econometrics | 1 Comment

There is first of all the central question of methodology — the logic of applying the method of multiple correlation to unanalysed economic material, which we know to be non-homogeneous through time. If we are dealing with the action of numerically measurable, independent forces, adequately analysed so that we were dealing with independent atomic factors and between them completely comprehensive, acting with fluctuating relative strength on material constant and homogeneous through time, we might be able to use the method of multiple correlation with some confidence for disentangling the laws of their action … In fact we know that every one of these conditions is far from being satisfied by the economic material under investigation.

Letter from John Maynard Keynes to Royall Tyler (1938)

Mainstream economists often hold the view that criticisms of econometrics are the conclusions of sadly misinformed and misguided people who dislike and do not understand much of it. This is a gross misapprehension. To be careful and cautious is not equivalent to dislike.

The ordinary deductivist ‘textbook approach’ to econometrics views the modelling process foremost as an estimation problem, since one (at least implicitly) assumes that the model provided by economic theory is a well-specified and ‘true’ model. The more empiricist general-to-specific methodology (often identified as the ‘LSE approach’), on the other hand, views models as theoretically and empirically adequate representations (approximations) of a data generating process (DGP). Diagnostic tests (mostly some variant of the F-test) are used to ensure that the models are ‘true’ – or at least ‘congruent’ – representations of the DGP. The modelling process is here seen more as a specification problem, where poor diagnostic results may indicate a possible misspecification requiring re-specification of the model. The objective is standardly to identify models that are structurally stable and valid across a large time-space horizon. The DGP is not seen as something we already know, but rather as something we discover in the process of modelling it. Considerable effort is put into testing the extent to which the models are structurally stable and generalizable over space and time.
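To make the notion of a structural-stability test concrete, here is a minimal sketch in Python. It is illustrative only (the data are simulated and all numbers are invented, not taken from the LSE literature): it computes a simple Chow-type F-statistic, comparing a pooled regression against separate fits on two sub-samples, for a stable and for an unstable DGP.

```python
import numpy as np

def ols_rss(X, y):
    """Residual sum of squares from a least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def chow_statistic(X, y, split):
    """Chow F-statistic for a structural break at observation `split`."""
    n, k = X.shape
    rss_pooled = ols_rss(X, y)
    rss_split = ols_rss(X[:split], y[:split]) + ols_rss(X[split:], y[split:])
    return ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# A 'stable' DGP: intercept 1.0 and slope 0.5 throughout the sample.
y_stable = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=n)

# An 'unstable' DGP: the slope shifts from 0.5 to 2.0 halfway through.
slope = np.where(np.arange(n) < n // 2, 0.5, 2.0)
y_break = 1.0 + slope * x + rng.normal(scale=0.3, size=n)

print(chow_statistic(X, y_stable, n // 2))  # small: no sign of a break
print(chow_statistic(X, y_break, n // 2))   # large: parameters are not stable
```

Note that the test can only flag instability relative to a candidate break point inside the observed sample; it cannot certify that the estimated parameters will remain invariant outside it, which is precisely the point at issue below.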

Although yours truly has some sympathy for this approach in general, there are still some unsolved problems with its epistemological and ontological presuppositions. There is, e.g., an implicit assumption that the DGP fundamentally has an invariant property, and that structurally unstable models simply have not been able to get hold of that invariance. But one cannot just presuppose or take for granted that kind of invariance. It has to be argued for and justified. Grounds have to be given for viewing reality as satisfying conditions of model-closure. It is as if the lack of closure that shows up in the form of structurally unstable models could somehow be solved by searching for more autonomous and invariable ‘atomic uniformity.’ But whether reality is ‘congruent’ with this analytical prerequisite has to be argued for, not simply taken for granted.

A great many models are compatible with what we know in economics — that is to say, do not violate any matters on which economists are agreed. Attractive as this view is, it fails to draw a necessary distinction between what is assumed and what is merely proposed as hypothesis. This distinction is forced upon us by an obvious but neglected fact of statistical theory: the matters ‘assumed’ are put wholly beyond test, and the entire edifice of conclusions (e.g., about identifiability, optimum properties of the estimates, their sampling distributions, etc.) depends absolutely on the validity of these assumptions. The great merit of modern statistical inference is that it makes exact and efficient use of what we know about reality to forge new tools of discovery, but it teaches us painfully little about the efficacy of these tools when their basis of assumptions is not satisfied. 

Millard Hastay

Even granted that closures come in degrees, we should not compromise on ontology. Some methods simply introduce improper closures, closures that make the disjuncture between models and real-world target systems inappropriately large. ‘Garbage in, garbage out.’

Underlying the search for these immutable ‘fundamentals’ is the implicit view of the world as consisting of entities with their own separate and invariable effects. These entities are thought of as separate and additive causes, thereby making it possible to infer complex interactions from a knowledge of individual constituents with limited independent variety. But, again, whether this is a justified analytical procedure cannot be answered without confronting it with the nature of the objects the models are supposed to describe, explain or predict. Keynes thought it generally inappropriate to apply the ‘atomic hypothesis’ to such an open and ‘organic entity’ as the real world. As far as I can see, these are still appropriate strictures that all econometric approaches have to face. Grounds for believing otherwise have to be provided by the econometricians.

Trygve Haavelmo, the father of modern probabilistic econometrics, wrote (in ‘Statistical testing of business-cycle theories’, The Review of  Economics and Statistics, 1943) that he and other econometricians could not build a complete bridge between our models and reality by logical operations alone, but finally had to make “a non-logical jump” [1943:15]. A part of that jump consisted in that econometricians “like to believe … that the various a priori possible sequences would somehow cluster around some typical time shapes, which if we knew them, could be used for prediction” [1943:16]. But since we do not know the true distribution, one has to look for the mechanisms (processes) that “might rule the data” and that hopefully persist so that predictions may be made. Of possible hypotheses on different time sequences (“samples” in Haavelmo’s somewhat idiosyncratic vocabulary) most had to be ruled out a priori “by economic theory”, although “one shall always remain in doubt as to the possibility of some … outside hypothesis being the true one” [1943:18].

To Haavelmo and his modern followers, econometrics is not really in the truth business. The explanations we can give of economic relations and structures based on econometric models are “not hidden truths to be discovered” but rather our own “artificial inventions”. Models are consequently perceived not as true representations of DGP, but rather instrumentally conceived “as if”-constructs. Their ‘intrinsic closure’ is realized by searching for parameters showing “a great degree of invariance” or relative autonomy and the ‘extrinsic closure’ by hoping that the ‘practically decisive’ explanatory variables are relatively few, so that one may proceed (as he formulates it in ‘The probability approach in econometrics’, Supplement to Econometrica, 1944) “as if … natural limitations of the number of relevant factors exist” [Haavelmo 1944:29].

Haavelmo seems to believe that persistence and autonomy can only be found at the level of the individual, since individual agents are seen as the ultimate determinants of the variables in the economic system.

But why the ‘logically conceivable’ really should turn out to be the case is difficult to see, at least if we are not satisfied with sheer hope. Unargued-for and unjustified assumptions that complex structures in an open system are reducible to those of individuals do not suffice. In real economies, it is unlikely that we find many ‘autonomous’ relations and events. And one could, of course, also raise the objection that invoking a probabilistic approach to econometrics presupposes, e.g., that we are able to describe the world in terms of risk rather than genuine uncertainty.

And that is exactly what Haavelmo [1944:48] does: “To make this a rational problem of statistical inference we have to start out by an axiom, postulating that every set of observable variables has associated with it one particular ‘true’, but unknown, probability law.”

But using this “trick of our own” and just assigning “a certain probability law to a system of observable variables” can no more build a firm bridge between model and reality than pure hope can. Treating phenomena as if they essentially were stochastic processes is not the same as showing that they essentially are stochastic processes.
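The gap between ‘as if stochastic’ and ‘is stochastic’ can be illustrated with a toy example. The sketch below is hypothetical and uses the logistic map rather than any economic series: it generates a perfectly deterministic sequence that nevertheless looks, to the linear summary statistics a modeller might inspect, much like white noise.

```python
import numpy as np

def logistic_series(x0, n):
    """Iterate the (fully deterministic) logistic map x -> 4x(1 - x)."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        out[i] = x
    return out

series = logistic_series(0.123, 10_000)

# Linear summary statistics resemble those of a white-noise process:
print(series.mean())                               # close to 0.5
print(series.var())                                # close to 0.125
print(np.corrcoef(series[:-1], series[1:])[0, 1])  # close to 0

# Yet every value is an exact function of its predecessor: zero 'innovation'.
pred = 4.0 * series[:-1] * (1.0 - series[:-1])
print(np.max(np.abs(pred - series[1:])))           # exactly 0.0
```

Assigning a probability law to such a series is perfectly workable as a modelling device, which is exactly the point: the statistical treatment succeeds whether or not the phenomenon is genuinely stochastic, so its success cannot establish that it is.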

Rigour and elegance in the analysis do not make up for the gap between reality and model. It is the distribution of the phenomena in itself and not its estimation that ought to be at the centre of the stage. A crucial ingredient to any economic theory that wants to use probabilistic models should be a convincing argument for the view that “there can be no harm in considering economic variables as stochastic variables” [Haavelmo 1943:13]. In most cases, no such arguments are given.

We have to accept that reality has no ‘correct’ representation in an economic or econometric model. There is no such thing as a ‘true’ model that can capture an open, complex and contextual system in a set of equations with parameters stable over space and time, and exhibiting invariant regularities. To just ‘believe,’ ‘hope,’ or ‘assume’ that such a model could possibly exist is not enough. It has to be justified in relation to the ontological conditions of social reality.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed parameter models and that parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
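The export problem is easy to demonstrate in miniature. In the sketch below (simulated data; the two ‘contexts’ and all parameter values are invented for illustration), a slope estimated in one regime is carried over to a regime where the underlying slope has drifted, and its predictive performance collapses relative to a locally estimated one.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n, slope, rng):
    """One 'context': y = slope * x + noise."""
    x = rng.normal(size=n)
    y = slope * x + rng.normal(scale=0.5, size=n)
    return x, y

# 'Home' context: estimate the slope where it really is 1.0.
x0, y0 = simulate(500, 1.0, rng)
beta_hat = (x0 @ y0) / (x0 @ x0)   # OLS slope through the origin

# 'Export' context: the structural slope has drifted to 2.5.
x1, y1 = simulate(500, 2.5, rng)
beta_local = (x1 @ y1) / (x1 @ x1)

mse_exported = np.mean((y1 - beta_hat * x1) ** 2)
mse_local = np.mean((y1 - beta_local * x1) ** 2)
print(beta_hat, mse_exported, mse_local)  # exported parameter does far worse
```

Nothing in the home-context estimation signals the coming failure: the exported parameter was estimated precisely and fit its own sample well. Only the (unwarranted) assumption of invariance across contexts licenses the transfer.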

1 Comment »

  1. This post is a preposterous slander which dismally fails to meet the standards set by Jesus:
    “By the standard you judge you will be judged…Why do you see the speck in your brother’s eye, but fail to see the beam of wood in your own?…You hypocrite! First remove the beam from your own eye, and then you can see clearly to remove the speck from your brother’s eye.”
    – Jesus: Matthew: chapter 7

    Prof. Syll claims that “Econometrics is a science based on unwarranted assumptions”.
    However, it is his own “realist” philosophy (and that of Keynes) which is arrogantly based on unwarranted assumptions.
    In stark contrast, most scientists including econometricians have a far more humble empiricist recognition of the limitations of their knowledge.
    Realists claim that they know about the existence of a “deeper reality” beyond everyday experiences and beyond the knowledge of science.
    For example, this post mentions several claims made by Keynes:
    “we know” that “unanalysed economic material” is “non-homogeneous through time”;
    and “In fact we know that every one of these [alleged assumptions of econometrics] is far from being satisfied”.
    How does Keynes “know” these things?
    Likewise Prof. Syll (in previous posts) claims that:
    “there are structures that are durable and independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it…the main task of science is … to identify and explain the underlying structures/forces/powers/ mechanisms that produce the observed events.”
    There is zero empirical evidence for any of these claims of realists. There is zero physical evidence, zero testimony from visitors to the “deeper reality”, zero communications with its inhabitants, zero information from supernatural beings, and zero valid evidence from dreams, fantasies or UFOs.
    Using their own peculiar language, the theoretical ontological presumptions of Syll, Keynes, Bhaskar and other so-called “realists” lack any “theoretically and empirically adequate” data generating process (DGP), lack “external validity”, and lack any “export licence”.
    These faults also apply to the ‘LSE approach’, which seems to be based on the realist notion that a data generating process (DGP) exists in a “deeper reality” beyond everyday experiences and beyond the knowledge of science. The ‘LSE approach’ tries to find a middle ground between metaphysical critical realism and empiricism, but it ends up in a complicated theoretical muddle.
