Econometrics — a critical-realist perspective

6 Apr, 2021 at 20:39 | Posted in Statistics & Econometrics | 11 Comments

Mainstream economists often hold the view that criticisms of econometrics are the conclusions of sadly misinformed and misguided people who dislike econometrics and do not understand much of it. This is a gross misapprehension. To be careful and cautious is not equivalent to dislike.

The ordinary deductivist ‘textbook approach’ to econometrics views the modelling process foremost as an estimation problem, since one (at least implicitly) assumes that the model provided by economic theory is a well-specified and ‘true’ model. The more empiricist general-to-specific methodology (often identified as the ‘LSE approach’), on the other hand, views models as theoretically and empirically adequate representations (approximations) of a data generating process (DGP). Diagnostic tests (mostly some variant of the F-test) are used to ensure that the models are ‘true’ – or at least ‘congruent’ – representations of the DGP. The modelling process is here seen more as a specification problem, where poor diagnostic results may indicate a possible misspecification requiring re-specification of the model. The standard objective is to identify models that are structurally stable and valid across a large time-space horizon. The DGP is not seen as something we already know, but rather as something we discover in the process of modelling it. Considerable effort is put into testing to what extent the models are structurally stable and generalizable over space and time.
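To make the diagnostic idea concrete, here is a minimal sketch (in Python, on simulated data; all parameter values and sample sizes are invented for illustration) of a Chow test, one common F-test variant used to check whether a model’s parameters are structurally stable across subsamples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data with a structural break: the slope shifts mid-sample,
# so a single fixed-parameter model is misspecified.
n = 200
x = rng.normal(size=n)
beta = np.where(np.arange(n) < n // 2, 1.0, 2.0)  # slope changes from 1 to 2
y = beta * x + rng.normal(scale=0.5, size=n)

def ols_rss(x, y):
    """Residual sum of squares from OLS of y on [1, x]."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(resid @ resid)

# Chow test: compare the pooled fit with separate fits on each subsample.
k = 2  # parameters per regression (intercept, slope)
rss_pooled = ols_rss(x, y)
rss_split = ols_rss(x[: n // 2], y[: n // 2]) + ols_rss(x[n // 2:], y[n // 2:])
F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
p = stats.f.sf(F, k, n - 2 * k)
print(f"Chow F = {F:.1f}, p = {p:.2g}")  # a small p flags structural instability
```

Here a small p-value correctly flags the simulated break; the epistemological point of the post, though, is that a passed diagnostic does not by itself license the assumption that the underlying DGP is invariant.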

Although yours truly has some sympathy for this approach in general, there are still some unsolved ‘problematics’ with its epistemological and ontological presuppositions. There is, e.g., an implicit assumption that the DGP fundamentally has an invariant property and that models that are structurally unstable just have not been able to get hold of that invariance. But one cannot just presuppose or take for granted that kind of invariance. It has to be argued and justified. Grounds have to be given for viewing reality as satisfying conditions of model-closure. It is as if the lack of closure that shows up in the form of structurally unstable models could somehow be solved by searching for more autonomous and invariable ‘atomic uniformity.’ But whether reality is ‘congruent’ with this analytical prerequisite has to be argued for, not simply taken for granted.

A great many models are compatible with what we know in economics — that is to say, do not violate any matters on which economists are agreed. Attractive as this view is, it fails to draw a necessary distinction between what is assumed and what is merely proposed as hypothesis. This distinction is forced upon us by an obvious but neglected fact of statistical theory: the matters ‘assumed’ are put wholly beyond test, and the entire edifice of conclusions (e.g., about identifiability, optimum properties of the estimates, their sampling distributions, etc.) depends absolutely on the validity of these assumptions. The great merit of modern statistical inference is that it makes exact and efficient use of what we know about reality to forge new tools of discovery, but it teaches us painfully little about the efficacy of these tools when their basis of assumptions is not satisfied. 

Millard Hastay

Even granted that closures come in degrees, we should not compromise on ontology. Some methods simply introduce improper closures, closures that make the disjuncture between models and real-world target systems inappropriately large. ‘Garbage in, garbage out.’

Underlying the search for these immutable ‘fundamentals’ is the implicit view of the world as consisting of entities with their own separate and invariable effects. These entities are thought of as treatable as separate and addible causes, thereby making it possible to infer complex interaction from a knowledge of individual constituents with limited independent variety. But, again, whether this is a justified analytical procedure cannot be answered without confronting it with the nature of the objects the models are supposed to describe, explain or predict. Keynes thought it generally inappropriate to apply the ‘atomic hypothesis’ to such an open and ‘organic entity’ as the real world. As far as I can see, these are still appropriate strictures that all econometric approaches have to face. Grounds for believing otherwise have to be provided by the econometricians.

Trygve Haavelmo, the father of modern probabilistic econometrics, wrote (in ‘Statistical testing of business-cycle theories’, The Review of Economics and Statistics, 1943) that he and other econometricians could not build a complete bridge between our models and reality by logical operations alone, but finally had to make “a non-logical jump” [1943:15]. A part of that jump consisted in that econometricians “like to believe … that the various a priori possible sequences would somehow cluster around some typical time shapes, which if we knew them, could be used for prediction” [1943:16]. But since we do not know the true distribution, one has to look for the mechanisms (processes) that “might rule the data” and that hopefully persist so that predictions may be made. Of possible hypotheses on different time sequences (“samples” in Haavelmo’s somewhat idiosyncratic vocabulary) most had to be ruled out a priori “by economic theory”, although “one shall always remain in doubt as to the possibility of some … outside hypothesis being the true one” [1943:18].

To Haavelmo and his modern followers, econometrics is not really in the truth business. The explanations we can give of economic relations and structures based on econometric models are “not hidden truths to be discovered” but rather our own “artificial inventions”. Models are consequently perceived not as true representations of the DGP, but rather as instrumentally conceived “as if”-constructs. Their ‘intrinsic closure’ is realized by searching for parameters showing “a great degree of invariance” or relative autonomy, and the ‘extrinsic closure’ by hoping that the ‘practically decisive’ explanatory variables are relatively few, so that one may proceed (as he formulates it in ‘The probability approach in econometrics’, Supplement to Econometrica, 1944) “as if … natural limitations of the number of relevant factors exist” [Haavelmo 1944:29].

Haavelmo seems to believe that persistence and autonomy can only be found at the level of the individual, since individual agents are seen as the ultimate determinants of the variables in the economic system.

But why the ‘logically conceivable’ really should turn out to be the case is difficult to see, at least if we are not satisfied with sheer hope. Unargued-for and unjustified assumptions that the complex structures of an open system are reducible to those of individuals do not suffice. In real economies, it is unlikely that we will find many ‘autonomous’ relations and events. And one could, of course, also raise the objection that invoking a probabilistic approach to econometrics presupposes, e.g., that we are able to describe the world in terms of risk rather than genuine uncertainty.

And that is exactly what Haavelmo [1944:48] does: “To make this a rational problem of statistical inference we have to start out by an axiom, postulating that every set of observable variables has associated with it one particular ‘true’, but unknown, probability law.”

But using this “trick of our own” and just assigning “a certain probability law to a system of observable variables” cannot – any more than pure hope – build a firm bridge between model and reality. Treating phenomena as if they essentially were stochastic processes is not the same as showing that they essentially are stochastic processes.

Rigour and elegance in the analysis do not make up for the gap between reality and model. It is the distribution of the phenomena in itself and not its estimation that ought to be at the centre of the stage. A crucial ingredient to any economic theory that wants to use probabilistic models should be a convincing argument for the view that “there can be no harm in considering economic variables as stochastic variables” [Haavelmo 1943:13]. In most cases, no such arguments are given.

We have to accept that reality has no ‘correct’ representation in an economic or econometric model. There is no such thing as a ‘true’ model that can capture an open, complex and contextual system in a set of equations with parameters stable over space and time, and exhibiting invariant regularities. To just ‘believe,’ ‘hope,’ or ‘assume’ that such a model could possibly exist is not enough. It has to be justified in relation to the ontological conditions of social reality.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed parameter models and that parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
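The exportability problem can be illustrated with a toy sketch (Python, simulated data; the two ‘contexts’ and all parameter values are invented for illustration): a model estimated in one spatio-temporal context mispredicts badly when its parameters are carried over to a context where the causal relation has shifted.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols(x, y):
    """OLS coefficients of y on [1, x]."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def rmse(coef, x, y):
    """Root-mean-square prediction error of the fitted line on (x, y)."""
    X = np.column_stack([np.ones_like(x), x])
    return float(np.sqrt(np.mean((y - X @ coef) ** 2)))

# Contexts A and B share the model form y = a + b*x + noise,
# but the supposedly 'deep' parameter b differs (1.0 vs 2.0): no invariance.
xa = rng.normal(size=500)
ya = 1.0 * xa + rng.normal(scale=0.3, size=500)
xb = rng.normal(size=500)
yb = 2.0 * xb + rng.normal(scale=0.3, size=500)

coef_a = ols(xa, ya)                     # estimated in context A ...
rmse_exported = rmse(coef_a, xb, yb)     # ... then exported to context B
rmse_refit = rmse(ols(xb, yb), xb, yb)   # benchmark: model refit inside B

print(f"exported: {rmse_exported:.2f}  refit: {rmse_refit:.2f}")
```

The exported parameters produce a prediction error several times larger than the in-context benchmark, which is exactly the kind of bridging failure the paragraph above describes: without established invariance, an estimate does not keep its parametric status outside the context in which it was obtained.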


  1. @skippy:
    Show me any 80s economist who did not think a $28 trillion US national debt would surely result in unsustainable borrowing costs or hyperinflation or both. Economists are truly monolithic about inflation and/or budget constraints.
    Also, regulators such as Bill Black are not qualified to judge the social utility of Total Return Swaps, because they are clueless about ways to hedge the counterparty risk. The bank lenders will learn to hedge that risk. Why didn’t they use variance swaps to hedge against a sudden rise in volatility, instead of fire sales?
    @ Kingsley Lewis:
    Economic models do harm by assuming constraints such as inflation or borrowing costs, which have failed to appear despite a consensus among all models that they should have manifested long before now, given the massive surge in government borrowing.
    We could easily finance and inflation-proof a universal basic income today without raising taxes, but economists still stubbornly and uniformly stick to disproven models like epicyclists ignoring discrepancies in Mercury’s orbit …

    • Non sequitur.

  2. @rsm
    Yes, long range econometric forecasts, like weather forecasts, are never perfectly accurate.
    Even so, they are far more accurate and useful than random numbers.
    And they are far more accurate and useful than forecasts of “deep reality” by critical realist philosophers. The latter don’t even exist.

    • I think this example conflicts with your opinion.

      • Skippy,
        Your link is concerned with financial leverage and fraud. It says nothing about critical realism or econometrics.
        Maybe you intended to place your comment somewhere else?

        • What do you think informs regulatory frameworks or the views [actions] of the market participants E.g. hard to imagine how each of the prime brokers could have believed they were the primary lender, what is the financial & social utility of total return swaps? – all it does is hide risk.


          It was not misplaced, referred to Prof Black as an example in this specific case due to his experience and knowledge from an applied perspective contra to a subjective example.


          I would point out your previous comment use of the term “econometric forecasts” and remind that said numbers are symbolism for subjective philosophers opines. In this latter I would suggest the book ‘Weapons of Math Destruction’ and how they are opaque and difficult to contest with some. Then there is the issues with ingrained bias, per se rational agent et al.


          I would argue that this is a multidisciplinary problem set and econometrics too narrow and blinkered to rely on, not that past metrics were taken out of their authors’ intent or abused – see VaR in reference to abuse.

  3. Businesses, governments and households have to make vitally important decisions about expenditures, prices, work, etc.
    Here is Prof. Syll’s advice to them from this post and previous posts:-

    – Don’t listen to “experts”. Economies can’t be usefully represented by oversimplified models. Real world societies are the product of complex non-linear relationships between multiple factors many of which cannot be quantified.
    – Don’t “follow the science”. Don’t try to learn from historical or geographical data or experience. There are no stable causal relations between variables. Social relationships change over time and differ between societies, so data from particular times or places are unreliable as a guide for other times or locations.
    – Don’t base any decisions on the likelihood or probabilities of data or outcomes. This is just a “trick”.
    – Don’t look at any forecasts. The world is dominated by fundamental uncertainty so we just don’t know anything about the future.
    According to Prof. Syll: “Econometrics is not really in the truth business”.
    Instead, we must follow the ontology of heterodox critical realist philosophers to find “truth”.
    – Enlightened critical realist sages can discern the mysterious “real structures and mechanisms” which lie behind observable data. Knowledge of this “deep reality” is essential “if we really want to understand, explain and (possibly) predict things in the real world.”
    – Instead of using inductive empirical methods and statistical inference, we must use “Inference to the Best Alternative”. This approach has the virtue of no clear criteria or information requirements.
    Several centuries ago a wise comment was made which applies to Prof. Syll’s advice:
    “Spirits of a higher rank than those immersed in flesh may have clear ideas of the radical constitution of substances … but the manner how they come by that knowledge exceeds our conceptions.”
    – John Locke, 1689: An Essay concerning Human Understanding, Book III, chapter XI, §23

    • I confess I sometimes feel exasperation at the way Prof Syll distills his critique down to an equivalent of the quip, “nobody knows nuthin’ in this business”, which is unnecessarily nihilist in my view, and maybe misleadingly so.
      I take epistemology seriously, as parent to critical methods. Economics really ought to be much more interested, not simply as a matter of its own methodology, but as a core object of scrutiny. The economy, as a system of production and exchange, is affected in every detail of every transaction and relationship by the distribution of knowledge and risk, and economic theorists just barely acknowledge daintily a few isolated instances or examples of how asymmetric information, say, figures in strategic behavior expressed in abstract terms, never departing far from the Chicago model of everyone acting as if they know now all they will ever need to know.
      Uncertainty is pervasive. It shapes everything about the structure of the economy — adapting to uncertainty is why the economy has a persistent structure of massive commitment to sunk cost investments. My perception is that mainstream economists just ignore this, like they completely ignore most aspects of the actual economy. “Market economy” is their term and model for an economy with very few actual markets; thinking that price formation happens in market bidding, when most prices are administered is an act of wilful ignorance akin to madness in any other profession.
      That econometrics is most often just b.s. is purely incidental when job 1 is agnotology to feed the idiocracy.

      • Wellie in that paradigm the point of binary exchange determines utility, value, price, and most important of all ***INFORMATION*** which can be used by others to define reality and rationally[tm] position themselves accordingly E.g. those that lose are ***IRRATIONAL*** and serve as a behavioral conditioning tool to move the unwashed too light[tm] ….

        I mean its not unlike in antiquity where special stuff could be obtained by the unwashed with a token put in a slot or doors magically opened by the wave of the hand and some words spoken whilst steam physics obeyed their command.

        So I recon power/capital loathes uncertainty and makes sure they don’t experience it as a group, but have no issues with others, bargaining table antics aside between factions.

    • “Businesses, governments and households have to make vitally important decisions about expenditures, prices, work, etc.”
      What 1980s economists predicted that, by today, US debt would be 1000% higher while borrowing costs fell 90% and inflation remained below 2%?
      Why should we listen to economic predictions today?

      • Wellie …. quasi monetarism has a dark side you say, 80s Rubinomics, wages and productivity diverging, wages are inflationary ev’bal so ship Mfg offshore and recycle cheap external labour price back for feel good consumerist credit* fueled balance sheet flows, which pump equity prices for admin you say … those economists?

        Economics is not a monolith, have a care.
