Econometric forecasting and mathematical ‘rigour’

21 December, 2016 at 20:32 | Posted in Statistics & Econometrics | 3 Comments

There have been over four decades of econometric research on business cycles … The formalization has undeniably improved the scientific strength of business cycle measures …

But the significance of the formalization becomes more difficult to identify when it is assessed from the applied perspective, especially when the success rate in ex-ante forecasts of recessions is used as a key criterion. The fact that the onset of the 2008 financial-crisis-triggered recession was predicted by only a few ‘Wise Owls’ … while missed by regular forecasters armed with various models serves us as the latest warning that the efficiency of the formalization might be far from optimal. Remarkably, not only has the performance of time-series data-driven econometric models been off the track this time, so has that of the whole bunch of theory-rich macro dynamic models developed in the wake of the rational expectations movement, which derived its fame mainly from exploiting the forecast failures of the macro-econometric models of the mid-1970s recession …

These observations indicate … that econometric methods are limited by their statistical approach in analysing and forecasting business cycles, and moreover, that the explanatory power of generalised and established theoretical relationships is highly limited when applied to particular economies during particular periods alone, that is, if none of the local and institution-specific factors are taken into serious consideration …

The wide conviction of the superiority of the methods of the science has converted the econometric community largely to a group of fundamentalist guards of mathematical rigour … So much so that the relevance of the research to business cycles is reduced to empirical illustrations. To that extent, probabilistic formalisation has trapped econometric business cycle research in the pursuit of means at the expense of ends.

The limits of econometric forecasting have, as noted by Qin, been critically pointed out many times before. Trygve Haavelmo — with the completion (in 1958) of the twenty-fifth volume of Econometrica — assessed the role of econometrics in the advancement of economics, and although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair:

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.
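Haavelmo’s ‘spurious correlations’ are easy to reproduce. The following minimal simulation sketch (my illustration, not Haavelmo’s; the sample size and seed are arbitrary) shows the classic result of Granger and Newbold: regress one random walk on an independently generated second one, and ordinary least squares will typically report a sizeable R² and a ‘significant’ slope, even though the two series are unrelated by construction.

```python
import numpy as np

# Spurious regression in the spirit of Granger & Newbold (1974):
# two independent random walks, unrelated by construction.
rng = np.random.default_rng(0)
n = 200
x = np.cumsum(rng.normal(size=n))  # random walk no. 1
y = np.cumsum(rng.normal(size=n))  # random walk no. 2, independent of x

# Ordinary least squares of y on x with an intercept.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Conventional standard error of the slope, i.e. the one that assumes
# well-behaved i.i.d. errors -- exactly what these data violate.
s2 = (resid @ resid) / (n - 2)
se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
print(f"R^2 = {r2:.2f}, slope t-statistic = {beta[1] / se_slope:.1f}")
# Typically a large R^2 and a |t| far above 2, although x and y have
# no causal or statistical connection whatsoever.
```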

And Ragnar Frisch also shared some of Haavelmo’s doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if it is applied in macroeconomic decision models where the line of thought is the following: “If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created”.

Ragnar Frisch

Maintaining that economics is a science in the ‘true knowledge’ business, I remain a skeptic of the pretences and aspirations of econometrics. The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that already Keynes complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that the legions of probabilistic econometricians who consider it ‘fruitful to believe’ in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population are skating on thin ice.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
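What the absence of such stability does to forecasting can be shown with an equally minimal sketch (again my own illustration; the mid-sample break point, the two slope values and the noise level are arbitrary assumptions standing in for a structural break of the 2008 kind). A fixed-parameter regression estimated on the first regime fits the second one not just imprecisely but with the wrong sign:

```python
import numpy as np

# A data-generating process whose slope shifts mid-sample -- an
# illustrative stand-in for a structural break of the 2008 kind.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
slope = np.where(np.arange(n) < n // 2, 2.0, -1.0)  # regime change
y = slope * x + rng.normal(scale=0.5, size=n)

# 'Fixed-parameter' OLS fits: first regime, second regime, full sample.
b_early = np.polyfit(x[: n // 2], y[: n // 2], 1)[0]
b_late = np.polyfit(x[n // 2 :], y[n // 2 :], 1)[0]
b_full = np.polyfit(x, y, 1)[0]
print(f"early {b_early:+.2f}  late {b_late:+.2f}  full-sample {b_full:+.2f}")

# 'Forecasting' the second regime with the parameter estimated on the
# first regime, versus using the true second-regime slope.
mse_frozen = np.mean((y[n // 2 :] - b_early * x[n // 2 :]) ** 2)
mse_true = np.mean((y[n // 2 :] - (-1.0) * x[n // 2 :]) ** 2)
print(f"forecast MSE: {mse_frozen:.2f} with the frozen parameter, "
      f"{mse_true:.2f} under the true second-regime slope")
```

Rolling-window or ‘time-varying parameter’ estimates would of course pick up the drift after the fact, but, as the reply in the comments below notes, relabelling the instability does not restore the invariance that simulation and policy analysis presuppose.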


3 Comments


  1. Formal cause in economics is an effective theory, as used in many modern sciences, and no entailing conditions can be explicitly provided for the effects. In notation: ⊢ economic effects. Economic effects are reflected in economic accounting systems.

    https://en.wikipedia.org/wiki/Effective_theory

    “In science, an effective theory is a scientific theory which proposes to describe a certain set of observations, but explicitly without the claim or implication that the mechanism employed in the theory has a direct counterpart in the actual causes of the observed phenomena to which the theory is fitted. I.e. the theory proposes to model a certain effect, without proposing to adequately model any of the causes which contribute to the effect”

  2. Isn’t the view you are critiquing stronger, namely that it is “ ‘fruitful to believe’ in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of [a stationary] imaginary population”?

    And isn’t it obvious that the nature of economies now affects the range of possible economies next year? So in a sense, aren’t they assuming a single economy for all time? For example, prior to 2008 weren’t they simply claiming that as long as the music kept playing, there would be no crash?

    • You’re right, but even in its weaker form the view is wrong.
      The parameter invariance that Frisch in the 1930s envisioned as a precondition for the existence of ‘structural parameters’ was later essentially replaced by a less demanding ‘exogeneity’ condition. But the basic logical and epistemological problem remains — randomisation of parameter estimates or models with ‘time-varying’ parameters are of little significance when it comes to simulation and policy matters.

