Macroeconomic calibration – or why it is difficult for economists to take their own subject seriously (wonkish)

20 Jul, 2012 at 15:27 | Posted in Economics, Theory of Science & Methodology | 7 Comments

There are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few – if any – deserve that regard less than the macroeconomic theory – mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent – called calibration.

In an interview by George Evans and Seppo Honkapohja (Macroeconomic Dynamics 2005, vol. 9) Thomas Sargent says:

Evans and Honkapohja: What were the profession’s most important responses to the Lucas Critique?

Sargent: There were two. The first and most optimistic response was complete rational expectations econometrics. A rational expectations equilibrium is a likelihood function. Maximize it.

Evans and Honkapohja: Why optimistic?

Sargent: You have to believe in your model to use the likelihood function. It provides a coherent way to estimate objects of interest (preferences, technologies, information sets, measurement processes) within the context of a trusted model.

Evans and Honkapohja: What was the second response?

Sargent: Various types of calibration. Calibration is less optimistic about what your theory can accomplish because you would only use it if you didn’t fully trust your entire model, meaning that you think your model is partly misspecified or incompletely specified, or if you trusted someone else’s model and data set more than your own. My recollection is that Bob Lucas and Ed Prescott were initially very enthusiastic about rational expectations econometrics. After all, it simply involved imposing on ourselves the same high standards we had criticized the Keynesians for failing to live up to. But after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models. The idea of calibration is to ignore some of the probabilistic implications of your model but to retain others. Somehow, calibration was intended as a balanced response to professing that your model, although not correct, is still worthy as a vehicle for quantitative policy analysis….

Evans and Honkapohja: Do you think calibration in macroeconomics was an advance?

Sargent: In many ways, yes. I view it as a constructive response to Bob’s remark that “your likelihood ratio tests are rejecting too many good models”. In those days… there was a danger that skeptics and opponents would misread those likelihood ratio tests as rejections of an entire class of models, which of course they were not…. The unstated case for calibration was that it was a way to continue the process of acquiring experience in matching rational expectations models to data by lowering our standards relative to maximum likelihood, and emphasizing those features of the data that our models could capture. Instead of trumpeting their failures in terms of dismal likelihood ratio statistics, celebrate the features that they could capture and focus attention on the next unexplained feature that ought to be explained. One can argue that this was a sensible response… a sequential plan of attack: let’s first devote resources to learning how to create a range of compelling equilibrium models to incorporate interesting mechanisms. We’ll be careful about the estimation in later years when we have mastered the modelling technology…
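The contrast Sargent draws can be made concrete with a toy example. The sketch below is a stylized illustration of my own – the AR(1) process and the particular moment chosen are assumptions, not anything from the interview. It fits the same parameter both ways: by (conditional) maximum likelihood, and by “calibrating” it to reproduce one selected moment of the data while simply ignoring the model’s other probabilistic implications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process y_t = rho * y_{t-1} + eps_t as the "true" economy.
true_rho, sigma, T = 0.8, 1.0, 5000
y = np.zeros(T)
for t in range(1, T):
    y[t] = true_rho * y[t - 1] + sigma * rng.standard_normal()

# Full-information approach: for a Gaussian AR(1) the conditional
# maximum-likelihood estimator of rho coincides with the OLS estimator.
rho_mle = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

# "Calibration" in the moment-matching sense: pick rho so that the model
# reproduces one chosen feature of the data (here the first-order
# autocorrelation) and ignore all of the model's other probabilistic
# implications.
rho_calibrated = np.corrcoef(y[:-1], y[1:])[0, 1]

print(f"MLE estimate:        {rho_mle:.3f}")
print(f"Calibrated estimate: {rho_calibrated:.3f}")
```

In this correctly specified toy case the two numbers nearly coincide; the methodological disagreement only bites when the model is misspecified, so that the full likelihood would reject it while the chosen moment can still be matched.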

But is the Lucas-Kydland-Prescott-Sargent calibration really an advance?

Let’s see what two eminent econometricians have to say. In the Journal of Economic Perspectives (1996, vol. 10) Lars Peter Hansen and James J. Heckman write:

It is only under very special circumstances that a micro parameter such as the intertemporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.

This is the view of econometric methodologist Kevin Hoover:

The calibration methodology, to date, lacks any discipline as stern as that imposed by econometric methods.

And this is the verdict of Nobel laureate Paul Krugman:

The point is that if you have a conceptual model of some aspect of the world, which you know is at best an approximation, it’s OK to see what that model would say if you tried to make it numerically realistic in some dimensions.

But doing this gives you very little help in deciding whether you are more or less on the right analytical track. I was going to say no help, but it is true that a calibration exercise is informative when it fails: if there’s no way to squeeze the relevant data into your model, or the calibrated model makes predictions that you know on other grounds are ludicrous, something was gained. But no way is calibration a substitute for actual econometrics that tests your view about how the world works.

In physics it may perhaps not strain credulity too much to model processes as ergodic – where time and history do not really matter – but in the social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why would econometricians so fervently discuss things such as structural breaks and regime shifts? That they do indicates how unrealistic it is to treat open systems as analyzable with ergodic concepts.
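The point about structural breaks can be illustrated with a small simulation – a toy example of my own, not from the post. The series below has a data-generating process that shifts halfway through the sample, as in a policy-regime change, so the long-run time average describes neither regime:

```python
import numpy as np

rng = np.random.default_rng(42)

# A stylized "structural break": the data-generating process itself
# changes halfway through the sample, as in a regime shift.
T = 2000
mean_before, mean_after = 0.0, 3.0
y = np.concatenate([
    mean_before + rng.standard_normal(T // 2),
    mean_after + rng.standard_normal(T // 2),
])

# Under ergodicity a long time average would converge to "the" population
# mean. Here it converges to a blend of two different regimes instead,
# and describes neither of them.
print(f"Full-sample mean: {y.mean():.2f}")   # ~1.5: true of no regime
print(f"Pre-break mean:   {y[:T // 2].mean():.2f}")
print(f"Post-break mean:  {y[T // 2:].mean():.2f}")
```

An econometrician who trusted the full-sample average here would be badly misled about both the past and the future of the series – which is precisely what tests for structural breaks and regime shifts are designed to guard against.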

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Sargent and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

Instead of assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, they have to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.

To say, as Edward Prescott does, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. No such support is given, which makes it rather puzzling how rational expectations became the standard modeling assumption in much of modern macroeconomics. Perhaps the reason is, as Paul Krugman has it, that economists often mistake

beauty, clad in impressive looking mathematics, for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis into an irrefutable proposition. Irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but they are not science.

7 Comments

  1. The problem is not necessarily that rational expectations are wrong. The problem is the models in which they are embodied.
    The “irrefutable proposition” is not so much the theoretical element of rational expectations as the conclusions of the neo-classical model (a free-market equilibrium is an optimum / inequalities reflect choices and productivity).
    All the elements of the model point towards those conclusions. But as soon as you introduce simple concepts, such as increasing returns on marketing expenses, or the lobbying power of rent-seeking corporations, the conclusions change radically (even if you retain rational expectations).

    But even if rational expectations are right, that does not make them useful. On the contrary, they are an element that makes the model heavier to use and blurs its mechanisms. They also make it more difficult to introduce more useful assumptions, and less visible that crucial assumptions are in fact missing. In the end, the complexity of the model substitutes for its relevance.

    • You’re right. I was a little bit sloppy in shortening “rational expectations hypothesis” (REH) to “rational expectations” – and then it also, of course, comes down to exactly what you mean by “rational”.

    • You’ve incorrectly characterized what economists claim are the conclusions of the neoclassical model.

      There is nothing in neoclassical economics that says that free-market equilibria are optimal – I assume that you are referring to the first theorem of welfare economics, which explicitly lists out the (highly specific) assumptions that will guarantee that the market equilibrium is Pareto efficient (meaning you can’t make anyone better off without hurting someone else).

      The second point, “inequalities reflect choices and productivity”, is actually the exact opposite of what neoclassical economics claims: the second fundamental theorem of welfare economics actually implies that inequality is a result of the exogenous distribution of initial endowments, not choices on the part of agents. That is, the theorem states that any efficient allocation is supportable as a market equilibrium for some distribution of endowments, which means that which potential equilibrium we end up at (some of which will have more inequality than others) depends on the initial endowments. And again, that theorem states a specific list of assumptions sufficient for the result to hold, and there is nothing about the neoclassical model in general that requires any of these assumptions to be used.

  2. In my view, Deirdre McCloskey’s “Has Formalization in Economics Gone too Far?” remains the best critique of it all!!!

    • It’s good, but I would also recommend Tony Lawson’s “Economics and reality”!

  3. I don’t think it’s fair to include Thomas Sargent as one of the calibration supporters. Like Krugman he doesn’t have a problem with people calibrating their model just out of intellectual curiosity, but he’s definitely in favor of formal econometric estimation as a way to scientifically test models. That’s what he meant when he said “A rational expectations equilibrium is a likelihood function. Maximize it.”

  4. […] stable even to changes in economic policies. As yours truly has argued in a couple of post (e. g. here and here), this, however, is a dead […]

