Tractability hoax redux

23 Apr, 2018 at 18:32 | Posted in Economics | 2 Comments

A ‘tractable’ model is one that you can solve, which means there are several types of tractability: analytical tractability (finding a solution to a theoretical model), empirical tractability (being able to estimate/calibrate your model) and computational tractability (finding numerical solutions). It is sometimes hard to discriminate between theoretical and empirical, or empirical and computational tractability …

What I’d like to capture is the effect of those choices economists make “for convenience,” to be able to reach solutions, to simplify, to ease their work, in short, to make a model tractable. While those assumptions are conventional and meant to be lifted as mathematical, theoretical and empirical skills and technology (hardware and software) ‘progress,’ their underlying rationale is often lost as they are taken up by other researchers, spread, and become standard (implicit in the last sentence is the idea that what a tractable model is evolves as new techniques and technologies are brought in) …

The tractability lens also helps me make sense of what is happening in economics now, and what might come next. Right now, clusters of macroeconomists are each working on relaxing one or two tractability assumptions: research agendas span heterogeneity, non-rational expectations, financial markets, non-linearities, fat-tailed distributions, etc. But if you put all these add-ons together (assuming you can design a consistent model, and that add-ons are the way forward, which many critics challenge), you’re back to non-tractable. So what is the priority? How do macroeconomists rank these model improvements? And can the profession afford to wait 30 more years, three more financial crises and two trade wars before it can finally say it has a model rich enough to anticipate crises?

Beatrice Cherrier

Important questions that serious economists ought to ask themselves. Using ‘simplifying’ tractability assumptions — rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, etc. — because otherwise they cannot ‘manipulate’ their models or come up with ‘rigorous’ and ‘precise’ predictions and explanations, does not exempt economists from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists — as Cherrier conjectures — do not themselves think their tractability assumptions make for good and realist models, it is certainly a just question to ask what the ultimate goal of the whole modelling endeavour is.

Take for example the ongoing discussion on rational expectations as a modelling assumption. Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies are those based on rational expectations and representative-actor models. As yours truly has tried to show in On the use and misuse of theories and models in mainstream economics, there is really no support for this conviction at all. If microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is not whether we — once we have made our tractability assumptions — can ‘manipulate’ them, but the real world. And as long as no convincing justification is put forward for how the inferential bridging is de facto made, macroeconomic model-building is little more than hand-waving that gives us little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

Browning et al. (1999, p. 545) recognize in regard to DSGEs that “the microeconomic evidence is often incompatible with the macroeconomic model being calibrated.” These authors highlight three main problems that feed the micro–macro gap and that DSGEs largely neglect for reasons of computational tractability: heterogeneity in preferences, constraints, and skills, which empirical evidence widely documents; uncertainty, since it is fundamental to distinguish between micro and macro uncertainty, and to separate it from measurement error and model misspecification; and the synthesis of the micro evidence, since the plethora of micro studies often rests on very different assumptions, preventing the (estimated) parameters from fitting any kind of context.

Finally, worth recalling is the problem of the intertemporal inconsistency of the rational expectations hypothesis with unanticipated structural breaks. The empirical importance of this point is very evident when considering the effects of the latest financial and economic crisis.

Roberto Marchionatti & Lisa Sella


  1. Human psychology does not always anticipate liking a “negative” result, or recognize its importance.
    Finding that something is definitely true is obviously important, but may not always be possible. Finding that something cannot be true is often possible, but too easily disregarded as a failure to find the truth.
    Economists seem to like models that produce an equilibrium solution. Several equilibrium solutions, maybe a little less likeable. A model that produces no equilibrium solution is disregarded. Yet many, maybe most markets and “markets” do not clear — what kind of descriptive model of their operations could produce a solution when the actual market itself can produce no solution? Maybe the inability of an institution in particular circumstances to arrive at a stable solution is itself something worth noticing. But how are economists to notice, if no one ever studies models that are intractable by design?

  2. Why, in this age of very powerful computers, is intractability a bugaboo?
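The first commenter’s point — that an institution may possess an equilibrium on paper that its actual dynamics never reach — is easy to illustrate with a textbook cobweb model, where suppliers set quantity from last period’s price. This is a toy sketch, not any model discussed above, and all parameter values are purely illustrative; when the supply curve is steeper than the demand curve, a fixed point exists but the price path oscillates away from it:

```python
# Toy cobweb model: demand D(p) = a - b*p, supply based on LAST period's
# price, S(p_t) = c + d*p_t.  Market clearing gives the price recursion
#     p_{t+1} = (a - c - d * p_t) / b
# with fixed point p* = (a - c) / (b + d).  If d/b > 1 the 'equilibrium'
# exists but is unstable: the gap from p* grows by the factor d/b each
# period.  Parameter values are illustrative only.

def cobweb(a=10.0, b=1.0, c=1.0, d=1.5, p0=3.5, steps=12):
    prices = [p0]
    for _ in range(steps):
        prices.append((a - c - d * prices[-1]) / b)
    return prices

p_star = (10.0 - 1.0) / (1.0 + 1.5)          # fixed point: 3.6
path = cobweb()
gaps = [abs(p - p_star) for p in path]        # distance from equilibrium
# gaps grow geometrically (ratio d/b = 1.5): divergent oscillation,
# even though the equilibrium price is perfectly well defined.
```

Raw computing power does not rescue this: the numerical solution is trivial to compute, and what it shows is precisely that the system never settles — which is arguably the interesting result, not a failure of the model.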

