Models, math and macro

17 Apr, 2015 at 15:58 | Posted in Economics | 6 Comments

“To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.”

The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.

Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are, however, dismissed because—in a nutshell—there's nothing better out there.
This argument is deficient in two respects. First, there is a self-evident flaw in the belief that, despite overwhelming and damning evidence that a particular tool is faulty—and dangerously so—it should not be abandoned because there is no obvious replacement.

The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:

“When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.”

Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.

When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—in fact undermined it, he responded—predictably—by asking what should replace it.

The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.

What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.

Jo Michell

[h/t Jan Milch]


  1. Regarding my own comment reproducing Noah Smith’s Point 4 above:

    My view, almost certainly not Noah’s, is that what goes wrong in neoclassical economics is that people confuse geometry with cartography or geography. So, when Jo Michell says the problem is methodology, I would nod agreement, but go further, and claim the problem is epistemology.

    Paul Samuelson’s Foundations is geometry, and quite literally imitates Euclid’s Geometry in its careful exposition of a system of theorems. It is an error to think, as Samuelson apparently did, that geometry produces maps, or that matching an analytic model to particular circumstances is a matter of verifying axiomatic assumptions. Analytic models are always, by their nature, a priori and, therefore, never descriptive. The process of building what Popper called operational models, to guide careful observation and measurement, and actually learn about the world a posteriori, is an activity distinct and different from speculative theory (which is not to claim that you can do the former without the latter). You learn about the world by looking at the world, and, yes, by thinking logically about what must be the nature of functional relations (theoretical analysis), but also by developing operational ideas and measuring to confirm that you actually see what you imagine you see.
    A simple-minded geometer, assuming a Euclidean plane is a proper analogue, might model a flat-earth with a cliff at its unknown edge. Any competent navigator, cartographer or geographer, measuring and calculating, must learn that the earth is a sphere. This is where we are in mainstream macroeconomics, with economic theoreticians touting Ptolemaic “frictions” and epicycles as insights and explanations, but resisting mightily the need to think seriously and operationally about how the actual, institutionalized world works: they refuse to make maps and test them by navigation and measurement.
    The typical economist knows a lot about economics and almost nothing about the economy. And, he refuses to realize that that’s an obstacle to learning more about either. This is where Noah’s idealistic “You’ve got to have a way to take them to data . . . ” seems both correct and epistemologically misguided, in expecting additional theory, when he should be looking for the accumulation of well-observed empirical knowledge.
    In the miniature twitterstorm linked via the OP, Jo Michell confronted Smith with a Godley stock-flow consistent simulation model from 1999 that many believe provides remarkable insight into the subsequent developments leading up to the 2008 financial crisis. Godley’s method is a kind of map-making, exactly the kind of operational model, useful for poking at the world to see its shape, while overcoming the modeler’s expectations. Godley deliberately constrains his model to be stock-flow consistent, because inconsistency of that kind is a well-known symptom of certain ideologies.
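    To make the stock-flow consistent idea concrete, here is a minimal sketch in the spirit of the simplest “SIM” model from Godley and Lavoie’s Monetary Economics. It is an illustration only, not Godley’s 1999 specification; the parameter values are assumptions chosen for the example.

    ```python
    # A toy stock-flow consistent (SFC) model: a closed economy with a
    # government that spends and taxes, and households that consume out
    # of income and accumulated money wealth. Illustrative parameters.
    theta = 0.2     # tax rate
    alpha1 = 0.6    # propensity to consume out of disposable income
    alpha2 = 0.4    # propensity to consume out of wealth
    G = 20.0        # government spending per period

    H = 0.0         # household money holdings (the stock)
    for _ in range(200):
        # Solve the within-period flows: Y = C + G, T = theta*Y,
        # YD = Y - T, C = alpha1*YD + alpha2*H
        Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
        T = theta * Y
        YD = Y - T
        C = alpha1 * YD + alpha2 * H
        # Stock-flow consistency: household saving (YD - C) equals the
        # government deficit (G - T) and accumulates as money.
        assert abs((YD - C) - (G - T)) < 1e-9
        H += YD - C

    print(round(Y, 2))   # income converges to G/theta = 100.0
    ```

    The assertion is the point of the method: every flow comes from somewhere and goes somewhere, so the accounting cannot quietly lose a deficit the way an inconsistent model can, and the stock of money emerges as the cumulated history of the flows.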
    Noah’s response was full of schoolyard taunts and sarcasm, but I think I see two points of resistance revealed, beyond simple geeky immaturity. One is social: he complains, “If these people had any confidence in their models, you’d think they’d publish them as working papers.” “..because of politics, wouldn’t they be broadcasting them for all the world to see, publicizing them…” which complaints (given the prodigious persistence of the Levy Institute) reveal the remarkable and arrogant insularity of mainstream macro — talk about epistemic closure!
    The other complaint I found telling was about the lack of generality in an operational model tied so specifically to circumstances. This is certainly true of operational models: they are always focused on the singular instance of a singular reality, and they lack the transcendence promised by the Platonic Ideals of high theory.

    • It’s very hard to understand what an “analytic model” is supposed to describe. You note that they are a priori. I get this with geometry. One can, at a level of pure abstraction, conjure up the notion of a triangle.

      But that is utterly inconsistent with any possible definition of the word “model”. There is no such thing as an a priori model because models are, by definition, things that correspond to the world, a world that must be observed.
      Even “metaphor” (which is what Krugman obviously means with his Friedman-esque misuse of the word “model”) is a concept that involves a necessary a posteriori component.
      My question then is ontological. Whether it’s the fact that there is no systematic way to separate “useful economics” from bad “ideological economics” or the fact that “a priori model” is literally nonsense, it seems inescapable that truthful economics does not/cannot logically exist.

      • As I go through and think of the word model in other fields, the disconnect becomes more clear:

        The “mouse model” of human biology.
        The “Bohr model” of the atom.
        My plastic-and-cement model of an F-16 fighter jet.
        A “computational model” of the atmosphere.
        A styrofoam and coat hanger model of the solar system.

        In every single one, the “model-ness” is fundamentally about a systematic a posteriori correspondence between the model and the modeled.

  2. One hears the complaint, from orthodox economists such as Krugman and heterodox ones such as Sandwichman, that it is very difficult to get someone to see the truth when his salary depends on his believing what is false.

    But the money economy has been around for only the last minute of human experience. Much more fundamental is the fact that we all value the things we are good at.

    Thus of critical importance is the self-selection that creates the set of economists. This selection, in 99% of cases, happens before one splits into either the orthodox or heterodox camps.
    Thus the dominant paradigm only faces critique from academics with the same skill set as those in the mainstream.
    What is that skill? Mathematical modeling. What are the results? A critique that misses the point and suggests that the cure for modeling is modeling, just more of it or different kinds of it.
    Meanwhile, advances in understanding human social organization, and its subpart economics, come from those social sciences whose participants are better at studying humans and less good at math. Such folks would never dream of hanging around with the autistics in the Econ department.

  3. I thought Noah’s Point 4 was spot-on:

    4. “Macroeconomists don’t do enough to kill their models off.”
    This is something I hear surprisingly few people say, given that I think it’s the best of the criticisms out there. If you let a million flowers bloom but don’t cut any of the flowers, you get a big warehouse full of flowers. OK, so that metaphor went nowhere, but you get the point. Macroeconomists, when they get defensive, tend to say something along the lines of “We got models for everythin’!” But is that a good thing??
    I feel like if you have models for everything, you don’t actually have any models at all. Without a way of choosing between models, your near-infinite stable of models turns into one big giant mega-model that can give anyone any results he wants. Worried about a financial crisis? Pull out a model that tells you a financial crisis could be looming. Worried about inflation? Pull out a model where inflation is a big danger. And so on.
    Now, technically, you could choose between models based on the plausibility of the assumptions. But three things make this impossible in practice. First, the need for tractability means that the assumptions in almost any modern macro model will be utterly implausible to anyone who has not spent decades in a monastery high in the Himalayas training himself in the art of self-deception. Second, the assumptions are so stylized that it takes a huge amount of talent just to figure out what they are – in fact, we’re starting to see the emergence of top macro people, like Matt Rognlie, who specialize in figuring out what the heck models are actually saying. And third, with a near-infinite catalog of models to comb through, there’s just no way to compare any significant number of them all at once.
    If you ever want macro models to actually be useful, it’s not enough to just wave your hands and say “all models are wrong”. It’s not enough to treat models as ways to “organize our thinking”. You’ve got to have a way to take them to data and decide if you should keep them around, send them back to the shop for alterations, or burn them in a fire.

  4. Exactly the point!

    No one would ask a sociologist to describe a society with a handful of equations.
