Wren-Lewis and the Rodrik smorgasbord view of economic models

17 Jan, 2016 at 16:20 | Posted in Economics | 13 Comments

In December 2015 yours truly ran a series of eight posts on this blog discussing Dani Rodrik's Economics Rules (Oxford University Press, 2015).

There sure is much in the book I like and appreciate. It is one of those rare examples where a mainstream economist — instead of just looking the other way — takes the time to ponder the tough and deep science-theoretical and methodological questions that underpin the economics discipline.

But (as I argue at length in a forthcoming journal article) there is also a very disturbing apologetic tendency in the book to blame all of the shortcomings on economists and to depict economics itself as a problem-free smorgasbord of models. If you just choose the appropriate model from the immense and varied smorgasbord, there is no problem. It is as if all problems in economics could be conjured away if only we made the proper model selection.

Today, Oxford macroeconomist Simon Wren-Lewis has a post up on his blog on Rodrik’s book — and is totally überjoyed:

The first and most important thing to say is this is a great book … because it had a way of putting things which was illuminating and eminently sensible. Illuminating is I think the right word: seeing my own subject in a new light, which is something that has not happened to me for a long time. There was nothing I could think of where I disagreed …

The key idea is that there are many valid models, and the goal is to know when they are applicable to the problem in hand …

Lots of people get hung up on the assumptions behind models: are they true or false, etc. An analogy I had not seen before but which I think is very illuminating is with experiments. Models are like experiments. Experiments are designed to abstract from all kinds of features of the real world, to focus on a particular process or mechanism (or set of the same). The assumptions of models are designed to do the same thing.

“Models are like experiments.” I’ve run into that view many times over the years when having discussions with mainstream economists about their ‘thought-experimental’ obsession — and I still think it is too vague and elusive to be helpful. Just repeating the view doesn’t provide the slightest reason to believe it.

Although perhaps thought-provoking to some, I find the view of experiments on offer too simplistic, and for several reasons — but mostly because the kind of experimental empiricism it favours is largely untenable.

Experiments are very similar to theoretical models in many ways — on that Wren-Lewis and yours truly are in total agreement. Experiments share the basic problem that they are built on rather artificial conditions and face a trade-off between internal and external validity: with more artificial conditions and greater internal validity comes less external validity. The more we rig experiments/models to avoid “confounding factors”, the less the conditions resemble the real “target system”. The nodal issue is how economists, using different isolation strategies in different “nomological machines”, attempt to learn about causal relationships.

Assume that you have examined how the work performance of Swedish workers (A) is affected by a “treatment” (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the UK)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P’ are similar enough, the problems are perhaps not insurmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is — unfortunately — exactly this that we see when we look at neoclassical economists’ models/experiments.
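
To see what is at stake, consider a minimal simulation sketch (my own illustration, with purely hypothetical numbers and a deliberately simple linear set-up). A “treatment effect” estimated from the original population’s P(A|B) need not carry over to a target population whose P'(A|B) differs, and nothing in the original sample alone can tell us whether it does:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical "original" population (say, the Swedish sample):
# work performance A responds to the treatment B with effect +2.
B = rng.integers(0, 2, n)                      # treatment indicator (0/1)
A_orig = 50 + 2.0 * B + rng.normal(0, 5, n)    # draws from P(A|B)

# Hypothetical "target" population (say, the UK): the conditional
# relationship is different -- the same treatment has effect -1.
A_target = 55 - 1.0 * B + rng.normal(0, 5, n)  # draws from P'(A|B)

effect_orig = A_orig[B == 1].mean() - A_orig[B == 0].mean()
effect_target = A_target[B == 1].mean() - A_target[B == 0].mean()

print(f"estimated effect in the original sample: {effect_orig:+.2f}")
print(f"actual effect in the target population:  {effect_target:+.2f}")
# Nothing in the original data warns us that extrapolating the first
# estimate would get even the sign of the effect wrong in the target.
```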

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments within economics as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe we can achieve with our mediational epistemological tools and methods in the social sciences.

Like traditional neoclassical thought-experimental modeling, real experimentation is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems.

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under ceteris paribus conditions. Otherwise they are a fortiori of only limited value for our understanding, explanation and prediction of real economic systems.

Wren-Lewis obviously doesn’t want to get ‘hung up on the assumptions behind models.’ Maybe so, but it is still an undeniable fact that theoretical models built on piles of known-to-be-false assumptions do not come anywhere close to being scientific explanations. On the contrary: they are untestable and a fortiori totally worthless from the point of view of scientific relevance.

13 Comments

  1. “Economics is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world”

    Guess who?

    • I think perhaps it could be this guy:

      “But I am unfamiliar with the methods involved and it may be that my impression that nothing emerges at the end which has not been introduced expressly or tacitly at the beginning is quite wrong … It seems to me essential in an article of this sort to put in the fullest and most explicit manner at the beginning the assumptions which are made and the methods by which the price indexes are derived; and then to state at the end what substantially novel conclusions has been arrived at …

      I cannot persuade myself that this sort of treatment of economic theory has anything significant to contribute. I suspect it of being nothing better than a contraption proceeding from premises which are not stated with precision to conclusions which have no clear application … [This creates] a mass of symbolism which covers up all kinds of unstated special assumptions.”

      Letter from Keynes to Frisch 28 November 1935

      • Correct, he seems to have changed his mind between ’35 and ’38.

    • The quote is from J. M. Keynes to Harrod, 4 July 1938. It needs to be read in context.

      http://economia.unipv.it/harrod/edition/editionstuff/rfh.346.htm

  2. Lousy scientists
    Comment on ‘Wren-Lewis and the Rodrik smorgasbord view of economic models’
    .
    The underlying binary code of criminal proceedings is guilty/not-guilty. The accused has an interest in a clear-cut not-guilty outcome if he is indeed not the wrongdoer. On the other hand, if he is indeed the wrongdoer, the accused has an interest in arriving somewhere in the middle between guilty and not-guilty. Therefore, what he applies is a stratagem, that is, he blurs and obstructs the underlying binary distinction in all possible ways.
    .
    The situation is analogous in science where the underlying binary code is true/false. The genuine scientist tries to sharpen the issue at hand in order to eventually arrive at a clear-cut true/false answer. Everything in-between is scientifically worthless. The cargo cult scientist (Feynman’s term), on the other hand, tries everything to keep the issue safely in the no-man’s land between true/false, where “nothing is clear and everything is possible.” (Keynes)
    .
    The term ‘conventionalist stratagem’ was introduced into methodology by Popper.* Boiled down to essentials it amounts to the application of all possible communicative means to prevent a clear-cut true/false outcome.
    .
    As a matter of fact, the stratagem issue goes back to the origin of science. Plato was confronted with the Sophists, whose selling proposition was that they could win the client’s case no matter whether they were de facto right or wrong.
    .
    “Plato’s main concern with the Sophists is that their rhetoric does not provide an adequate view of justice
    Doxa – public opinion: Sophistic manipulation of doxa is aimed at persuasion only,
    Episteme – true knowledge: Plato’s goal beyond persuasion is to discover epistemic truth.”**
    .
    By proudly advertising that they have a model for every season, Wren-Lewis and Rodrik fall back methodologically to the proto-scientific position of the Sophists.
    .
    There is no better empirical proof of scientific incompetence than to apply proto-scientific stratagems. To be sure, this holds for both orthodox and heterodox economists in equal measure. Wren-Lewis and Rodrik are not the exception in economics but the rule.
    .
    Egmont Kakarot-Handtke
    .
    References
    * See the section which starts with ‘The Marxist theory of history . . . ’
    http://www.stephenjaygould.org/ctrl/popper_falsification.html
    ** http://people.uwplatt.edu/~ciesield/platovsoph.htm

  3. My complaint about mainstream economics revolves around falsifiability. Even if we grant the assumptions behind the models, they end up being equivalent to there being a natural rate of interest. This should at least lead to some predictive power – if the real rate of interest is negative, the economy will accelerate.

    But what happened after the crisis? When the acceleration did not occur, the “natural rate” magically went negative. In other words, no matter what the observed outcomes are, parameters are adjusted so that the models eventually “predict” the outcome (after a period of autocorrelated errors).

    We then need to ask: what conceivable set of observations are inconsistent with mainstream macro theory, as it is currently applied? The answer is: nothing that looks remotely like real world data. A theory that cannot be falsified can tell us nothing useful; but this also means that it does not matter that the assumptions are unrealistic, since the models will eventually “predict” whatever happens (after a period of adjustment).
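
    A toy numerical sketch of the point (purely hypothetical numbers and a made-up IS-style relation, not any particular published model): if the “natural rate” r* is an unobserved, freely adjustable parameter, it can always be backed out ex post so that the model “explains” whatever gap the data show; by construction, no observation can contradict it.

    ```python
    # Toy IS-style relation (hypothetical): output_gap = beta * (r_star - r).
    # If r_star is unobservable and freely adjustable, any observed outcome
    # can be rationalised after the fact -- no data could ever falsify it.
    beta = 1.5    # assumed interest-rate sensitivity (hypothetical)
    r = -0.5      # observed real rate of interest, in percent

    for observed_gap in (2.0, 0.0, -3.0):          # whatever the data happen to show
        r_star = r + observed_gap / beta           # back out the "natural rate" ex post
        fitted_gap = beta * (r_star - r)           # the model now "predicts" the data
        print(f"gap={observed_gap:+.1f}  ->  implied r*={r_star:+.2f}, "
              f"model 'prediction'={fitted_gap:+.1f}")
    ```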

  4. It is, perhaps, a narrow and tangential point to make, but I do feel I should protest that empiricism is not identical with controlled experimentation.
    .
    The excellent Galbraith essay, which you quote in the subsequent blogpost to this one, identifies the problem with mainstream academic macro succinctly: “It was not about … the economy.”
    .
    Mainstream academic macro is badly unbalanced, in that research on the institutions and experience of the actual economy is severely deprecated in the methodological orthodoxy. Even when economists go and look, they are required to filter and distill stylized facts, essentially importing their observations into axiomatic, deductive models, using methods that discard much of the available information as anecdotal or otherwise suspect.
    .
    Theoretical analysis, by axiomatic and deductive methods, has its uses in the scientific enterprise. Checking out the logic of a priori conjecture, identifying the necessary and sufficient elements forming functional, systematic relationships — these are legitimate activities. Epistemologically, we cannot simply go and look, and “see” directly causal relationships; “cause and effect” is hidden from us until we have some theory to apply as a useful expectational filter on the phenomena before us, to sort out the flood of data as it were.
    .
    That said, investigating the actual economy requires . . . actually investigating the nature of the actual, institutional economy. One must go and look, observe and measure and map. Popper distinguished the analytic models of theoretical physics from the operational models of empirical physics and it is a useful distinction, even if his insistence that the operational models can be used to test the analytic models is misleading, and his honoring of the analytic models as nomological machines harboring truth is bad epistemology.
    .
    In a healthy discipline, the field work necessary to build up sophisticated and informed mental models — not thought experiments, but expert understanding and interpretation of the specific institutions and processes of the actual economy — would weigh against the careless hand waving from the smorgasbord. Thoughtful synthesis is in many important ways the exact opposite of analysis. Where analysis seeks to isolate factors in their causal importance, synthesis seeks to identify coincidence. A synthetic interpretation recognizes that what we see is the survival of processes, a kind of sampling by evolutionary trials in an uncertain world, that must be interpreted as much for what is not observed as what is observed. Where analysis — at least as practiced by economists — is essentially qualitative; synthesis is essentially quantitative, a matter of close measurement. For the theoretician of financial economics, say, an ideal financial market is informationally efficient; it is a quality possessed by the logic of the model. For a student of actual financial institutions, the relevant question is quantitatively how efficient a financial market is, and how its design and management (!) relates to the measurable efficiency. It is not unlike a physicist modeling an abstract heat engine and an engineer modeling the actual heat efficiency of a particular internal combustion engine; the activities may be related in that the engineer draws on his knowledge of theoretical physics, but no one confuses the two activities.
    .
    Mainstream economists do confuse the distinct activities of analysis and synthesis, not recognizing the necessity of the latter — the necessity of operational, as distinct from analytic, modeling. They do not even recognize that hand waving and “insight” from the smorgasbord, by itself, is not enough, is not knowledge of fact.
    .
    A particular obstacle, which Professor Syll’s essays often point out, is that certain assumptions necessary to make axiomatic, deductive models tractable are at odds with known properties of the actual economy. Krugman’s “kinda, sorta equilibrium and maximization” formula highlights this, because neither “maximization” nor stasis (homeostatic “equilibrium”) characterizes the actual economy, where uncertainty and risk are pervasive. Economists want to handwave from the smorgasbord, but, consequent upon the need to corral uncertainty to make an analysis work, nothing on the smorgasbord of analytic models is isomorphic with the institutions and circumstances of the actual economy, which are adapted to risk and uncertainty by survival and design. It doesn’t mean that economic theory and analysis will not be useful, but it does mean that the “thought experiment” is never going to work in isolation from observation and experience and measurement. Theory can produce useful information only by contrast, not isomorphism, with the actual economy.
    .
    I am afraid I am guilty once again of launching into a lengthy and tangential rant, but I won’t discard the comment despite guilty self-awareness. If someone plows thru it, I hope it is found useful.

    • “I am afraid I am guilty once again of launching into a lengthy and tangential rant, but I won’t discard the comment despite guilty self-awareness. If someone plows thru it, I hope it is found useful.”

      Quite to the contrary Bruce, I think you have given us a particularly well articulated identification of the problem. I didn’t see it as a problem of synthesis – I think you are spot on.

      “Where analysis seeks to isolate factors in their causal importance, synthesis seeks to identify coincidence.”

  5. SWL: “. . .illuminating and eminently sensible. Illuminating is I think the right word: seeing my own subject in a new light, which is something that has not happened to me for a long time. There was nothing I could think of where I disagreed …”
    .
    Would it be possible to encapsulate more efficiently a confession of obdurate insensibility? He hasn’t seen his own discipline in “a new light” “for a long time”, yet he could find nothing “where I disagreed”. That’s a very strange “new light” that simply reconfirms things he already believed.
    .
    Tom Hickey is right to point us to James K Galbraith’s polemic. I am reading it again with great pleasure.

    • It’s bad enough to be ignorant of what’s going on of note in one’s own field, but it is probably worse to be ignorant of the foundations of one’s field. I note that there is very little work done on the foundations of economics, seemingly minimal awareness of the relevant debates in the philosophy of science, or of the fact that mathematics is a branch of logic and semiotics.

      Modeling in science is not just about constructing axiomatic-deductive systems. There are many issues involved in modeling in general and scientific modeling in particular. As a result, Keynes, himself a mathematician, was skeptical of econometrics and he provided reasons why.

      I get the impression that many if not most economists have not got the foggiest notion of the background involved, or the significant issues that have been raised.

      The attitude seems to be, we’re smart and we can figure all this out intuitively. But critical thinking is about disciplining intuition through rigor, and science is about correcting intuition in light of rigorous observation as well as rigorous logic.

      Rodrik and SWL’s view of many models as tools suitable under different conditions implies that economics deals with special cases. This further implies that there are no economic laws. Economics doesn’t have either a general theory or even an agreed-upon framework like conservation of energy in physics or natural selection in evolutionary biology. Is it efficiency? If so, what about effectiveness, which calls for prioritization of criteria based on some framework?

      In addition, the models lack scope conditions, as others have pointed out. As a result, which tool to use when is left up in the air, and we see from the current debate that there is disagreement over which to use when. Why is this not ad hoc? What is the difference between this and reading tea leaves? I am being serious here and asking for criteria, aka decision rules.

      Yet the mainstream holds that methodological issues have been resolved definitively and that further discussion of alternative views is unnecessary and therefore a waste of time.

      And they don’t care that Wynne Godley and others got it right while they didn’t. And subsequent events have shown that they don’t know how to fix it either, although alternatives are on the table that they refuse to look at because they are not cast in the right form.

      What is “scientific,” or even sensible about that?

      The one place where I agree with SWL is about economics being like medicine, but for a different reason. Medicine is important because it affects people’s lives. Economics is similarly significant because, as an influence on policy, it affects the lives of so many. Economists should not be playing witch doctor.

  6. If there are no pre-stated criteria for when to use various models appropriately wrt changing conditions ex ante, then the discipline is just ad hoc, where some model can be put forward ex post in hindsight to explain what was not foreseen. So, “everything is explained.” NOT. It’s pseudoscience.

    This is exactly what happened with the GFC. And if I recall correctly, SWL was one of the people who claimed to be “right” — in hindsight.

    What is rather surprising to me, puzzling actually, is that SWL served at Her Majesty’s Treasury previously and is familiar with the monetary economics of Wynne Godley. Godley predicted the crisis and used a “heterodox” model to do it. Surely SWL has read Galbraith’s “Who Are These Economists, Anyway?”

    • Thanks Tom. Especially for reminding us of Galbraith’s gem of an article. It’s up now 🙂

  7. Of course the actual underlying message here is that you need a priest to interpret the holy texts, rather than trying to do it yourself.
    .
    And you can tell a priest because they have a beard and a chocolate medal issued by the Riksbank.
    .
    This ‘technocrat autocracy’ push needs to be resisted. These people are a danger to democracy.

