Wren-Lewis and the Rodrik smorgasbord view of economic models

17 January, 2016 at 16:20 | Posted in Economics | 13 Comments
In December 2015 yours truly ran a series of eight posts on this blog discussing Dani Rodrik’s Economics Rules (Oxford University Press, 2015).
There sure is much in the book I like and appreciate. It is one of those rare examples where a mainstream economist, instead of just looking the other way, takes the time to ponder the tough and deep science-theoretic and methodological questions that underpin the economics discipline.
But (as I argue at length in a forthcoming article in this journal) there is also a very disturbing apologetic tendency in the book: it blames all of the shortcomings on the economists and depicts economics itself as a problem-free smorgasbord of models. If you just choose the appropriate model from the immense and varied smorgasbord, there is no problem. It is as if all problems in economics would be conjured away if only we could make the proper model selection.
Today, Oxford macroeconomist Simon Wren-Lewis has a post up on his blog on Rodrik’s book — and is totally überjoyed:
The first and most important thing to say is this is a great book … because it had a way of putting things which was illuminating and eminently sensible. Illuminating is I think the right word: seeing my own subject in a new light, which is something that has not happened to me for a long time. There was nothing I could think of where I disagreed …
The key idea is that there are many valid models, and the goal is to know when they are applicable to the problem in hand …
Lots of people get hung up on the assumptions behind models: are they true or false, etc. An analogy I had not seen before but which I think is very illuminating is with experiments. Models are like experiments. Experiments are designed to abstract from all kinds of features of the real world, to focus on a particular process or mechanism (or set of the same). The assumptions of models are designed to do the same thing.
“Models are like experiments.” I’ve run into that view many times over the years when having discussions with mainstream economists on their ‘thought experimental’ obsession, and I still think it’s too vague and elusive to be helpful. Just repeating the view doesn’t provide the slightest reason to believe it.
Although perhaps thought-provoking to some, I find the view of experiments offered here too simplistic. And for several reasons, but mostly because the kind of experimental empiricism it favours is largely untenable.
Experiments are very similar to theoretical models in many ways; on that Wren-Lewis and yours truly are in total agreement. Experiments share the same basic problem: they are built on rather artificial conditions and face a trade-off between internal and external validity. With more artificial conditions and greater internal validity comes less external validity. The more we rig experiments/models to avoid “confounding factors”, the less the conditions resemble the real “target system”. The nodal issue is how economists, using different isolation strategies in different “nomological machines”, attempt to learn about causal relationships.
Assume that you have examined how the work performance of Swedish workers A is affected by B (“treatment”). How can we extrapolate/generalize to new samples outside the original population (e.g. to the UK)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made from samples of the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P′(A|B).
As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps not insurmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often, unfortunately, it is exactly this that we see in neoclassical economists’ models/experiments.
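The extrapolation problem can be made concrete with a small simulation. This is a minimal sketch, and the populations, effect sizes and sample sizes are all hypothetical, chosen only for illustration: a treatment effect estimated with high internal validity in one population says nothing about a target population whose conditional distribution P′(A|B) differs.

```python
import random

random.seed(1)

def simulate(n, effect):
    # Outcome = Gaussian noise + treatment effect if treated.
    data = []
    for _ in range(n):
        treated = random.random() < 0.5
        outcome = random.gauss(0, 1) + (effect if treated else 0.0)
        data.append((treated, outcome))
    return data

def estimated_effect(data):
    # Difference in mean outcomes between treated and control groups.
    treated = [y for t, y in data if t]
    control = [y for t, y in data if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

# 'Original' population (e.g. the Swedish sample): true effect +2.
original = simulate(20_000, effect=2.0)
# 'Target' population (e.g. the UK): the same treatment works differently.
target = simulate(20_000, effect=-1.0)

effect_hat = estimated_effect(original)   # internally valid estimate
target_effect = estimated_effect(target)  # what actually holds in the target
```

The estimate from the original sample is perfectly sound internally, yet applying it to the target population would get even the sign of the effect wrong; nothing in the original data warns us of this.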
By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments within economics as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe we can achieve with our mediational epistemological tools and methods in social sciences.
Like traditional neoclassical thought-experimental modelling, real experimentation is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we can never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached within these epistemically convenient models/systems.
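A toy illustration of how a deductive inference stands or falls with its assumptions (the quadratic data-generating process and the fitting window are hypothetical, chosen only to make the point): a linearity assumption that is harmless inside the ‘experimental’ window breaks down completely once the conclusion is exported outside it.

```python
# Ordinary least-squares line, fitted by hand (no external libraries).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# The (unknown) real-world process is nonlinear ...
def target_system(x):
    return x * x

# ... but within the narrow window [0, 1] where we 'experiment',
# a linear model looks acceptable.
xs = [i / 100 for i in range(101)]
ys = [target_system(x) for x in xs]
slope, intercept = fit_line(xs, ys)

# Internal validity: the worst in-window error is modest.
inside_error = max(abs(slope * x + intercept - target_system(x)) for x in xs)

# External validity: exporting the fitted line to x = 10 fails badly.
outside_error = abs(slope * 10 + intercept - target_system(10))
```

Everything deduced from the fitted line is correct given the linearity assumption; the deduction itself never tells us that the assumption stops holding outside the window in which it was established.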
Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems’, we have to be able to show that they hold not only under ceteris paribus conditions. Otherwise they are, a fortiori, of limited value to our understanding, explanation and prediction of real economic systems.
Wren-Lewis obviously doesn’t want to get ‘hung up on the assumptions behind models.’ Maybe so, but it is still an undeniable fact that theoretical models built on piles of assumptions known to be false in no way come close to being scientific explanations. On the contrary: they are untestable and a fortiori totally worthless from the point of view of scientific relevance.