The World in the Model – How the Model Became the Message in Economics

21 Sep, 2012 at 17:29 | Posted in Economics, Theory of Science & Methodology | 1 Comment

In her new book – The World in the Model – Mary Morgan gives a historical account of how the model became the message in economics. On the question of how models provide a method of enquiry in which they can function both as “objects to enquire into” and as “objects to enquire with”, Morgan argues that model reasoning involves a kind of experiment. She writes:

It may help to clarify my account of modelling as a double method of enquiry in economics if we compare it with two of the other reasoning styles … the method of mathematical postulation and proof and the method of laboratory experiment.

If we portray mathematical modelling as a version of the method of mathematical postulation and proof … models can indeed be truth-makers about that restricted and mathematical small world … But whether they can come to valid conclusions about the behaviour of their actual economic universe is a much more difficult problem …

If we make the alternative comparison with laboratory experiments … the important question of whether the results of the experiment on the model can be transferred to the world that the model represents can be considered an inference problem …

Of course, model experiments in economics are usually pen-and-paper, calculator, or computer experiments on a model world or an analogical world … not laboratory experiments on the real world …

The experiments made on models are different from the experiments made in the laboratory … because model experiments are less powerful as an epistemic genre. It does make a difference to the power and scope of inference that the model experiment is carried out on a pen-and-paper representation, that is on the world in the model, not on the world itself.

Now, I think it is but fair to say that field experiments, model experiments and laboratory experiments basically face the same problems of generalizability and external validity. They all share the same basic difficulty – they are built on rather artificial conditions and face a trade-off between internal and external validity. The more artificial the conditions, the greater the internal validity – but also the less the external validity. The more we rig experiments/field studies/models to avoid confounding, the less the conditions are reminiscent of the real target system.

The nodal issue is how economists using different isolation strategies in different nomological machines attempt to learn about causal relationships. By contrast with Morgan, I would more explicitly and forcefully argue that – because causal mechanisms are likely to differ across contexts, and because of the lack of homogeneity/stability/invariance – the generalizability of all these research strategies does not give us warranted export licenses to the real target system.

If we mainly conceive of laboratory experiments, field studies and model experiments as heuristic tools, the dividing line is difficult to perceive. But if we see them as activities that ultimately aspire to say something about the real target system, then the problem of external validity is central. Let me elaborate a little on this point:

Assume that you have examined how the performance of A is affected by B (treatment). How can we extrapolate/generalize to new samples outside the original population? How do we know that any replication attempt succeeds? How do we know when such replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P*(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P*(A|B) applies. Sure, if one can convincingly show that P and P* are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the invariance/stability/homogeneity type is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I examine neoclassical economists’ models/laboratory experiments/field studies.
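To make the point concrete, here is a minimal simulation sketch (in Python, with entirely hypothetical numbers and a made-up context variable C – nothing here comes from Morgan’s book) of how an effect of B on A estimated within one population can fail to transfer to a target population whose P*(A|B) is generated by a different mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# "Experimental" population: homogeneous effect, A = 2.0*B + noise.
b_exp = rng.binomial(1, 0.5, n)
a_exp = 2.0 * b_exp + rng.normal(0, 1, n)

# Internally valid estimate of the effect: E[A|B=1] - E[A|B=0].
effect_exp = a_exp[b_exp == 1].mean() - a_exp[b_exp == 0].mean()

# Target population: the mechanism differs, because the effect depends on
# a context variable C that was (implicitly) held fixed in the experiment.
c = rng.binomial(1, 0.8, n)              # context prevalent in the target
b_tgt = rng.binomial(1, 0.5, n)
a_tgt = (2.0 - 3.0 * c) * b_tgt + rng.normal(0, 1, n)

effect_tgt = a_tgt[b_tgt == 1].mean() - a_tgt[b_tgt == 0].mean()

print(round(effect_exp, 2))   # close to 2.0 in expectation
print(round(effect_tgt, 2))   # close to 2.0 - 3.0*0.8 = -0.4 in expectation
```

The estimate is impeccable within the original sample; what does the damage is the tacitly assumed invariance of the causal mechanism across populations – exactly the kind of export license that has to be argued for rather than presupposed.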

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of laboratory experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many laboratory experimentalists claim that it is easy to replicate experiments under different conditions and therefore, a fortiori, easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B works in X but not in Y? That B worked in the field study conducted in year Z but not in year W? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific laboratory experiments/field studies to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Nowadays many economists are keen on randomized experiments. But just like most other methods used within neoclassical economics, randomization is basically a deductive method – or, as Morgan calls it, “a deductive mode of manipulation”. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in laboratory models, but what we usually are interested in is causal evidence about the real target system we happen to live in.

Ideally controlled laboratory experiments (still the benchmark even for natural and quasi-experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in a laboratory experiment still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Most neoclassical economists want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what is going on in the real world, and not in (ideally controlled randomized) laboratory experiments or “model experiments”. Conclusions can only be as certain as their premises – and that also goes for methods based on laboratory experiments.

Morgan’s new book – and especially its carefully selected case studies – is an important contribution to the history of economics in general, and more specifically to our understanding of how mainstream economics has become a totally model-based discipline.

However, I haven’t – as you may have surmised – read it without objections.

In her description of how economists have used – and are using – these “reasoning tools” that we call models, Morgan puts too much emphasis – at least for my taste – on modelling as an epistemic genre of “reasoning to gain knowledge about the economic world”. Even if epistemology is important and interesting in itself, it ought never to be anything but secondary in science, since the primary questions asked have to be ontological. Only after having asked questions about ontology can we start thinking about what, and how, we can know anything about the world. If we do that, I think it is more or less necessary also to be more critical of the reasoning by modelling that has come to be considered the only right way to reason in mainstream economics for more than 50 years now.

In a way it is rather symptomatic of the whole book that when Morgan gets to the all-important question of external validity in isolationist closed economic models, she most often halts at posing the question as “if those elements can be treated in isolation” and noting that this aspect of models is “much more difficult to characterize than the way economists use models to investigate their ideas and theories”. Absolutely! But this does not put model reasoning as “objects to enquire into” on a par, from a scientific point of view, with the much more important question of whether these models really have export certificates or not. I think many readers would have found the book even more interesting had it offered more argued and critical evaluations of these activities, and not just more or less analytical descriptions.

So, by all means, read Morgan’s book. It is in many ways a superb book. As a detailed and well-informed case-study-based history it is definitely proof of great scholarship. I am sure it will be a classic in the history of modern economics. But to get more on the question of whether economists’ models really give truthful and valid explanations of things happening in the real world, also read two other modern classics – Tony Lawson’s Economics and Reality (1997) and Nancy Cartwright’s Hunting Causes and Using Them (2007).
