Dani Rodrik on math and models (VI) — 21 December, 2015
According to Dani Rodrik — as argued in Economics Rules — an economic model basically consists of “clearly stated assumptions and behavioral mechanisms” that easily lend themselves to mathematical treatment. Furthermore, Rodrik thinks that the usual critique against the use of mathematics in economics is wrong-headed. Math only plays an instrumental role in economic models:
First, math ensures that the elements of a model … are stated clearly and are transparent …
The second virtue of mathematics is that it ensures the internal consistency of a model — simply put, that the conclusions follow from the assumptions.
What is lacking in this overly simplistic view of mathematical modeling in economics is an ontological reflection on the conditions that have to be fulfilled for the methods of mathematical modeling to be appropriately applied.
Using formal mathematical modeling, mainstream economists like Rodrik can certainly guarantee that the conclusions hold given the assumptions. However, there is no warrant that the validity we get in abstract model worlds automatically transfers to real-world economies. Validity and consistency may be good, but they are not enough. From a realist perspective, both relevance and soundness are sine qua non.
In their search for validity, rigour and precision, mainstream macro modellers of various ilks construct microfounded DSGE models that standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, and infinitely lived, intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences. At the same time, the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market-clearing prices, real aggregation problems, emergence, and expectations formation.
Behavioural and experimental economics — not to speak of psychology — show beyond any doubt that “deep parameters” — people’s preferences, choices and forecasts — are regularly influenced by those of other participants in the economy. And how about the homogeneity assumption? If all actors are the same, why and with whom do they transact? And why does economics have to be exclusively teleological (concerned with the intentional states of individuals)? Where are the arguments for that ontological reductionism? And what about collective intentionality and constitutive background rules?
These are all justified questions — so in what way can one maintain that these models give workable microfoundations for macroeconomics? Philosopher of science Nancy Cartwright gives a good hint at how to answer that question:
Our assessment of the probability of effectiveness is only as secure as the weakest link in our reasoning to arrive at that probability. We may have to ignore some issues or make heroic assumptions about them. But that should dramatically weaken our degree of confidence in our final assessment. Rigor isn’t contagious from link to link. If you want a relatively secure conclusion coming out, you’d better be careful that each premise is secure going in.
In all those economic models that Rodrik praises — where the conclusions follow deductively from the assumptions — mathematics is the preferred means of establishing what we want to establish with deductive rigour and precision. The problem, however, is that the things that guarantee this deductivity are, as a rule, the same things that leave the external validity of the models wanting. The core assumptions (CA), as we have shown in previous posts, are as a rule not very many, so if the modellers want to establish ‘interesting’ facts about the economy, they have to make sure the set of auxiliary assumptions (AA) is large enough to enable the derivations. But then — how do we validate, outside the model itself, that large set of assumptions that gives Rodrik his ‘clarity’ and ‘consistency’? How do we evaluate assumptions that are clearly used for no other purpose than to guarantee an analytical-formalistic use of mathematics? And how do we know that our model results ‘travel’ to the real world?
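The deductive structure criticized above can be sketched schematically (a stylized rendering for clarity; the CA/AA labels follow this post, not Rodrik’s own notation):

```latex
% Stylized sketch of model-based deduction.
% CA = core assumptions, AA = auxiliary assumptions, C = model conclusion.
\[
\underbrace{CA}_{\text{few, general}} \;\cup\; \underbrace{AA}_{\text{many, model-specific}} \;\vdash\; C
\]
% Internal validity: C follows deductively from CA and AA together.
% External validity: whether C holds of the real world depends on whether
% CA and AA are (approximately) true of it -- something the deduction
% itself cannot establish. Enlarging AA to enable 'interesting' derivations
% therefore tends to weaken, not strengthen, the model's real-world warrant.
```

The point of the sketch is only that the derivability relation ⊢ certifies consistency within the model; nothing in it speaks to the truth of CA ∪ AA outside the model.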
On a deep level one could argue that the one-eyed focus on validity and consistency makes mainstream economics irrelevant, since its insistence on deductive-axiomatic foundations doesn’t earnestly consider the fact that its formal logical reasoning, inferences and arguments bear an amazingly weak relationship to their everyday real-world equivalents. Although the formal logic focus may deepen our insights into the notion of validity, the rigour and precision come with a devastatingly important trade-off: the higher the level of rigour and precision, the smaller the range of real-world application. So the more mainstream economists insist on formal logical validity, the less they have to say about the real world. The time is due, and overdue, for getting the priorities right …