Economic modelling

4 May, 2022 at 16:19 | Posted in Economics | 2 Comments

A couple of years ago, Paul Krugman had a piece up on his blog arguing that the ‘discipline of modeling’ is a sine qua non for tackling politically and emotionally charged economic issues:

In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.

So when ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’

Yours truly is, to say the least, far from convinced. The alarm that goes off in my brain tells me that this, rather than being helpful for understanding real-world economic issues, is an ill-advised plaidoyer for voluntarily donning a methodological straitjacket of unsubstantiated and known-to-be-false assumptions.

Let me just give two examples to illustrate my point.

In 1817 David Ricardo presented — in Principles — a theory that was meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of export and import is ruled by countries exporting goods in which they have a comparative advantage and importing goods in which they have a comparative disadvantage.
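Ricardo’s point can be illustrated with his own numbers from the Principles (labour units needed per unit of output in England and Portugal). Portugal is absolutely more productive in both goods, yet the opportunity costs still dictate that England export cloth and Portugal wine:

```python
# Ricardo's numerical example (Principles, ch. 7): labour required
# to produce one unit of each good in each country.
labour = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone to produce one unit of `good`."""
    return labour[country][good] / labour[country][other]

# England gives up fewer units of wine per unit of cloth (100/120)
# than Portugal does (90/80), so England has the comparative
# advantage in cloth -- despite Portugal's absolute advantage in both.
england_cloth = opportunity_cost("England", "cloth", "wine")
portugal_cloth = opportunity_cost("Portugal", "cloth", "wine")
assert england_cloth < portugal_cloth
```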

Ricardo’s theory of comparative advantage, however, didn’t explain why the comparative advantage was the way it was. At the beginning of the 20th century, two Swedish economists — Eli Heckscher and Bertil Ohlin — presented a theory/model/theorem according to which the comparative advantages arose from differences in factor endowments between countries. Countries have comparative advantages in producing goods that use the production factors that are relatively abundant in them. Countries would a fortiori mostly export goods that used the abundant factors of production and import goods that mostly used factors of production that were scarce.

The Heckscher-Ohlin theorem — as do the elaborations on it by e.g. Vanek, Stolper and Samuelson — builds on a series of restrictive and unrealistic assumptions. The most critically important — besides the standard market-clearing equilibrium assumptions — are

(1) Countries use identical production technologies.

(2) Production takes place with constant returns to scale technology.

(3) Within countries the factor substitutability is more or less infinite.

(4) Factor prices are equalised (the Stolper-Samuelson extension of the theorem).

These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false. 

That said, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.

From a methodological point of view, one can of course also wonder how we are supposed to evaluate tests of a theorem built on known-to-be-false assumptions. What is the point of such tests? What can those tests possibly teach us? From falsehoods, anything logically follows.

Modern (expected) utility theory is a good example of this. Since the specification of preferences is left with almost no restrictions whatsoever, every imaginable piece of evidence can safely be made compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may of course be very ‘handy’, but it is totally void of empirical value.

Utility theory has, like so many other economic theories, morphed into an empty theory of everything. And a theory of everything explains nothing. Like Gary Becker’s ‘economics of everything’, it only makes nonsense out of economic science.

Some people have trouble with the fact that by allowing false assumptions mainstream economists can generate whatever conclusions they want in their models. But that’s really nothing very deep or controversial. What I’m referring to is the well-known ‘principle of explosion,’ according to which if both a statement and its negation are considered true, any statement whatsoever can be inferred.
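The same point in formal dress: the principle of explosion (ex falso quodlibet) is a one-line proof. A minimal Lean sketch:

```lean
-- Principle of explosion: from a statement P and its negation,
-- any statement Q whatsoever can be validly deduced.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```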

Whilst tautologies, purely existential statements and other nonfalsifiable statements assert, as it were, too little about the class of possible basic statements, self-contradictory statements assert too much. From a self-contradictory statement, any statement whatsoever can be validly deduced. Consequently, the class of its potential falsifiers is identical with that of all possible basic statements: it is falsified by any statement whatsoever.

On the question of tautology, I think it is only fair to say that the way axioms and theorems are formulated in mainstream (neoclassical) economics, they are often made tautological and informationally totally empty.

Using false assumptions, mainstream modellers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy,’ just assume, e.g., that ‘all economists are from Chicago’ and that ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction, but it is of course factually wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.
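The syllogism above is formally valid, which is exactly the problem: validity says nothing about soundness. A Lean sketch of the deduction (the hypothetical predicates `chicago` and `austerity` are mine, not the source’s):

```lean
-- The deduction is valid: if all economists are from Chicago, and all
-- Chicago economists favour austerity, then all economists favour
-- austerity. Since the first premise is false of the actual world,
-- the valid argument is unsound and proves nothing about reality.
example (Econ : Type) (chicago austerity : Econ → Prop)
    (h1 : ∀ e, chicago e) (h2 : ∀ e, chicago e → austerity e) :
    ∀ e, austerity e :=
  fun e => h2 e (h1 e)
```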

2 Comments »


  1. “…when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.”

    This is a dangerous case of the subject leading the object. The fact is you are not asking questions. From the very start the choice of model itself is an arbitrary one. In Krugman’s case it will implicitly assume that societies are constructs of greedy individuals that have unlimited wants but face limited resources. Maybe he thinks that is true, but where is the justification for this being the basis of his analysis? He will assume that people are rational. Has he passed that by psychologists?

    So you use something that is highly questionable and it gives you unexpected outcomes. So what? All you are telling us is what something suspect, yet unquestioned, is telling us.

    He is working the wrong way around. He should be making such fundamental assertions about human and social behaviour and its relationship with capitalism AFTER he has accumulated as much verifiable factual material on what he is examining as he can.

  2. A great article

