What is mainstream economics?

30 Oct, 2016 at 15:00 | Posted in Economics | 4 Comments

The reason you study an issue at all is usually that you care about it, that there’s something you want to achieve or see happen. Motivation is always there; the trick is to do all you can to avoid motivated reasoning that validates what you want to hear.

In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.

Paul Krugman 

Hmm …

So when Krugman and other ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’

Yours truly is, to say the least, far from convinced. The alarm that goes off in my brain tells me that this, rather than being helpful for understanding real-world economic issues, sounds more like an ill-advised plea for voluntarily taking on a methodological straitjacket of unsubstantiated assumptions known to be false.

Modern (expected) utility theory is a good example of this. With almost no restrictions placed on the specification of preferences, every imaginable piece of evidence is safely made compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may of course be very ‘handy’, but it is totally void of empirical value.

Utility theory, like so many other economic theories, has morphed into an empty theory of everything. And a theory of everything explains nothing — like Gary Becker’s ‘economics of everything’, it only makes nonsense of economic science.

Using false assumptions, mainstream modelers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy’, one need only assume, e.g., that ‘all economists are from Chicago’ and ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction — but is of course factually wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.
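The structure of that fallacy can be sketched in a few lines of code (an illustrative aside, not from the post): the inference is deductively valid, but soundness also requires true premises, so false premises yield a factually worthless conclusion.

```python
# Illustrative sketch: a valid syllogism whose conclusion is only
# as sound as its premises. Both premises below are empirically false.

premises = {
    "all economists are from Chicago": False,
    "all Chicago economists favour austerity": False,
}

# Deductive validity: IF both premises held, the conclusion
# ("all economists favour austerity") would necessarily follow.
conclusion_follows_deductively = True

# Soundness requires true premises as well as a valid inference.
conclusion_is_sound = conclusion_follows_deductively and all(premises.values())

print(conclusion_follows_deductively)  # logically valid
print(conclusion_is_sound)             # factually worthless
```

The point of the sketch: a model’s internal consistency (`conclusion_follows_deductively`) tells us nothing about its external validity (`conclusion_is_sound`).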

Mainstream economics today is mainly an approach in which you think the goal is to be able to write down a set of empirically untested assumptions and then deductively infer conclusions from them. When applying this deductivist thinking to economics, economists usually set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t, for the simple reason that empty theoretical exercises of this kind tell us nothing about the world. When addressing real economies, the idealizations necessary for the deductivist machinery to work simply don’t hold.

So how should we evaluate the search for ever greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to accomplish and how we basically understand the world.

The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its parts prevent us from treating it as constituted by atoms with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind. To search for deductive precision and rigour in such a world is self-defeating. The only way to defend such an endeavour is to restrict oneself to proving things in closed model-worlds. Why we should care about these, rather than asking questions of relevance, is hard to see.


  1. Paul Krugman is at it again, this time presenting us with a “toy model” that ‘explains’ the slowdown in world trade.

    But does it? There seems to be a real lack of intellectual curiosity. A characteristically casual remark is made that it has to do with technological change and a change in trade strategies from import substitution to export orientation. Even if this were correct, it is likely to be only the tip of the iceberg, and it raises other questions – how much of the recent increase in trade is explained by the entry of China into the system, and is it even true to say that this country follows export-oriented development (its X/GNP ratios were never particularly high, and it also encouraged the substitution of imports with domestic production)? Furthermore, was it technological change driving the increase in trade, or the increase in trade driving technological innovation and profusion?

  2. “In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis.”

    The problem with this, of course, is that it is Model that is dictating the analysis and not the facts. In many ways this is even worse than having the subject (the economist) determine the object (the analysis).

    There is only one thing that should dictate analysis: the facts. This must also include primary evidence, which will often necessarily be non-quantitative. For example, if we want to understand why there is a liquidity trap, we will not find the real answers with Model. They are to be found in the records of the decisions of key actors – such as banks and borrowers. The evidence must then be carefully assembled and must be as complete as possible. From this we draw the best conclusion we can on the basis of what we can know.

  3. Or, to put it another way, in a two-agent model, utility for the representative patron of economic research is measured by the degree to which the research product of the representative economist can shift policy in a direction beneficial to the representative patron. In exchange for the favorable policy shift resulting from the research product of the representative economist, the representative patron provides financial, institutional, and political support.
    Conversely, the representative economist maximizes his expected lifetime utility (in the form of financial, institutional, and political support) by producing research which results in policy shifts most favorable to the representative patron.
    Both the representative patron and the representative economist rationally converge on an equilibrium where the level of support provided by the patron and the level of policy shift provided by the economist provide optimal utility to each, and this equilibrium point is what we call “mainstream economics”.
    There are, admittedly, some simplifying assumptions made in this model, but the model is nevertheless extremely helpful for clarifying the analysis and highlighting issues that may have important policy implications.

  4. “Mainstream economics today is mainly an approach in which you think the goal is to be able to write down a set of empirically untested assumptions and then deductively infer conclusions from them.”
    Too generous.
    The goal is to be able to derive a set of empirically untestable assumptions by working backward from the conclusions, which are set a priori.
    Take, for example, the recently-highlighted conclusion that “increases in the minimum wage lead to increases in unemployment”. Assumptions supporting this conclusion which can be tested empirically are inherently untrustworthy. Assumptions which are axiomatic or tautological or otherwise immune to empirical refutation are much more reliable foundations for supporting the desired policy objective of low (or no) minimum wage.
