Angel in my room (private)

30 March, 2014 at 16:25 | Posted in Varia | Comments Off on Angel in my room (private)


The best Swedish song ever.
This one is for you, Kristina — the angel in my room.

Economics as ideology

30 March, 2014 at 11:04 | Posted in Economics | 1 Comment


Although I never believed it when I was young and held scholars in great respect, it does seem to be the case that ideology plays a large role in economics. How else to explain Chicago’s acceptance of not only general equilibrium but a particularly simplified version of it as ‘true’ or as a good enough approximation to the truth? Or how to explain the belief that the only correct models are linear and that the von Neumann prices are those to which actual prices converge pretty smartly? This belief unites Chicago and the Classicals; both think that the ‘long-run’ is the appropriate period in which to carry out analysis. There is no empirical or theoretical proof of the correctness of this. But both camps want to make an ideological point. To my mind that is a pity since clearly it reduces the credibility of the subject and its practitioners.

Frank Hahn

Is INET nothing but a Trojan horse of the financial oligarchy?

30 March, 2014 at 10:36 | Posted in Economics | 2 Comments


So far, the history and the actions of the Institute for New Economic Thinking, founded by George Soros and other members of the financial establishment, are compatible with the hypothesis that it might be a Trojan horse of the financial oligarchy, meant to control the movement for reform of economics. However, despite some limited evidence to the contrary, it is also still compatible with the counter-hypothesis that it is a bona fide effort to push such reform to the benefit of society at large. A restrictive policy of supporting independent initiatives with the same stated goals, and a recent tendency toward the promotion of the less radical reformist ideas make it opportune to monitor the activities of INET with an open but skeptical mind.

Norbert Häring

Yours truly can’t but concur. And obviously there are others who also have doubts about INET:

The first INET meeting at Cambridge University in 2010 bore some small promise—for instance, when protestors disrupted the IMF platitudes of Dominique Strauss-Kahn in King’s great hall, or when Lord Adair Turner bravely suggested we needed a much smaller financial sector. But the sequel turned out to be a profoundly more unnerving and chilly affair, and not just due to the caliginous climate. The nightmare scenario began with a parade of figures whom one could not in good conscience admit to anyone’s definition of “New Economic Thinking”: Ken Rogoff, Larry Summers, Barry Eichengreen, Niall Ferguson and Gordon Brown … The range of economic positions proved much less varied than at the first meeting, and one couldn’t help noticing that the agenda seemed more pitched toward capturing the attention of journalists and bloggers [oh my, I’m included in this one], and those more interested in getting to see more star power up close than sampling complex thinking outside the box. It bespoke an unhealthy obsession with Guaranteed Legitimacy and Righteous Sound Thinking.

Philip Mirowski

On using models properly

29 March, 2014 at 18:35 | Posted in Economics | Comments Off on On using models properly


More great stuff from Stanford economist Paul Pfleiderer on the misuse of theoretical models in finance and economics. [h/t Dwayne Woods]

Economic models — chameleons and theoretical cherry picking

28 March, 2014 at 20:07 | Posted in Economics, Theory of Science & Methodology | 6 Comments

Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as a short-hand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic. Another rejoinder made by those supporting X as something plausibly applying to the real world might be that the truth or falsity of X is an empirical matter and until the appropriate empirical tests or analyses have been conducted and have rejected X, X must be taken seriously. In other words, X is innocent until proven guilty. Now these statements may not be made in quite the stark manner that I have made them here, but the underlying notion still prevails that because there is a model for X, because questioning the assumptions behind X is not appropriate, and because the testable implications of the model supporting X have not been empirically rejected, we must take X seriously. Q’s model (with X as a result) becomes a chameleon that avoids the real world filters.

The best way to illustrate what chameleons are is to give some actual examples …

In April 2012 Harry DeAngelo and René Stulz circulated a paper entitled “Why High Leverage is Optimal for Banks.” The title of the paper is important here: it strongly suggests that the authors are claiming something about actual banks in the real world. In the introduction to this paper the authors explain what their model is designed to do:

“To establish that high bank leverage is the natural (distortion-free) result of intermediation focused on liquid-claim production, the model rules out agency problems, deposit insurance, taxes, and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of bank capital structure in the real world. However, in contrast to the MM framework – and generalizations that include only leverage-related distortions – it allows a meaningful role for banks as producers of liquidity and shows clearly that, if one extends the MM model to take that role into account, it is optimal for banks to have high leverage.” [emphasis added]

Their model, in other words, is designed to show that if we rule out many important things and just focus on one factor alone, we obtain the particular result that banks should be highly leveraged. This argument is for all intents and purposes analogous to the argument made in another paper entitled “Why High Alcohol Consumption is Optimal for Humans” by Bacardi and Mondavi. In the introduction to their paper Bacardi and Mondavi explain what their model does:

“To establish that high intake of alcohol is the natural (distortion free) result of human liquid-drink consumption, the model rules out liver disease, DUIs, health benefits, spousal abuse, job loss and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of human alcohol consumption in the real world. However, in contrast to the alcohol neutral framework – and generalizations that include only overconsumption-related distortions – it allows a meaningful role for humans as producers of that pleasant “buzz” one gets by consuming alcohol, and shows clearly that if one extends the alcohol neutral model to take that role into account, it is optimal for humans to be drinking all of their waking hours.” [emphasis added]

DeAngelo and Stulz’s model is clearly a bookshelf theoretical model that would not pass through any reasonable filter if we want to take its results and apply them directly to the real world. In addition to ignoring much of what is important (agency problems, taxes, systemic risk, government guarantees, and other distortionary factors), the results of their main model are predicated on the assets of the bank being riskless and are based on a posited objective function that is linear in the percentage of assets funded with deposits. Given this, the authors naturally obtain a corner solution with assets 100% funded by deposits. (They have no explicit model addressing what happens when bank assets are risky, but they contend that bank leverage should still be “high” when risk is present) …
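The corner-solution point is purely mathematical: a linear objective over a bounded interval is always maximized at an endpoint, so 100% deposit funding falls out of the linearity assumption, not out of anything about actual banks. A minimal sketch of that mechanism (the function and all numbers here are hypothetical illustrations, not the actual DeAngelo–Stulz model):

```python
# Toy illustration: if a bank's posited value is linear in the fraction d of
# assets funded with deposits, the optimum must sit at a corner of [0, 1].
def bank_value(d, liquidity_premium=0.02):
    """Hypothetical bank value, linear in the deposit-funded fraction d:
    each extra unit of deposits earns the assumed liquidity premium."""
    return 1.0 + liquidity_premium * d

# Search the whole feasible range [0, 1] on a grid.
grid = [i / 100 for i in range(101)]
best = max(grid, key=bank_value)
print(best)  # → 1.0  (a strictly increasing linear objective peaks at the boundary)
```

The result says nothing about banks per se: any positive linear coefficient on d forces d = 1, and a negative one would force d = 0. The interior trade-offs the model "rules out" are exactly what could produce an interior optimum.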

DeAngelo and Stulz’s paper is a good illustration of my claim that one can generally develop a theoretical model to produce any result within a wide range. Do you want a model that produces the result that banks should be 100% funded by deposits? Here is a set of assumptions and an argument that will give you that result. That such a model exists tells us very little. By claiming relevance without running it through the filter it becomes a chameleon …

Whereas some theoretical models can be immensely useful in developing intuitions, in essence a theoretical model is nothing more than an argument that a set of conclusions follows from a given set of assumptions. Being logically correct may earn a place for a theoretical model on the bookshelf, but when a theoretical model is taken off the shelf and applied to the real world, it is important to question whether the model’s assumptions are in accord with what we know about the world. Is the story behind the model one that captures what is important or is it a fiction that has little connection to what we see in practice? Have important factors been omitted? Are economic agents assumed to be doing things that we have serious doubts they are able to do? These questions and others like them allow us to filter out models that are ill suited to give us genuine insights. To be taken seriously models should pass through the real world filter.

Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.

Paul Pfleiderer

Reading Pfleiderer’s absolutely fabulous gem of an article reminded me of what H. L. Mencken once famously said:

There is always an easy solution to every problem – neat, plausible and wrong.

Pfleiderer’s perspective may be applied to many of the issues involved when modeling complex and dynamic economic phenomena. Let me take just one example — simplicity.

When it comes to modeling I do see the point in simplicity — emphatically made time after time by e.g. Paul Krugman — as long as it doesn’t impinge on our truth-seeking. “Simple” macroeconomic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconomics do not investigate and make an effort to provide a justification for the credibility of the simplicity-assumptions on which they erect their building, it will not fulfill its tasks. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of “simple” macroeconomic models and theories. So far, I can’t really see that e.g. “simple” microfounded models have yielded very much in terms of realistic and relevant economic knowledge.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though — as Pfleiderer acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified.

Explanation, understanding and prediction of real world phenomena, relations and mechanisms therefore cannot be grounded on simpliciter assuming simplicity. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that when we export them from our models to our target systems they do not change from one situation to another — then they, considered “simple” or not, only hold under ceteris paribus conditions and a fortiori are of limited value for our understanding, explanation and prediction of our real world target system.

The obvious ontological shortcoming of a basically epistemic – rather than ontological – approach, is that “similarity” or “resemblance” tout court do not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts made in the model, if the simplifications made do not result in models similar to reality in the appropriate respects (such as structure, isomorphism etc), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

Constructing simple macroeconomic models somehow seen as “successively approximating” macroeconomic reality is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made by neoclassical macroeconomics – simplicity being one of them – are restrictive rather than harmless and could, a fortiori, not in any sensible meaning be considered approximations at all.

If economists aren’t able to show that the mechanisms or causes that they isolate and handle in their “simple” models are stable in the sense that they do not change when exported to their “target systems”, they do only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanations or predictions of real economic systems.

That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.

As scientists we have to get our priorities right. Ontological under-labouring has to precede epistemology.


Footnote: And of course you understood that the Bacardi/Mondavi paper is fictional. Or?

Adam Smith and the other side of the invisible hand

28 March, 2014 at 13:07 | Posted in Economics | 1 Comment

How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.

Krugman’s vindication of the IS-LM gadget — brilliantly silly

27 March, 2014 at 19:04 | Posted in Economics | 6 Comments

Paul Krugman yesterday responded to my critique of IS-LM. This hardly came as a surprise. For years now, self-proclaimed “proud neoclassicist” Paul Krugman has in endless harpings on the same old IS-LM string told us about the splendour of the Hicksian invention.

His argumentation is nothing new. In an earlier post on his blog, Krugman has argued that “Keynesian” macroeconomics more than anything else “made economics the model-oriented field it has become.” In Krugman’s eyes, Keynes was a “pretty klutzy modeler,” and it was only thanks to Samuelson’s famous 45-degree diagram and Hicks’s IS-LM that things got into place. Although admitting that economists have a tendency to use “excessive math” and “equate hard math with quality” he still vehemently defends — and always has — the mathematization of economics:

I’ve seen quite a lot of what economics without math and models looks like — and it’s not good.

Sure, “New Keynesian” economists like Krugman — and their forerunners, “Keynesian” economists like Paul Samuelson and (young) John Hicks — certainly have contributed to making economics more mathematical and “model-oriented.”

But if these math-is-the-message-modelers aren’t able to show that the mechanisms or causes that they isolate and handle in their mathematically formalized macromodels are stable in the sense that they do not change when we “export” them to our “target systems,” these mathematical models do only hold under ceteris paribus conditions and are consequently of limited value to our understandings, explanations or predictions of real economic systems.

Science should help us disclose the causal forces at work behind the apparent facts. But models — mathematical, econometric, or what have you — can never be more than a starting point in that endeavour. There is always the possibility that there are other (non-quantifiable) variables – of vital importance, and although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the formalized mathematical model.

The kinds of laws and relations that “modern” economics has established are laws and relations about mathematically formalized entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made mathematical-statistical “nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of contemporary mainstream neoclassical endeavours of mathematical economic modeling rather useless. And that also goes for Krugman and the rest of the “New Keynesian” family.

When it comes to modeling philosophy, Paul Krugman has in an earlier piece defended his position in the following words (my italics):

I don’t mean that setting up and working out microfounded models is a waste of time. On the contrary, trying to embed your ideas in a microfounded model can be a very useful exercise — not because the microfounded model is right, or even better than an ad hoc model, but because it forces you to think harder about your assumptions, and sometimes leads to clearer thinking. In fact, I’ve had that experience several times.

The argument is hardly convincing. If people put the enormous amount of time and energy that they do into constructing macroeconomic models, then these models really have to contribute substantially to our understanding and ability to explain and grasp real macroeconomic processes. If not, they should – after somehow perhaps having helped sharpen our thoughts – be thrown into the wastepaper basket (something the father of macroeconomics, Keynes, used to do), and not, as today, be allowed to overrun our economics journals and give their authors celestial academic prestige.

Krugman’s explications on this issue are really interesting also because they shed light on a kind of inconsistency in his art of argumentation. Over the last couple of years Krugman has in more than one article criticized mainstream economics for using too much (bad) mathematics and axiomatics in their model-building endeavours. But when it comes to defending his own position on various issues, he usually ultimately falls back on the same kind of models himself. In his End This Depression Now — just to take one example — Paul Krugman maintains that although he doesn’t buy “the assumptions about rationality and markets that are embodied in many modern theoretical models, my own included,” he still finds them useful “as a way of thinking through some issues carefully.”

When it comes to methodology and assumptions, Krugman obviously has a lot in common with the kind of model-building he otherwise criticizes.

The same critique – that when it comes to defending his own position on various issues he usually ultimately falls back on the same kind of models that he otherwise criticizes – can be directed against his new post. Krugman has said these things before, but I am still waiting for him to really explain HOW the silly assumptions behind IS-LM help him work with the fundamental issues. If one can only use those assumptions with — as Krugman says — “tongue in cheek,” well, why then use them at all? Wouldn’t it be better to use more adequately realistic assumptions and be able to talk clearly, without any tongue in cheek?

I have noticed again and again, that on most macroeconomic policy issues I find myself in agreement with Krugman. To me that just shows that Krugman is right in spite of and not thanks to those neoclassical models — IS-LM included — he ultimately refers to. When he is discussing austerity measures, Ricardian equivalence or problems with the euro, he is actually not using those models, but rather (even) simpler and more adequate and relevant thought-constructions much more in the vein of Keynes.

The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model building is little more than “hand waving” that gives us rather little warrant for making inductive inferences from models to real world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around. As Keynes has it:

Economics is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world. It is compelled to be this, because, unlike the natural science, the material to which it is applied is, in too many respects, not homogeneous through time.

If macroeconomic models – no matter of what ilk – make assumptions, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses of causally relevant mechanisms or regularities can be bridged, are obviously non-justifiable. Macroeconomic theorists – regardless of being New Monetarist, New Classical or ”New Keynesian” – ought to do some ontological reflection and heed Keynes’ warnings on using thought-models in economics:

The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error.

For the nth time organ-grinder Paul Krugman pretends it’s raining and calls upon a Keynes-Hicks-IS-LM model — an oxymoron that really never existed. It’s deeply disappointing. You would expect more from a Nobel laureate.

So let me — respectfully — summarize: A gadget is just a gadget — and brilliantly silly simple models — IS-LM included — do not help us working with the fundamental issues of modern economies any more than brilliantly silly complicated models — calibrated DSGE and RBC models included.

Added March 29: Merijn Knibbe and Phil Pilkington have posts on the subject well worth reading.

IS-LM vs. Minsky

25 March, 2014 at 13:56 | Posted in Economics | 18 Comments


As we all know, Paul Krugman and some other more or less unorthodox mainstream economists keep on arguing that IS-LM is a valid model for analyzing modern economies.

Yours truly disagrees. In this article I want to focus on why IS-LM doesn’t adequately reflect the width and depth of Keynes’s insights on the workings of modern market economies and why we have so much more to learn from Hyman Minsky than a “brilliantly silly” gadget like the IS-LM model.

Almost nothing in the post-General Theory writings of Keynes suggests that he considered Hicks’s IS-LM anywhere near a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory — in the famous 1937 Quarterly Journal of Economics article — there is nothing to even suggest that Keynes would have thought the existence of a Keynes-Hicks-IS-LM theory anything but pure nonsense. John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’s General Theory — “Mr. Keynes and the ‘Classics’. A Suggested Interpretation” — returned to it in a 1980 article — “IS-LM: an explanation” — in the Journal of Post Keynesian Economics. Self-critically he wrote that “the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better — is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate.” What Hicks acknowledges in 1980 is basically that his original IS-LM model ignored significant parts of Keynes’s theory. IS-LM is inherently a temporary general equilibrium model. However, much of the discussion we have in macroeconomics is about timing and the speed of relative adjustments of quantities, commodity prices and wages — on which IS-LM doesn’t have much to say.

IS-LM to a large extent forces the analysis into a static comparative equilibrium setting that doesn’t in any substantial way reflect the processual nature of what takes place in historical time. To me Keynes’s analysis is in fact inherently dynamic — at least in the sense that it was based on real historic time and not the logical-ergodic-non-entropic time concept used in most neoclassical model building. And as Niels Bohr used to say — thinking is not the same as just being logical …

IS-LM reduces interaction between real and nominal entities to a rather constrained interest mechanism which is far too simplistic for analyzing complex financialised modern market economies.
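Just how constrained that interest mechanism is becomes visible when one writes the textbook version down: the whole interaction between the goods market and the money market collapses into two linear equations tied together by a single interest rate, solved once and for all in a timeless equilibrium. A bare-bones sketch (all parameter values are hypothetical, chosen only to make the mechanics visible):

```python
# Standard textbook IS-LM reduced to two linear equations in income Y and
# the interest rate r (all numbers hypothetical):
#   IS:  Y = a + b*(Y - T) + (c - d*r) + G      (goods market)
#   LM:  M/P = k*Y - h*r                        (money market)
a, b, T, c, d, G = 200.0, 0.75, 100.0, 200.0, 25.0, 100.0
k, h, real_money = 1.0, 100.0, 1000.0

# Collect autonomous spending on the IS side:
#   (1 - b)*Y + d*r = a - b*T + c + G
is_rhs = a - b * T + c + G

# Substitute Y = (real_money + h*r) / k from LM into IS and solve for r:
r = (is_rhs - (1 - b) * real_money / k) / ((1 - b) * h / k + d)
Y = (real_money + h * r) / k
print(Y, r)  # → 1350.0 3.5
```

Everything dynamic — expectations, finance, historical time — has to be smuggled in through shifts of the two curves; within the model itself, r is the only channel connecting the real and monetary sides, which is exactly the simplification criticized above.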

IS-LM gives no place for real money, but rather trivializes the role that money and finance play in modern market economies. As Hicks, commenting on his IS-LM construct, had it in 1980 — “one did not have to bother about the market for loanable funds.” From the perspective of modern monetary theory, it’s obvious that IS-LM to a large extent ignores the fact that money in modern market economies is created in the process of financing — and not as IS-LM depicts it, something that central banks determine.

IS-LM is typically set in a current values numéraire framework that definitely downgrades the importance of expectations and uncertainty — and a fortiori gives too large a role to interest rates as ruling the roost when it comes to investments and liquidity preferences. In this regard it is actually as bad as all the modern microfounded Neo-Walrasian-New-Keynesian models where Keynesian genuine uncertainty and expectations aren’t really modelled. Especially the two-dimensionality of Keynesian uncertainty — both a question of probability and “confidence” — has been impossible to incorporate into this framework, which basically presupposes people following the dictates of expected utility theory (high probability may mean nothing if the agent has low “confidence” in it). Reducing uncertainty to risk — implicit in most analyses building on IS-LM models — is nothing but hand waving. According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the “confidence” or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by “modern” social sciences. And often we “simply do not know.” As Keynes writes in A Treatise on Probability:

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Models can never be more than a starting point in the endeavour of finding causal mechanisms. Consequently we cannot — from a relevant and realistic point of view — simpliciter presuppose that what has worked before will continue to do so in the future. How strange, then, that macroeconomic models — IS-LM included — as a rule do not even touch upon these aspects of scientific methodology, which seem to be so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess on why this is so would be that Keynes’s concepts are not possible to squeeze into a single calculable numerical “probability.” In the quest for quantities — IS-LM models included — one turns a blind eye to qualities and looks the other way.

Why is this important? Because the kind of involuntary unemployment and low investment activity that intermittently characterizes modern market economies is basically impossible to understand without weighing in the kind of uncertainties and expectations that was at the forefront of Keynes’s analysis.

IS-LM not only ignores genuine uncertainty, but also the essentially complex and cyclical character of economies and investment activities, speculation, endogenous money, labour market conditions, and the importance of income distribution. And as Axel Leijonhufvud so eloquently notes on IS-LM economics — “one doesn’t find many inklings of the adaptive dynamics behind the explicit statics.” Most of the insights on dynamic coordination problems that made Keynes write General Theory are lost in the translation into the IS-LM framework.

Thirty years ago — as a young research stipendiary in the U.S. — yours truly had the great pleasure and privilege of having Hyman Minsky as a teacher. He was a great inspiration at the time. He still is — and the points I have made here are some of the main reasons why I still think Hyman was right when maintaining that “Keynes without uncertainty is rather like Hamlet without the Prince,” and characterizing IS-LM as an “unfair and naive representation of Keynes’s subtle and sophisticated views”:

The glib assumption made by Professor Hicks in his exposition of Keynes’s contribution that there is a simple, negatively sloped function, reflecting the productivity of increments to the stock of capital, that relates investment to the interest rate is a caricature of Keynes’s theory of investment … which relates the pace of investment not only to prospective yields but also to ongoing financial behavior …

The conclusion to our argument is that the missing step in the standard Keynesian theory was the explicit consideration of capitalist finance within a cyclical and speculative context. Once capitalist finance is introduced and the development of cash flows … during the various states of the economy is explicitly examined, then the full power of the revolutionary insights and the alternative frame of analysis that Keynes developed becomes evident …

The greatness of The General Theory was that Keynes visualized [the imperfections of the monetary-financial system] as systematic rather than accidental or perhaps incidental attributes of capitalism … Only a theory that was explicitly cyclical and overtly financial was capable of being useful …

As all students of economics know, time is limited. Given that, there have to be better ways to optimize its use than spending hours and hours working through or constructing irrelevant economic models. I would rather recommend that my students allocate some of their time to studying great forerunners like Keynes and Minsky, which would help them construct better, more realistic and relevant economic models — models that really help us explain and understand reality.

Economics in need of a re-think

24 March, 2014 at 11:10 | Posted in Economics | 3 Comments

As the old joke goes, the questions in economics exams are the same every year; only the answers change. Just 40 years ago, the emergence of stagflation was prompting the monetarists to challenge the disciples of John Maynard Keynes, but after the crisis of 2008 the enthusiasm for Keynesian stimulus was discovered all over again.

In a new book George Cooper, a fund manager, suggests that the economics profession is itself in a state of crisis. It needs the kind of shift in thinking that Copernicus brought to astronomy, or Charles Darwin to biology, Mr Cooper argues. He cites Thomas Kuhn’s theory of scientific revolutions …

Economics fits the pre-revolutionary template, he argues. It has fractured into many incompatible schools of thought. It has created complex models without noticeably improving the accuracy of its predictions. And many of those models ignore features of the real world.

Neoclassical models, for example, are built on the assumptions that individuals make their own decisions based on self-interest, that they seek to maximise their welfare and that the result is a stable system that tends to equilibrium. Yet as behavioural studies have shown, individuals are not always rational optimisers. Mr Cooper suggests that neoclassical economists treat evidence that humans are not rational as “problematic, inexplicable and annoying—but also ignorable.”

Instead, he thinks people act as competitors, not maximisers … The logic of competition helps to explain, in his view, why there was so little economic growth before the Industrial Revolution. This lack of growth is a problem for those who believe that the West’s modern difficulties are caused by excessive government regulation and high taxes; the world before 1700 had minimal government and low taxes …

Today’s problems, according to Mr Cooper, are caused by the high debts built up by those lower down the social scale; debt transfers money from poor to rich. What is needed is thus an injection of money at the bottom of the pyramid, via fiscal stimulus, rather than “quantitative easing”, which pushes up asset prices and benefits the rich.

Whether mainstream economists will take Mr Cooper’s ideas seriously is doubtful; his book has no formulae (and few statistics) … But for those with an open mind his criticisms of the economics profession, and suggestions for new ways forward, will be extremely welcome.

Buttonwood/The Economist

I started out reading this book with high expectations, since Cooper’s earlier book The Origin of Financial Crises, with its well-argued attack on today’s economic orthodoxy, had been such a rewarding and enjoyable must-read. After having read the new book, however, I reluctantly have to concur with The Economist review — it is doubtful whether economists will take Cooper’s argumentation seriously. The categorization of economics used by Cooper is, from a doctrinal point of view, problematic, to say the least. The proposed new alternative framework is also far too vague to really have any bite. It in fact echoes much of the theory of positional goods that we associate with Fred Hirsch and Robert Frank — good, interesting theories to be sure, but in no way self-evidently non-mainstream.

If anything, this underlines how important it is to realize that there are no short cuts in science. Or as a noted German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

Greg Mankiw — lazy knee-jerk quasi-libertarian philosophy

24 March, 2014 at 07:58 | Posted in Economics | 1 Comment

I have let Greg Mankiw’s latest piece for the New York Times simmer in my brain for a few days, and now I have to let some of the noxious vapors escape.

Here’s what he says. Most economic choices are complex, with positive and negative effects on many parties. The utilitarian calculus, the greatest good for the greatest number, doesn’t work, because it means helping some people by hurting others. Unless there is a clear case of market failure—externalities—it is best to defer to the voluntary choices made by individuals in a free market.

What makes it difficult to respond is the multitude of errors and omissions in the Mankiw formulation. It’s hard to know where to begin, so let me just make a list. All of these are interconnected, of course, so the whole list is more than the sum of its elements.

1. Externalities are not the only market failure! It’s scary that one of the planet’s most widely read undergraduate textbook authors could say this. For the record, you’ve also got imperfect competition, public goods and asymmetric information, and together they apply to a lot of economic terrain.

2. A different issue, not yet classified under market failure, is multiplicity of equilibrium. I’ve written a lot on this in the past and won’t repeat myself here, but interaction effects between economic agents, as well as the goods and services they produce, routinely make possible potential returns to collective action. To not see this is to not see the “social” in social science.

3. It is true that market transactions, if participants are self-interested and rational, filter out possible actions that make some better off at the expense of others. But surely to rule out all social change that is not voluntarily accepted on all sides is to commit to an extreme conservatism. The world we live in is the product of the past, with all the irrationalities and inequalities that have been transmitted to us by history. Maybe, just maybe, we might want to rectify some of them, even if history’s beneficiaries are against it.

4. Embedded in Mankiw’s quasi-libertarianism is a profound distrust of democracy. No one, he says, can reasonably weigh the competing claims of the winners and losers from a policy proposal. Well, that’s what democracy is supposed to do. In theory, we discuss it. We ask people to not simply assert their interests but give reasons why society should defer to them, and then we assess these reasons. Obviously, actually existing democracy falls far short of the ideal, but its performance is not completely worthless, and there is untapped potential even in existing institutions to do this job a lot better. It isn’t hard to find examples from modern history where democracies have risen to the occasion and brought about social change that, in hindsight, most of us now endorse. I sentence Mankiw to 40 hours of mandatory service to his own brain, in the form of reading John Dewey on democratic theory. He can get started here.

I can see the value in giving people plenty of scope to make voluntary arrangements with one another. There is real freedom involved, and it’s important that there be lots of opportunities for individuals to take initiative as they see fit. But this is one value among several, and its weight varies from one policy context to the next. Knee-jerk libertarianism is simply lazy philosophy.

Peter Dorman
