Wren-Lewis still doesn’t get it right!

12 Jul, 2012 at 22:04 | Posted in Economics, Theory of Science & Methodology | 1 Comment

Oxford professor Simon Wren-Lewis had a piece – in part a response to my article Wren-Lewis drivelling on macroeconomics – on heterodox versus mainstream macroeconomics on his blog a couple of days ago. Wren-Lewis confesses to being surprised by what he calls “The Great Divide” between the two types of macroeconomics. Although he claims to be sympathetic to parts of the heterodox project, he maintains that the “rejectionist strategy is of course unlikely to win friends within the mainstream.”

Even though I – rather self-evidently – agree with the last statement, I think Wren-Lewis mostly confuses the issue.

From the fact that Wren-Lewis, Krugman, yours truly and many other heterodox economists share the same position in the ongoing policy debates, it by no means follows that – on a deep theoretical level – “New Keynesian” macroeconomists share the same theories and models as those that, for example, Post Keynesian economists work with.

The Great Divide comes from both evidence and logic showing that neoclassical economic theory is not adequate for analyzing, explaining or understanding modern economies. And that is why we have to be rejectionist when it comes to “New Keynesian” or any other ilk of mainstream neoclassical macroeconomics!

The root of the deep theoretical divide in modern macroeconomics ultimately goes back to how we look upon the data we are handling. In modern neoclassical macroeconomics – Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and “New Keynesian” – variables are treated as if drawn from a known “data-generating process” that unfolds over time and for which we therefore have access to heaps of historical time-series data. If we do not assume that we know the “data-generating process” – if we do not have the “true” model – the whole edifice collapses.

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth that we know the “data-generating process” and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability distributions with known means and variances.
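To see concretely what is at stake, consider a toy simulation (a minimal sketch in Python; the AR(1) processes and the break point are illustrative assumptions, not anyone’s actual model). An econometrician who learns the “known” mean and variance from a long pre-break sample is systematically wrong about everything that happens after a structural break – and nothing in the pre-break data announces that the break is coming:

```python
import numpy as np

rng = np.random.default_rng(42)

# The "known DGP" assumption: a stationary AR(1),
#   y_t = rho * y_{t-1} + e_t,  e_t ~ N(0, sigma^2),
# whose moments an econometrician could in principle recover
# from a long enough run of historical time-series data.
def simulate_ar1(n, rho, sigma, y0=0.0):
    y = np.empty(n)
    y[0] = y0
    for t in range(1, n):
        y[t] = rho * y[t - 1] + rng.normal(0.0, sigma)
    return y

# A structural break at t = 500: persistence and volatility both change.
pre = simulate_ar1(500, rho=0.5, sigma=1.0)
post = simulate_ar1(500, rho=0.95, sigma=3.0, y0=pre[-1])

# Moments carefully estimated on the first regime...
print("pre-break  mean/std:", round(pre.mean(), 2), round(pre.std(), 2))
# ...say almost nothing about the process after the break.
print("post-break mean/std:", round(post.mean(), 2), round(post.std(), 2))
```

If the “urn” itself changes mid-sample, the “known means and variances” estimated from history are simply the wrong numbers going forward.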

After thoroughly neglecting anything resembling a real-world financial system, one can’t just go on as if nothing has happened and simply append financial considerations to neoclassical macromodels where finance is more or less equated with the neoclassical thought-construction of a “market for loanable funds.”

Uncertainty – both ontological and epistemological – makes any hope of consistently integrating financial crises into neoclassical macroeconomic models totally unfounded, since those models are based on assumptions of rational expectations, representative actors and dynamic stochastic general equilibrium – assumptions that convey the view that markets – give or take a few rigidities – are efficient!

Finance has its own dimension, and if taken seriously, its effect on an analysis must modify the whole theoretical system and not just be added as an unsystematic appendage. Finance is fundamental to our understanding of modern economies, and – as Johan Åkerman used to say – acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, just isn’t enough.

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if there are no parts or aspects of the model that have relevant and important counterparts in the real-world target system? The burden of proof lies with those theoretical economists who think they have contributed something of scientific relevance without even hinting at a bridge that would enable us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something about the target system. But purpose-built assumptions, like invariance, made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

There are economic methodologists and philosophers who argue for a less demanding view of modeling and theorizing in economics. And to some theoretical economists it is deemed quite enough to consider economics a mere “conceptual activity” where the model is not so much an abstraction from reality as a kind of “parallel reality”. By considering models as such constructions, the economist distances the model from the intended target, demanding only that the models be credible, thereby enabling him to make inductive inferences to the target systems.

But what gives license to this leap of faith, this “inductive inference”? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models are in a strict sense false (necessarily building in part on false assumptions), deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of “surrogate models”.

Models do not only face theory. They also have to look to the world. But being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of “appropriate similarity and plausibility” (Pålsson Syll 2001:60). One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be a long way from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity are especially important when it comes to microfounded macromodels. It can never be enough that these models are somehow regarded as internally consistent. One must always also ask whether they are consistent with the data. Internal consistency without external validity is worth nothing.

Regarding microfoundations – and a fortiori rational expectations and representative agents – we have to be aware that they serve a particular theoretical purpose. And as the history of macroeconomics during the last thirty years has shown, the microfoundation programme for macroeconomics so eagerly pursued by “New Keynesian” macroeconomists is only methodologically consistent within the framework of a (deterministic or stochastic) general equilibrium analysis. In no other context has it been possible to incorporate this kind of microfoundations, with its “forward-looking optimizing individuals,” into macroeconomic models.

This is of course not by accident. General equilibrium theory is basically nothing other than an endeavour to consistently generalize the microeconomics of individuals and firms on to the macroeconomic level of aggregates.

But it obviously doesn’t work. The analogy between microeconomic and macroeconomic behaviour is misplaced. Empirically, science-theoretically and methodologically, neoclassical microfoundations for macroeconomics are defective. Tenable foundations for macroeconomics really have to be sought elsewhere.

As so often among “New Keynesian” macroeconomists, Simon Wren-Lewis maintains that there are no alternatives to microfoundations. But of course there are alternatives to neoclassical general equilibrium microfoundations! Behavioural economics and Frydman and Goldberg’s “imperfect knowledge” economics are two noteworthy examples that easily come to mind.

And those of us who have not forgotten the history of our discipline, and have not bought the freshwater nursery tale of Lucas et consortes that Keynes was not “serious thinking,” can easily see that there exists a macroeconomic tradition inspired by Keynes – one that has absolutely nothing to do with any New Synthesis or “New Keynesianism”.

Its ultimate building-block is the perception of genuine uncertainty and the fact that people often “simply do not know.” Real actors can’t know everything, and their acts and decisions cannot simply be summed or aggregated without the economist risking succumbing to “the fallacy of composition”.
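The classic illustration is the paradox of thrift, sketched here as a toy Keynesian-cross calculation (a minimal sketch; the numbers are illustrative assumptions, not a calibrated model). What is rational for each household taken separately – saving more – does not aggregate the way simple summation suggests:

```python
# Toy paradox of thrift: households consume C = (1 - s) * Y and try to
# save a fraction s of income; investment I is exogenous. Equilibrium
# requires Y = C + I, hence Y = I / s, and aggregate saving S = s * Y = I
# no matter how hard everyone tries to save.

I = 100.0  # exogenous investment

for s in (0.1, 0.2, 0.4):  # everyone tries to save a larger fraction...
    Y = I / s              # ...equilibrium income falls...
    S = s * Y              # ...and aggregate saving is unchanged
    print(f"saving rate {s:.0%}: income {Y:7.1f}, aggregate saving {S:6.1f}")
```

Each household’s decision is perfectly sensible on its own, yet the aggregate outcome – lower income, unchanged total saving – is invisible from the individual level.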

Instead of basing macroeconomics on unreal and unwarranted generalizations of microeconomic behaviour and relations, it is far better to accept the ontological fact that the future is to a large extent uncertain, and to conduct macroeconomics on the basis of this fact of reality.

The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place – instead of simply conjuring the problem away by assuming uncertainty to be reducible to stochastic risk. That is scientific cheating. And it has been going on for too long now.
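Genuine uncertainty cannot, by definition, be simulated – the relevant probabilities are unknown. But even a toy example of mere misspecification (a minimal sketch; the Student-t data and the assumed normal model are illustrative assumptions) hints at what the reduction of uncertainty to “known” stochastic risk costs:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# The data are fat-tailed (Student-t, 3 degrees of freedom), but the
# modeller assumes a normal distribution with the "known" variance
# estimated from the sample.
returns = rng.standard_t(df=3, size=1_000_000)

sigma = returns.std()
threshold = -5 * sigma  # a "5-sigma" crash

empirical = (returns < threshold).mean()  # how often crashes actually occur
assumed = norm.cdf(-5.0)                  # what the normal model predicts

print(f"empirical 5-sigma crash frequency: {empirical:.2e}")  # on the order of 1e-3
print(f"normal-model probability:          {assumed:.2e}")    # about 3e-7
```

The “known” model understates the crash frequency by several orders of magnitude – and this is still only misspecified risk, not the genuine uncertainty under which there is no true distribution waiting to be estimated at all.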

The Keynes-inspired building-blocks are there. But there is admittedly a long way to go before the whole construction is in place. The sooner we are intellectually honest and ready to admit that the microfoundationalist programme has reached a dead end, however, the sooner we can redirect our aspirations and knowledge to more fruitful endeavours.

As a young research fellow in the U.S. at the beginning of the 1980s, yours truly had the great pleasure and privilege of having Hyman Minsky as a teacher. He was a great inspiration at the time. He still is.

I am reminded of Hyman because Wren-Lewis’s attitude towards heterodox economists is in many ways also the attitude of Paul Krugman. Krugman had a post up some time ago, responding to a paper by Steve Keen on how some people – like Krugman – get it so wrong on the economics of Hyman Minsky. Krugman said that his basic reaction to discussions about “What Minsky Really Meant” or “What Keynes Really Meant” is that “Krugman Doesn’t Care.” The reason given for this rather debonair attitude is allegedly that the history of economic thought may be OK, but what really counts is whether reading Minsky or Keynes gives birth to new and interesting insights and ideas. Economics is not religion, and simply referring to authority is not an accepted way of arguing in science.

Although I have a lot of sympathy for Krugman’s view on authority, there is a somewhat disturbing and unbecoming coquetry – and certainly not for the first time, as his rather controversial speech at Cambridge last year, commemorating the 75th anniversary of Keynes’s General Theory, bears witness – in his attitude towards the great forerunners he is discussing. Sometimes it is easier to see things if you can stand on the shoulders of elders and giants. If Krugman took the time and really studied Keynes and Minsky, I’m sure even he would learn a lot. Krugman is a great economist, but it smacks not a little of hubris to simply say “if where you take the idea is very different from what the great man said somewhere else in his book, so what?” Physicists arguing like that when discussing Newton, Einstein, Bohr or Feynman would not be taken seriously.

In most macroeconomic policy discussions I find myself in agreement with both Krugman and Wren-Lewis. To me that just shows that they are right in spite of, and not thanks to, the models they ultimately refer to.

So why don’t “New Keynesian” macroeconomists just renounce the ideas of rational expectations, representative actors and stochastic risk (rather than genuine uncertainty) once and for all? If and when they do, I’m sure many heterodox economists – on the other side of The Great Divide – will be more than happy to welcome all these former mainstream economists wanting to contribute to making macroeconomics once more into the realist and relevant science whose foundations Keynes laid more than 75 years ago.

Added 13/7: Unlearning economics also has a good go at explaining the rejectionist attitude of heterodox economists vis-à-vis mainstream neoclassical economics:

The reason heterodox economists remain dissatisfied with mainstream economics, no matter how many modifications the latter adds to its core framework, is that there is always an implication that, in the absence of various real world ‘frictions’, the economy would function like a smoothly oiled machine. That is: assuming perfect information, mobility, ‘small’ firms, no unions, flexible prices/wages and so forth, the economy would achieve full employment, with near perfect utilisation of resources, and stay there, perhaps buffeted by mild external shocks.

New Keynesians and New Classicals sometimes act like bitter rivals, but mainly they only differ on which ‘frictions’ should be present or not (this is an oversimplification of the disagreement, of course). The original New Classical models started with economies that are always in equilibrium, preferences are constant, and competition is perfect. New Keynesian models add imperfect competition, sticky prices, transaction costs and so forth. The newest papers go further and add heterogeneous agents (which generally means two), changing preferences, and other ‘frictions.’ However, it is assumed that if the economy were rid of some specific features/characteristics, it would function similarly to one of the core Walrasian or Arrow-Debreu style formulations.

So is it not true that real world mechanics prevent things from going as smoothly as they might do in absence of those mechanics? Well, partially. But according to heterodox economists, capitalism has inherent tendencies to crisis, unemployment and misallocation anyway.

1 Comment

  1. Great post. I think the closest New Keynesian to moving over at this point is probably Joe Stiglitz. If he breaks (and seriously, he’s about as far from the mainstream as you could possibly be while still being in it) I would expect more to start breaking.

