Sacrifice

20 Jan, 2016 at 17:27 | Posted in Varia | Comments Off on Sacrifice

 

‘Deep parameters’ and microfoundations

20 Jan, 2016 at 10:31 | Posted in Economics | Comments Off on ‘Deep parameters’ and microfoundations

In a post last week, Simon Wren-Lewis discussed whether modern academic macroeconomics is eclectic or not. When it comes to methodology, his conclusion seems to be that it is not:

The New Classical Counter Revolution of the 1970s and 1980s … was primarily a revolution about methodology, about arguing that all models should be microfounded, and in terms of mainstream macro it was completely successful. It also tried to link this to a revolution about policy, about overthrowing Keynesian economics, and this ultimately failed. But perhaps as a result, methodology and policy get confused. Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.

In an earlier post he elaborated on why the New Classical Counterrevolution was so successful in replacing older theories, despite the fact that the New Classical models weren’t able to explain what happened to output and inflation in the 1970s and 1980s:

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example …

However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM) …

The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions …

So why does this matter? … If mainstream academic macroeconomists were seduced by anything, it was a methodology – a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the New Classical Counterrevolution. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous!

Wren-Lewis seems to be impressed by the ‘rigour’ brought to macroeconomics by the New Classical counterrevolution and its rational expectations, microfoundations and ‘Lucas Critique’.

I fail to see why.

Wren-Lewis’s ‘portrayal’ of rational expectations is not as innocent as it may look. Rational expectations in the neoclassical economists’ world implies that relevant distributions have to be time independent. This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality, it strains one’s beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time. It is — to say the least — very difficult to see any similarity between these modelling assumptions and the expectations of real persons. In the world of the rational expectations hypothesis we are never disappointed in any other way than when we lose at the roulette wheel. But real life is not an urn or a roulette wheel. And that is also the reason why allowing for cases where agents ‘make predictable errors’ in the New Keynesian models doesn’t take us any closer to a relevant and realist depiction of actual economic decisions and behaviours. If we really want to have anything of interest to say about real economies, financial crises, and the decisions and choices real people make, we have to replace the rational expectations hypothesis with more relevant and realistic assumptions concerning economic agents and their expectations than childish roulette and urn analogies.
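To make the ensemble-versus-single-realization point concrete, here is a minimal simulation sketch (my own toy illustration with made-up numbers, not any particular macro model): a multiplicative growth process whose ensemble average grows period after period even though almost every individual history decays. For a non-ergodic process like this, the distribution over an imagined ensemble of ‘economy-worlds’ tells us next to nothing about the single history that actually gets realized.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each period, 'wealth' is multiplied by 1.5 or 0.6 with equal probability.
# The ensemble mean grows 5% per period (E[factor] = 1.05), but the
# time-average growth factor of any single history is sqrt(1.5 * 0.6) ~ 0.95,
# so the typical realization decays towards zero.
T, N = 100, 100_000                          # periods, number of 'economy-worlds'
factors = rng.choice([1.5, 0.6], size=(N, T))
final = np.cumprod(factors, axis=1)[:, -1]   # wealth at the end of each history

print(f"ensemble mean     : {final.mean():10.2f}")     # grows; driven by a few lucky paths
print(f"median history    : {np.median(final):10.4f}") # the 'typical' economy has shrunk
print(f"share below start : {(final < 1.0).mean():.1%}")
```

The ensemble average is real enough as a mathematical object, but it is not a property any single economy will ever experience, which is the whole problem with treating an economy as one draw from a known, time-independent distribution.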

The predominant strategy in mainstream macroeconomics today is to build models and make things happen in these ‘analogue-economy models.’ But although macro-econometrics may have supplied economists with rigorous replicas of real economies, if the goal of theory is to be able to make accurate forecasts or explain what happens in real economies, this ability to — ad nauseam — construct toy models does not give much leverage.

‘Rigorous’ and ‘precise’ New Classical models — and that goes for the ‘New Keynesian’ variety too — cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence has ever been presented.

And applying a ‘Lucas critique’ to New Classical and ‘New Keynesian’ models, it is obvious that they too fail.

Changing ‘policy rules’ cannot just be presumed not to influence investment and consumption behavior and a fortiori technology, thereby contradicting the invariance assumption. Technology and tastes cannot live up to the status of an economy’s deep and structurally stable Holy Grail. They too are part and parcel of an ever-changing and open economy. Lucas’s hope of being able to model the economy as ‘a FORTRAN program’ and ‘gain some confidence that the component parts of the program are in some sense reliable prior to running it’ therefore seems – from an ontological point of view – totally misdirected. The failure of the attempt to anchor the analysis in the allegedly stable deep parameters ‘tastes’ and ‘technology’ shows that if you neglect ontological considerations pertaining to the target system, reality ultimately gets its revenge when questions of bridging and the exportation of model exercises are at last laid on the table.
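Lucas’s point itself is easy to reproduce in a toy simulation (a hypothetical sketch of my own, with invented parameter values, not Lucas’s program): estimate a reduced-form inflation-unemployment relation under one policy rule, let expectations adjust when the rule changes, and watch the estimated relation, and any policy advice built on it, break down.

```python
import numpy as np

rng = np.random.default_rng(0)

U_STAR, B = 5.0, 0.8   # 'natural' unemployment rate and slope (made-up values)

def simulate(mu, n=5_000):
    """Economy under a policy rule targeting inflation mu. Agents' expected
    inflation equals the target, so only inflation *surprises* lower
    unemployment: u = U_STAR - B * (pi - mu) + noise."""
    pi = mu + rng.normal(0.0, 1.0, n)                     # realized inflation
    u = U_STAR - B * (pi - mu) + rng.normal(0.0, 0.2, n)
    return pi, u

# 1) Fit a reduced-form 'law' u = a + c*pi on data from the old rule (mu = 2).
pi_old, u_old = simulate(mu=2.0)
c, a = np.polyfit(pi_old, u_old, 1)
print(f"fitted reduced form: u = {a:.2f} + ({c:.2f}) * pi")

# 2) A policymaker treats the fitted line as structural and raises the
#    target to mu = 6, expecting unemployment a + c*6. But expectations
#    move with the rule, the reduced form shifts, and the gain vanishes.
pi_new, u_new = simulate(mu=6.0)
print(f"predicted u at pi = 6 : {a + c * 6.0:.2f}")   # ~1.8, from the old 'law'
print(f"actual mean u         : {u_new.mean():.2f}")  # ~5.0, back at U_STAR
```

Nothing here requires believing in rational expectations as a description of real people; the sketch only shows that reduced-form parameters estimated under one rule need not survive a change of rule, which is exactly the critique that can be turned back on the New Classical models themselves.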


People like Dani Rodrik and Simon Wren-Lewis are proud of having an ever-growing smorgasbord of models to cherry-pick from (as long as, of course, the models do not question the standard modeling strategy) when performing their analyses. The ‘rigorous’ and ‘precise’ deductions made in these closed models, however, are not in any way matched by a similar stringency or precision when it comes to what ought to be the most important stage of any research — making statements about and explaining things in real economies. Although almost every mainstream economist holds the view that thought-experimental modeling has to be followed by confronting the models with the reality they indirectly aim to predict, explain, or understand, at that point they all of a sudden become exceedingly vague and imprecise. It is as if all the intellectual force has been invested in the modeling stage and nothing is left for what really matters — what exactly these models teach us about real economies.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward a single millimeter if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about real-world economies.

Proving things ‘rigorously’ in mathematical models is at most a starting-point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

Axiomatics — the economics fetish

18 Jan, 2016 at 20:40 | Posted in Theory of Science & Methodology | 4 Comments

Mainstream — neoclassical — economics has become increasingly irrelevant to the understanding of the real world. The main reason for this irrelevance is the failure of economists to match their deductive-axiomatic methods with their subject.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I am not aware, but would be interested to know, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice …

This doesn’t mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism …

The fetish for axiomatization in economics can largely be traced to Gerard Debreu’s great work, The Theory of Value: An Axiomatic Analysis of Economic Equilibrium … The subsequent work was then brilliantly summarized and extended in another great work, General Competitive Analysis by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance …

I think that it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, other far more authoritative critics than I, like Mark Blaug and Richard Lipsey have complained about the insistence of modern macroeconomics on microfounded, axiomatized models regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn’t the author’s fault; a coherent justification doesn’t exist.

David Glasner

 
It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models — as long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live — is beyond my imagination. Sure, the simplicity that axiomatics and analytical arguments bring to economics is attractive to many economists. But …

Simplicity, however, has its perils. It is one thing to choose as one’s first object of theoretical study the type of arguments open to analysis in the simplest terms. But it is quite another to treat this type of argument as a paradigm and to demand that arguments in other fields should conform to its standards regardless, or to build up from a study of the simplest forms of argument alone a set of categories intended for application to arguments of all sorts: one must at any rate begin by inquiring carefully how far the artificial simplicity of one’s chosen model results in these logical categories also being artificially simple. The sorts of risks one runs otherwise are obvious enough. Distinctions which all happen to cut along the same line for the simplest arguments may need to be handled quite separately in the general case; if we forget this, and our new-found logical categories yield paradoxical results when applied to more complex arguments, we may be tempted to put these results down to defects in the arguments instead of in our categories; and we may end up by thinking that, for some regrettable reason hidden deep in the nature of things, only our original, peculiarly simple arguments are capable of attaining to the ideal of validity.

Stephen Toulmin

The gussied up economics of Tweedledum and Tweedledee

17 Jan, 2016 at 19:09 | Posted in Economics | 2 Comments

“Of course, there were exceptions to these trends: a few economists challenged the assumption of rational behavior, questioned the belief that financial markets can be trusted and pointed to the long history of financial crises that had devastating economic consequences. But they were swimming against the tide, unable to make much headway against a pervasive and, in retrospect, foolish complacency.” —Paul Krugman, New York Times Magazine, September 6, 2009

Amen. While normal ecclesiastic practice places this word at the end of the prayer, on this occasion it seems right to put it up front. In two sentences, Professor Paul Krugman … has summed up the failure of an entire era in economic thought, practice, and policy discussion.

And yet, there is something odd about the role of this short paragraph in an essay of over 6,500 words. It’s a throwaway. It leads nowhere …


Krugman’s entire essay is about two groups, both deeply entrenched at (what they believe to be) the top of academic economics. Both are deeply preoccupied with their status and with a struggle for influence and for academic power and prestige — against the other group. Krugman calls them “saltwater” and “freshwater” economists; they tend to call themselves “new classicals” and the “new Keynesians” — although one is not classical and the other is not Keynesian. One might speak of a “Chicago School” and an “MIT School” — after the graduate programs through which so many passed. In truth, there are no precise labels, because the differences between them are both secondary and obscure.

The two groups share a common perspective, a preference for thinking along similar lines. Krugman describes this well, as a “desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.” Exactly so. It was in part about elegance — and in part about showing off. It was not about … the economy. It was not a discussion of problems, risks, dangers, and policies. In consequence, the failure was shared by both groups. This is the extraordinary thing. Economics was not riven by a feud between Pangloss and Cassandra. It was all a chummy conversation between Tweedledum and Tweedledee. And if you didn’t think either Tweedle was worth much — well then, you weren’t really an economist, were you?

Professor Krugman contends that Tweedledum and Tweedledee “mistook beauty for truth.” The beauty in question was the “vision of capitalism as a perfect or nearly perfect system.” To be sure, the accusation that a scientist — let alone an entire science — was seduced by beauty over truth is fairly damaging. But it’s worth asking, what exactly was beautiful about this idea? Krugman doesn’t quite say. He does note that the mathematics used to describe the alleged perfection was “impressive-looking” — ”gussied up” as he says, “with fancy equations.” It’s a telling choice of words. “Impressive-looking”? “Gussied up”? These are not terms normally used to describe the Venus de Milo.

James K. Galbraith

Wren-Lewis and the Rodrik smorgasbord view of economic models

17 Jan, 2016 at 16:20 | Posted in Economics | 13 Comments

In December 2015 yours truly ran a series of eight posts on this blog discussing Dani Rodrik’s Economics Rules (Oxford University Press, 2015).

There sure is much in the book I like and appreciate. It is one of those rare examples where a mainstream economist — instead of just looking the other way — takes the time to ponder the tough and deep science-theoretic and methodological questions that underpin the economics discipline.

But (as I argue at length in a forthcoming journal article) there is also a very disturbing apologetic tendency in the book to blame all of the shortcomings on economists and to depict economics itself as a problem-free smorgasbord collection of models. If you just choose the appropriate model from the immense and varied smorgasbord, there is no problem. It is as if all problems in economics could be conjured away if only we made the proper model selection.

Today, Oxford macroeconomist Simon Wren-Lewis has a post up on his blog on Rodrik’s book — and is totally überjoyed:

The first and most important thing to say is this is a great book … because it had a way of putting things which was illuminating and eminently sensible. Illuminating is I think the right word: seeing my own subject in a new light, which is something that has not happened to me for a long time. There was nothing I could think of where I disagreed …

The key idea is that there are many valid models, and the goal is to know when they are applicable to the problem in hand …

Lots of people get hung up on the assumptions behind models: are they true or false, etc. An analogy I had not seen before but which I think is very illuminating is with experiments. Models are like experiments. Experiments are designed to abstract from all kinds of features of the real world, to focus on a particular process or mechanism (or set of the same). The assumptions of models are designed to do the same thing.

“Models are like experiments.” I’ve run into that view many times over the years when having discussions with mainstream economists on their ‘thought experimental’ obsession — and I still think it’s too vague and elusive to be helpful. Just repeating the view doesn’t provide the slightest reason to believe it.

Although perhaps thought-provoking to some, I find the view on experiments offered too simplistic. And for several reasons — but mostly because the kind of experimental empiricism it favours is largely untenable.

Experiments are very similar to theoretical models in many ways — on that Wren-Lewis and yours truly are in total agreement. Experiments share the basic problem that they are built on rather artificial conditions and face a “trade-off” between internal and external validity: with more artificial conditions and internal validity comes less external validity. The more we rig experiments/models to avoid the “confounding factors”, the less the conditions resemble the real “target system”. The nodal issue is how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships.

Assume that you have examined how the work performance of Swedish workers A is affected by B (“treatment”). How can we extrapolate/generalize to new samples outside the original population (e.g. to the UK)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps not insurmountable. But arbitrarily just introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is — unfortunately — exactly this that we see when we look at neoclassical economists’ models/experiments.
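For what it is worth, the extrapolation problem is easy to illustrate numerically (a minimal sketch under assumed numbers; the populations, covariate and effect sizes are all hypothetical): let the “treatment” effect depend on a background characteristic whose distribution differs between the original population P and the target population P'. The experiment is internally valid, yet the estimate exported from P systematically misses in P', and nothing in the original sample warns us.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_experiment(x_mean, n=20_000):
    """Randomized experiment in a population where a background
    characteristic x (say, prior training) has mean x_mean.
    The true individual treatment effect is 2*x."""
    x = rng.normal(x_mean, 0.5, n)
    treated = rng.integers(0, 2, n)                        # random assignment
    y = 1.0 + 2.0 * x * treated + rng.normal(0.0, 1.0, n)  # outcome
    return y[treated == 1].mean() - y[treated == 0].mean()

effect_P  = run_experiment(x_mean=1.0)   # original population P (e.g. Sweden)
effect_Pp = run_experiment(x_mean=0.2)   # what a replication in P' (e.g. the UK) finds

print(f"effect estimated in P : {effect_P:.2f}")   # ~2.0 -- internally valid
print(f"effect found in P'    : {effect_Pp:.2f}")  # ~0.4 -- the export fails
```

The estimate from P is unbiased for P, so no amount of within-sample rigour detects the problem; only a substantive argument that P and P' are relevantly similar could license the extrapolation.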

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments within economics as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe we can achieve with our mediational epistemological tools and methods in social sciences.

Like traditional neoclassical thought-experimental modeling, real experimentation is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems.

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under ceteris paribus conditions. If they hold only ceteris paribus, they are a fortiori of limited value for our understanding, explanation or prediction of real economic systems.

Wren-Lewis obviously doesn’t want to get ‘hung up on the assumptions behind models.’ Maybe so, but it is still an undeniable fact that theoretical models built on piles of assumptions known to be false in no way even come close to being scientific explanations. On the contrary, they are untestable and a fortiori totally worthless from the point of view of scientific relevance.

Blue In Green

17 Jan, 2016 at 14:41 | Posted in Economics, Varia | Comments Off on Blue In Green

 

Lennart Schön (1946-2016) In Memoriam

17 Jan, 2016 at 10:02 | Posted in Varia | Comments Off on Lennart Schön (1946-2016) In Memoriam

Swedish economic historians were deeply saddened to learn of the death earlier this month of professor Lennart Schön.

Schön contributed to many areas of Swedish economic history. Most important was his development of a long-run cycles perspective on Swedish industrial society’s history (on which we worked together in a couple of research projects in the 1990s, carrying further the structural analytical tradition of Johan Åkerman, Erik Dahmén and Ingvar Svennilson).

The greatest Swedish economic historian since Eli Heckscher has passed away.

He is held in great esteem and we all truly miss this open-minded, good-hearted and passionate researcher.

Rest in peace my friend.

Rule of law

16 Jan, 2016 at 17:30 | Posted in Politics & Society | Comments Off on Rule of law


On 21 January it is fourteen years since Fadime Sahindal was murdered; her father refused to let her choose her own life. Back then it was a charged matter to speak of honour-related violence, of environments where the family’s reputation hinged on the virtue of its women. Earlier, cultural differences had even been regarded as mitigating circumstances.

Such cultural relativism is fortunately history. But tens of thousands of young Swedes still live in environments marked by honour culture, and testimony from the suburbs reveals that self-appointed guardian councils are circumscribing women’s freedom.

In Afghanistan many women are treated badly. In Saudi Arabia women are not allowed to drive. In Iran the mullahs curtail women’s sphere of life. It must be possible to say that there are cultural differences in views of women and sexuality. It must be possible to say that many boys and men who have grown up in patriarchal cultures carry this with them – to Sweden too. Merely hissing ‘racist’ makes it hopeless to get at the problems.

Women and men have equal worth. Everyone who lives in Sweden must respect this.

Heidi Avellan/Sydsvenskan

Sweden should be an open country. A part of the international community.

But it should also be a country that makes clear that the achievements in equality, openness and tolerance that we have fought for over centuries are not negotiable.

People who come to our country shall enjoy these rights and freedoms.

But with these rights and freedoms comes an obligation. Everyone — without exception — must also accept that in our country one law applies — equally for all.

Rule of law.

In Cologne, Hamburg and Stockholm, the legal order of city and state is being challenged by groups who, whatever their varying ethnicity, appear to embrace precisely the norms of tribal thinking. Knowing that they do not risk the tribal culture’s retribution, and lacking loyalty to the surrounding society, they consider themselves free to treat unprotected women as prey. The most troubling aspect of this development is perhaps the timidity of the liberal civic community. A far-reaching cultural relativism has brought about a kind of acquired stupidity, which makes people prefer to hush up culture-related problems and pretend they do not exist rather than address them. Alternatively, they blame themselves, in order to avoid the awkward conflict with the Other.

Per Bauhn

When will Krugman catch up with Keynes?

16 Jan, 2016 at 10:44 | Posted in Economics | 2 Comments

In his column this morning, Paul Krugman had this to say about issues that have mattered a lot to me over many years now. I admire Krugman, of course, but this is bullshit, pure and simple. Not the Harry Frankfurt kind, which requires willful ignorance of the facts, but the everyday kind, which requires mere ignorance of the historical record.

“Don’t say that redistribution is inherently wrong. Even if high incomes perfectly reflected productivity, market outcomes aren’t the same as moral justification. And given the reality that wealth often reflects either luck or power, there’s a strong case to be made for collecting some of that wealth in taxes and using it to make society as a whole stronger, as long as it doesn’t destroy the incentive to keep creating more wealth.”

The “incentive to keep creating more wealth”? …

Keynes, bless his heart, called it “a somewhat disgusting morbidity.”

He was right … Here’s how he put it [in ‘Economic Possibilities for Our Grandchildren’ (1930)]:

“When the accumulation of wealth is no longer of high social importance, there will be a great change in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession … will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialist in mental disease.” …

Paul Krugman is a hero to many of us because he fights the good fight against the idiocies of economic theory and practice in our time. But he is now behind the times because he hasn’t yet caught up with Keynes.

James Livingston

Wren-Lewis on macroeconomic eclecticism

13 Jan, 2016 at 20:39 | Posted in Economics | 8 Comments

Oxford macroeconomist Simon Wren-Lewis has a post up today on his blog discussing whether mainstream macroeconomics is eclectic or not. His answer is — both yes and no:

Does this mean academic macroeconomics is fragmented into lots of cliques, some big and some small? Not really … This is because these models (unlike those of 40+ years ago) use a common language …

‘Eclecticism. Every truth is so true that any truth must be false.’ — F. H. Bradley

It means that the range of assumptions that models (DSGE models if you like) can make is huge. There is nothing formally that says every model must contain perfectly competitive labour markets where the simple marginal product theory of distribution holds, or even where there is no involuntary unemployment, as some heterodox economists sometimes assert. Most of the time individuals in these models are optimising, but I know of papers in the top journals that incorporate some non-optimising agents into DSGE models. So there is no reason in principle why behavioural economics could not be incorporated …

It also means that the range of issues that models (DSGE models) can address is also huge …

Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.

Wren-Lewis tries to give a picture of modern macroeconomics as a pluralist enterprise. But the change and diversity that get Wren-Lewis’s approval only take place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. You’re free to take your analytical formalist models and apply them to whatever you want — as long as you do it with a modeling methodology that is acceptable to the mainstream. If you do not follow this particular mathematical-deductive analytical formalism, you’re not even considered to be doing economics. If you haven’t modeled your thoughts, you’re not in the economics business. But this isn’t pluralism. It’s a methodological reductionist straitjacket.

To most mainstream economists you only have knowledge of something when you can prove it, and so ‘proving’ theories with their models via deductions is considered the only certain way to acquire new knowledge. This is, however, a view for which there is no warranted epistemological foundation. Outside mathematics and logic, all human knowledge is conjectural and fallible.

Validly deducing things in closed analytical-formalist-mathematical models — built on atomistic-reductionist assumptions — doesn’t much help us understand or explain what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modeling exercises pursued by mainstream macroeconomists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Had mainstream economists like Wren-Lewis not been so in love with their smorgasbord of models, they would have perceived this too. Telling us that the plethora of models that make up modern macroeconomics  ‘are not right or wrong,’ but ‘just more or less applicable to different situations,’ is nothing short of hand waving.

So, yes, there is a proliferation of macromodels nowadays — but it almost exclusively takes place as a kind of axiomatic variation within the standard DSGE modeling framework. And no matter how many thousands of models mainstream economists come up with, as long as they are just axiomatic variations of the same old mathematical-deductive ilk, they will not take us a single inch closer to relevant and usable means of understanding and explaining real economies.

Wren-Lewis seems to have no problem with the lack of fundamental diversity — as opposed to mere path-dependent elaborations of the mainstream canon — or with the vanishingly small real-world relevance that characterizes modern mainstream macroeconomics.

Wren-Lewis obviously shares the view of his mainstream colleagues Paul Krugman and Greg Mankiw that there is nothing basically wrong with ‘standard theory.’ As long as policy makers and economists stick to ‘standard economic analysis’ — DSGE — everything is fine. Economics is just a common language and method that makes us think straight and reach correct answers.

And just like his colleagues, when it really counts, Wren-Lewis shows what he is — a mainstream neoclassical economist fanatically defending the insistence on using an axiomatic-deductive economic modeling strategy. To yours truly, this attitude is nothing but a late confirmation of Alfred North Whitehead’s complaint that ‘the self-confidence of learned people is the comic tragedy of civilization.’
 

Added January 15: Wren-Lewis has a post up today commenting on some of the critique put forward here. Writes Wren-Lewis:

My post ended with the following sentence:

‘Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.’

I argue in the post that “this non-eclecticism in terms of excluding non-microfounded work is deeply problematic.” I then link to my many earlier posts where I have expanded on this theme. So how I can be a fanatic defender of insisting that this modelling strategy be used escapes me. Unless I have misunderstood what an ‘axiomatic-deductive’ strategy is.

Yes indeed, that is a total misunderstanding!

I have two PhDs. I am a professor. I can read.

So, of course, the issue is not about microfoundations or not (on which I have written plenty elsewhere, e.g. here). What I criticize Wren-Lewis and other mainstream economists for is their insistence on using axiomatic-deductive modeling (with or without microfoundations). And I am — in case anyone thought otherwise — not alone in that critique:

The fundamental problem of modern economics is that methods are repeatedly applied in conditions for which they are not appropriate … Specifically, modern academic economics is dominated by a mainstream tradition whose defining characteristic is an insistence that certain methods of mathematical modelling be more or less always employed in the analysis of economic phenomena, and are so in conditions for which they are not suitable.

Fundamental to my argument is an assessment that the application of mathematics involves more than merely the introduction of a formal language. Of relevance here is recognition that mathematical methods and techniques are essentially tools. And as with any other tools (pencils, hammers, drills, scissors), so the sorts of mathematical methods which economists wield (functional relations, forms of calculus, etc.) are useful under some sets of conditions and not others …

Clearly if social phenomena are highly internally related they do not each exist in isolation. And if they are processual in nature, being continually transformed through practice, they are not atomistic. So the emphasis on the sorts of mathematical modelling methods that economists employ necessarily entails the construction of economic narratives – including the sorts of axioms and assumptions made and hypotheses entertained – that, at best, are always but highly distorted accounts of the complex phenomena of the real open social system … It is thus not at all surprising that mainstream contributions are found continually to be so unrealistic and explanatorily limited.

Employing the term deductivism to denote the thesis that closed systems are essential to social scientific explanation (whether the event regularities, correlations, uniformities, laws, etc., are either a priori constructions or a posteriori observations), I conclude that the fundamental source of the discipline’s numerous, widespread and long-lived problems and failings is precisely the emphasis placed upon forms of mathematical deductivist reasoning.

Tony Lawson

Heroes

13 Jan, 2016 at 09:27 | Posted in Economics, Varia | Comments Off on Heroes

 

Life on Mars

11 Jan, 2016 at 23:08 | Posted in Varia | Comments Off on Life on Mars

 

Forecasting econometrics

10 Jan, 2016 at 11:23 | Posted in Statistics & Econometrics | 1 Comment

There have been over four decades of econometric research on business cycles … The formalization has undeniably improved the scientific strength of business cycle measures …

But the significance of the formalization becomes more difficult to identify when it is assessed from the applied perspective, especially when the success rate in ex-ante forecasts of recessions is used as a key criterion. The fact that the onset of the 2008 financial-crisis-triggered recession was predicted by only a few ‘Wise Owls’ … while missed by regular forecasters armed with various models serves us as the latest warning that the efficiency of the formalization might be far from optimal. Remarkably, not only has the performance of time-series data-driven econometric models been off the track this time, so has that of the whole bunch of theory-rich macro dynamic models developed in the wake of the rational expectations movement, which derived its fame mainly from exploiting the forecast failures of the macro-econometric models of the mid-1970s recession.

The limits of econometric forecasting have, as noted by Qin, been critically pointed out many times before.

Trygve Haavelmo — with the completion (in 1958) of the twenty-fifth volume of Econometrica — assessed the role of econometrics in the advancement of economics, and although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair:

We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact “correct” or “true,” we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be … But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

And as the quote below shows, even Ragnar Frisch shared some of Haavelmo’s — and Keynes’s — doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if they become applied in macroeconomic decision models where the line of thought is the following: “If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created”.

Ragnar Frisch

Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge. And, more specifically,  when it comes to forecasting activities, the results have been bleak indeed.

Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables — of vital importance and, though perhaps unobservable and non-additive, not necessarily epistemologically inaccessible — that were not considered for the model. Those variables that were included can hence never be guaranteed to be more than potential causes, not real causes.

A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption, one would, however, have to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
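A stylized illustration of why that hope so regularly disappoints (again my own sketch, with invented numbers): fit a fixed-parameter model on data from one regime and export it to a period in which the underlying parameter has quietly shifted. The in-sample fit is excellent; the out-of-sample forecasts are not.

```python
import numpy as np

rng = np.random.default_rng(7)

# Data-generating process with an unannounced structural break:
# y depends on x with slope 1.5 before t = 100 and slope -0.5 after.
T_BREAK, T = 100, 150
x = rng.normal(0.0, 1.0, T)
slope = np.where(np.arange(T) < T_BREAK, 1.5, -0.5)
y = 2.0 + slope * x + rng.normal(0.0, 0.5, T)

# The econometrician fits a fixed-parameter model on the first regime ...
b, a = np.polyfit(x[:T_BREAK], y[:T_BREAK], 1)

# ... and exports it to the later period, where the parameter has shifted.
rmse_in  = np.sqrt(np.mean((a + b * x[:T_BREAK] - y[:T_BREAK]) ** 2))
rmse_out = np.sqrt(np.mean((a + b * x[T_BREAK:] - y[T_BREAK:]) ** 2))

print(f"in-sample RMSE (stable regime) : {rmse_in:.2f}")   # ~0.5
print(f"forecast RMSE (after the break): {rmse_out:.2f}")  # ~2.0, four times worse
```

No diagnostic run on the pre-break sample alone can reveal that the parameter is about to change; the stability of ‘deep’ parameters is an ontological assumption about the target system, not something the estimation itself can secure.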

When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometric forecasting rather useless.

Je suis Charlie encore aujourd’hui

9 Jan, 2016 at 11:18 | Posted in Politics & Society | Comments Off on Je suis Charlie encore aujourd’hui

For people from different countries, cultures and religions to be able to live together, there must be clear rules of the game:

This applies in Sweden. To everyone.

Respect for the law, for democracy and human rights, equality and gender equality, the equal worth of all, and the individual’s right to choose his or her own life. Once that is in place, everyone is free to worship their god, wear the veil and celebrate Christmas, Pesach or Eid al-Fitr. Or to refrain.

 
This is especially important now, when the stream of refugees has brought so many new people here … Already there are worrying reports from areas with large immigrant populations: bearded men circumscribing women’s freedom, holding opinions on dress and conduct, here just as in the old home country.

‘Unacceptable’ barely begins to describe it … No one may compromise women’s human rights, whatever the religion, culture or family circumstances. No one may deprive the young of the right to sex education. No one may look away when a girl does not come back after the holidays in the old home country – or returns promised away in marriage. No one may close their eyes to the fact that not all young people in Sweden are allowed to choose their own lives. No one may bow to patriarchal cultures’ ancient demands to subjugate women, demand obedience and chastity, and define the family’s honour by the virtue of its women. Nor may Swedish authorities who want to avoid trouble and call it ‘respect’.

This requires clear rules of the game and great courage. But it is absolutely necessary for our way of life, together in freedom. And a good way to celebrate 250 years of free speech.

Heidi Avellan/SDS

Culture, identity, ethnicity, gender and religiosity must never be accepted as grounds for intolerance in political and civic matters. In a modern democratic society, people belonging to these various groups must be able to count on society also protecting them against the abuses of intolerance. All citizens must have the freedom and the right to question and to leave their own group. Towards those who do not accept that tolerance, we must be intolerant.

In Sweden we have long uncritically embraced an unspecified and undefined multiculturalism. If by multiculturalism we mean that there are several different cultures in our society, this poses no problem. Then we are all multiculturalists.

But if by multiculturalism we mean that cultural belonging and identity also carry specific moral, ethical and political rights and obligations, we are talking about something entirely different. Then we are talking about normative multiculturalism. And accepting normative multiculturalism also means tolerating unacceptable intolerance, since normative multiculturalism implies that the rights of specific cultural groups may be given higher standing than the universal human rights of the citizen – and thereby indirectly become a defence of these groups’ (possible) intolerance. In a normatively multiculturalist society, institutions and regulations can be used to restrict people’s freedom on the basis of unacceptable and intolerant cultural values.

Normative multiculturalism, just like xenophobia and racism, means that individuals are unacceptably reduced to passive members of culture- or identity-bearing groups. But tolerance does not mean that we must take a value-relativist stance on identity and culture. Those in our society who show by their actions that they do not respect other people’s rights cannot count on us being tolerant towards them. Those who want to force other people by violence to submit to a particular group’s religion, ideology or ‘culture’ are themselves responsible for the intolerance with which they must be met.

If we are to safeguard the achievements of modern democratic society, society must be intolerant of intolerant normative multiculturalism. And then society itself cannot embrace a normative multiculturalism. In a modern democratic society the rule of law must apply – and apply to everyone!

Towards those in our society who want to force others to live according to their own religious, cultural or ideological beliefs and taboos, society should be intolerant. Towards those who want to force society to adapt laws and rules to their own religion’s, culture’s or group’s interpretations, society should be intolerant. Towards those who are intolerant in deed, we should not be tolerant.

Models and forecasts

8 Jan, 2016 at 17:01 | Posted in Economics | 5 Comments

Yesterday John Kay had an interesting article about models and forecasting in the Financial Times:

A bane of this economist’s life is the belief that economics is clairvoyance. I should, according to this view, be offering prognostications of what gross domestic product growth will be this year and when the central bank will raise interest rates …

What was the right answer on January 1 1989 to the question “will the Berlin Wall be pulled down in 1989?” A shrewd commentator would have said (though few did) something like “almost certainly the Wall will stand but you should understand the potentially destructive forces undermining the Soviet engine and the East German state”. That type of response combines probabilistic and narrative thinking.

But people long for certainties, though they know they cannot have them. I have learnt that few really want answers when they ask me to predict GDP growth or advise whether interest rates will rise in the third quarter. It is usually easy to move the subject on to something more interesting than macroeconomic forecasting.

Kay’s remarks — and Tony Yates’s comments on them — made me think about an article that Oxford macroeconomist Simon Wren-Lewis wrote on models and forecasts a couple of years ago, saying that “macroeconomic forecasts are always bad,” but, on the other hand, since they are “probably no worse than intelligent guesses” and anyway are “not obviously harmful,” we have no reason to complain.

The thing is that Wren-Lewis is wrong. These forecasting models, and the organizations and persons around them, cost society billions of pounds, euros and dollars every year. And if they do not produce anything better than “intelligent guesswork,” I’m afraid most taxpayers would say that they are certainly not harmless at all!

Mainstream neoclassical economists often maintain – usually referring to the methodological instrumentalism of Milton Friedman – that it doesn’t matter if the assumptions of the models they use are realistic or not. What matters is whether the predictions are right or not. But if that is so, then the only conclusion we can draw is – throw the garbage out! Because, oh dear, oh dear, how wrong they have been!

When Simon Potter a couple of years ago analyzed the predictions that the Federal Reserve Bank of New York made on the development of real GDP and unemployment for the years 2007-2010, it turned out that the predictions were off by 5.9% and 4.4% respectively – which is equivalent to 6 million unemployed. In other words — the “rigorous” and “precise” macroeconomic mathematical-statistical forecasting models were wrong. And the rest of us have to pay.

Potter is not the only one who has lately criticized the forecasting business. John Mingers comes to essentially the same conclusion when scrutinizing it from a somewhat more theoretical angle.

The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes underlying the time series that economists use to make their predictions and forecasts do not conform to the assumptions made in the applied statistical and econometric models. Much less is a fortiori predictable than is standardly — and uncritically — assumed. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies actually makes the models, strictly speaking, inapplicable. The future is inherently unknowable — and using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better, then, to accept that as a rule “we simply do not know.”

So, to say that this counterproductive forecasting activity is harmless simply isn’t true. Spending billions upon billions of hard-earned money on an activity that is no better than “intelligent guesswork” is doing harm to our economies.

In New York State, Section 899 of the Code of Criminal Procedure provides that persons “Pretending to Forecast the Future” shall be considered disorderly under subdivision 3, Section 901 of the Code and liable to a fine of $250 and/or six months in prison. Although the law does not apply to “ecclesiastical bodies acting in good faith and without fees,” I’m not sure where that leaves macroeconomic model-builders and other forecasters …

In an interesting discussion on the hopelessness of accurately modeling what will happen in the real world, Nobel laureate Kenneth Arrow – in Eminent Economists: Their Life Philosophies (CUP 1992) – pretty well sums up what the forecasting business is all about:

It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable. An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’
