Behavioural finance
28 Feb, 2017 at 17:00 | Posted in Economics | 4 Comments
To-day, in many parts of the world, it is the serious embarrassment of the banks which is the cause of our gravest concern …
[The banks] stand between the real borrower and the real lender. They have given their guarantee to the real lender; and this guarantee is only good if the money value of the asset belonging to the real borrower is worth the money which has been advanced on it.
It is for this reason that a decline in money values so severe as that which we are now experiencing threatens the solidity of the whole financial structure. Banks and bankers are by nature blind. They have not seen what was coming. Some of them … employ so-called “economists” who tell us even to-day that our troubles are due to the fact that the prices of some commodities and some services have not yet fallen enough, regardless of what should be the obvious fact that their cure, if it could be realised, would be a menace to the solvency of their institution. A “sound” banker, alas! is not one who foresees danger and avoids it, but one who, when he is ruined, is ruined in a conventional and orthodox way along with his fellows, so that no one can really blame him.
But to-day they are beginning at last to take notice. In many countries bankers are becoming unpleasantly aware of the fact that, when their customers’ margins have run off, they are themselves “on margin” …
The present signs suggest that the bankers of the world are bent on suicide. At every stage they have been unwilling to adopt a sufficiently drastic remedy. And by now matters have been allowed to go so far that it has become extraordinarily difficult to find any way out.
It is necessarily part of the business of a banker to maintain appearances and to profess a conventional respectability which is more than human. Lifelong practices of this kind make them the most romantic and the least realistic of men. It is so much their stock-in-trade that their position should not be questioned, that they do not even question it themselves until it is too late. Like the honest citizens they are, they feel a proper indignation at the perils of the wicked world in which they live,—when the perils mature; but they do not foresee them. A Bankers’ Conspiracy! The idea is absurd! I only wish there were one! So, if they are saved, it will be, I expect, in their own despite.
John Maynard Keynes, Essays in Persuasion, 1931
Simpson’s paradox, Trump voters and the limits of econometrics
27 Feb, 2017 at 14:57 | Posted in Statistics & Econometrics | 1 Comment
From a more theoretical perspective, Simpson’s paradox importantly shows that causality can never be reduced to a question of statistics or probabilities, unless you are — miraculously — able to keep constant all other factors that influence the probability of the outcome studied.
To understand causality we always have to relate it to a specific causal structure. Statistical correlations are never enough. No structure, no causality.
Simpson’s paradox is an interesting paradox in itself, but it can also highlight a deficiency in the traditional econometric approach towards causality. Say you have 1000 observations on men and an equal number of observations on women applying for admission to university studies, and that 70% of the men are admitted, but only 30% of the women. Running a logistic regression to find out the odds ratios (and probabilities) for men and women on admission, females seem to be in a less favourable position (‘discriminated’ against) compared to males (male odds are 2.33, female odds are 0.43, giving an odds ratio of 5.44). But once we find out that males and females apply to different departments we may well get a Simpson’s paradox result where males turn out to be ‘discriminated’ against. Say 800 males apply for economics studies (680 admitted) and 200 for physics studies (20 admitted), while 100 females apply for economics studies (90 admitted) and 900 for physics studies (210 admitted) — giving within-department odds ratios of roughly 0.63 and 0.37.
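Just to make the arithmetic transparent, here is a minimal sketch in Python, using only the hypothetical admission numbers above, of how the pooled and within-department odds ratios are computed:

```python
# Hypothetical admission data from the example above (not real figures):
# (admitted, rejected) by sex and department
men   = {"economics": (680, 120), "physics": (20, 180)}
women = {"economics": (90, 10),   "physics": (210, 690)}

def odds(admitted, rejected):
    """Odds of admission = admitted / rejected."""
    return admitted / rejected

def odds_ratio(group_a, group_b):
    """Odds ratio of group_a relative to group_b."""
    return odds(*group_a) / odds(*group_b)

# Pooled over departments: men 700/300, women 300/700
men_total   = tuple(sum(v) for v in zip(*men.values()))    # (700, 300)
women_total = tuple(sum(v) for v in zip(*women.values()))  # (300, 700)
print("pooled OR (men/women):", round(odds_ratio(men_total, women_total), 2))  # 5.44

# Department by department the ratio reverses (Simpson's paradox)
for dept in men:
    print(dept, "OR (men/women):", round(odds_ratio(men[dept], women[dept]), 2))
# economics ~0.63, physics ~0.37 -- within each department men fare worse
```

Pooled over departments the odds ratio ‘favours’ men; within each department it ‘favours’ women. The sign of the apparent ‘effect’ flips once the confounding variable (department) is conditioned on.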
Econometric patterns should never be seen as anything other than possible clues to follow. From a critical realist perspective it is obvious that behind observable data there are real structures and mechanisms operating, things that are — if we really want to understand, explain and (possibly) predict things in the real world — more important to get hold of than simply correlating and regressing observable variables.
Math cannot establish the truth value of a fact. Never has. Never will.
Simon Wren-Lewis — flimflam defender of economic orthodoxy
25 Feb, 2017 at 21:08 | Posted in Economics | 3 Comments
Again and again, Oxford professor Simon Wren-Lewis rides out to defend orthodox macroeconomic theory against attacks from ‘heterodox’ critics like yours truly.
A couple of years ago, it was the rational expectations hypothesis (REH) he wanted to save:
It is not a debate about rational expectations in the abstract, but about a choice between different ways of modelling expectations, none of which will be ideal. This choice has to involve feasible alternatives, by which I mean theories of expectations that can be practically implemented in usable macroeconomic models …
However for the foreseeable future, rational expectations will remain the starting point for macro analysis, because it is better than the only practical alternative.
And now it is the concept of non-accelerating inflationary rate of unemployment (NAIRU) he tries to save. A couple of days ago he wrote:
If we really think there is no relationship between unemployment and inflation, why on earth are we not trying to get unemployment below 4%? We know that the government could, by spending more, raise demand and reduce unemployment. And why would we ever raise interest rates above their lower bound? …
There is a relationship between inflation and unemployment, but it is just very difficult to pin down. For most macroeconomists, the concept of the NAIRU really just stands for that basic macroeconomic truth.
And yesterday he was back with another post:
The second [reflection] relates to the sharp reactions to my original post I noted at the start, and the hostility displayed by some heterodox economists (I stress some) to the concept. I have been trying to decide what annoys me about this so much. I think it is this. The concept of the NAIRU, or equivalently the Phillips curve, is very basic to macroeconomics. It is hard to teach about inflation, unemployment and demand management without it. Those trying to set interest rates in independent central banks are, for the most part, doing what they can to find the optimal balance between inflation and unemployment.
Well, Wren-Lewis is — sad to say, but still — totally wrong on both issues.
REH
Wren-Lewis doesn’t appreciate heterodox critiques of the rational expectations hypothesis. And he seems to be especially annoyed with yours truly, who “does write very eloquently,” but only “appeal[s] to the occasional young economist, who is inclined to believe that only the radical overthrow of orthodoxy will suffice.”
If at some time my skeleton should come to be used by a teacher of osteology to illustrate his lectures, will his students seek to infer my capacities for thinking, feeling, and deciding from a study of my bones? If they do, and any report of their proceedings should reach the Elysian Fields, I shall be much distressed, for they will be using a model which entirely ignores the greater number of relevant variables, and all of the important ones. Yet this is what ‘rational expectations’ does to economics.
G. L. S. Shackle
Since I have already put forward a rather detailed theoretical-methodological critique of the rational expectations hypothesis elsewhere — Rational expectations – a fallacious foundation for macroeconomics in a non-ergodic world — I’m not going to recapitulate the arguments here, but rather limit myself to elaborate on a couple of the rather unwarranted allegations Wren-Lewis has put forward in his repeated attempts at rescuing the rational expectations hypothesis from the critique.
In macroeconomic rational expectations models the world evolves in accordance with fully predetermined models where uncertainty has been reduced to stochastic risk describable by some probabilistic distribution.
The tiny little problem that there is no hard empirical evidence that verifies these models doesn’t usually bother their protagonists too much. Rational expectations überpriest Thomas Sargent — often favourably mentioned by Wren-Lewis — has the following to say on the epistemological status of the rational expectations hypothesis (emphasis added):
Partly because it focuses on outcomes and does not pretend to have behavioral content, the hypothesis of rational expectations has proved to be a powerful tool for making precise statements about complicated dynamic economic systems.
Precise, yes, but relevant and realistic? I’ll be dipped!
In his attempted rescue operations Wren-Lewis tries to give the picture that only heterodox economists like yours truly are critical of the rational expectations hypothesis. But, on this, he is, simply, eh, wrong. Let’s listen to Nobel laureate Edmund Phelps — hardly a heterodox economist — and what he has to say (emphasis added):
Q: So how did adaptive expectations morph into rational expectations?
A: The “scientists” from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work … The [rational expectations] approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.
Q: And what’s the consequence of this putsch?
A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong …
Q: So rather than live with variability, write a formula in stone!
A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have …
Q: The economics profession, including Federal Reserve policy makers, appears to have been hijacked by Robert Lucas.
A: You’re right that people are grossly uninformed, which is a far cry from what the rational expectations models suppose. Why are they misinformed? I think they don’t pay much attention to the vast information out there because they wouldn’t know what to do with it if they had it. The fundamental fallacy on which rational expectations models are based is that everyone knows how to process the information they receive according to the one and only right theory of the world. The problem is that we don’t have a “right” model that could be certified as such by the National Academy of Sciences. And as long as we operate in a modern economy, there can never be such a model.
Just as when it comes to NAIRU, Wren-Lewis doesn’t want to engage in a theoretical discussion about rational expectations as a modelling tool. So let’s see how it fares as an empirical assumption. Empirical efforts at testing the correctness of the hypothesis have resulted in a series of studies that have more or less concluded that it is not consistent with the facts. In one of the more well-known and highly respected evaluation reviews, Michael Lovell (1986) concluded:
it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.
The rational expectations hypothesis presumes consistent behaviour, where expectations do not display any persistent errors. In the world of rational expectations we are always, on average, hitting the bull’s eye. In the more realistic, open systems view, there is always the possibility (danger) of making mistakes that may turn out to be systematic. It is because of this, presumably, that we put so much emphasis on learning in our modern knowledge society.
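To make the contrast between the two views concrete, here is a small simulation sketch (my own stylized illustration, not part of the original argument): an agent forms simple adaptive expectations of inflation, and the inflation process undergoes an unannounced mean shift. Forecast errors that averaged out to zero before the shift become persistently one-sided after it:

```python
import random

random.seed(1)

def simulate(periods=200, break_at=100, lam=0.2):
    """Hypothetical inflation series: mean 2% before the break, 5% after.
    The agent forecasts next-period inflation with adaptive expectations."""
    expectation = 0.02
    errors = []
    for t in range(periods):
        mean = 0.02 if t < break_at else 0.05   # unannounced regime shift
        inflation = random.gauss(mean, 0.005)
        errors.append(inflation - expectation)  # forecast error
        expectation += lam * errors[-1]         # adaptive updating
    return errors

errors = simulate()
print("mean error before the break     : %+.4f" % (sum(errors[:100]) / 100))
print("mean error, 20 periods after it : %+.4f" % (sum(errors[100:120]) / 20))
# Before the break errors average out near zero; right after the shift they
# are persistently positive -- mistakes are systematic until beliefs adapt.
```

Under the rational expectations hypothesis such systematic, one-sided errors are ruled out by assumption; in an open, non-stationary world they are exactly what learning has to correct.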
NAIRU
Wren-Lewis is not the only economist who subscribes to the NAIRU story and its policy implication that attempts to promote full employment are doomed to fail, since governments and central banks can’t push unemployment below the critical NAIRU threshold without causing harmful runaway inflation.
But one of the main problems with NAIRU is that it essentially is a timeless long-run equilibrium attractor to which actual unemployment (allegedly) has to adjust. But if that equilibrium is itself changing — and in ways that depend on the process of getting to the equilibrium — well, then we can’t really be sure what that equilibrium will be without contextualizing unemployment in real historical time. And when we do, we will see how seriously wrong we go if we omit demand from the analysis. Demand policy has long-run effects and matters also for structural unemployment — and governments and central banks can’t just look the other way and legitimize their passivity re unemployment by referring to NAIRU.
Wren-Lewis tries to escape this important problem by trivialising the NAIRU concept into the platitude “there is a relationship between inflation and unemployment.” But that is just looking the other way, instead of addressing the theoretically central question: if (the mythical) NAIRU is continually moving, how can it be consistently conceptualised as an attractor?
The existence of a long-run equilibrium is a very handy modeling assumption to use. But that does not make it easily applicable to real-world economies. Why? Because it is basically a timeless concept utterly incompatible with real historical events. In the real world it is the second law of thermodynamics and historical — not logical — time that rules.
This importantly means that long-run equilibrium is an awfully bad guide for macroeconomic policies. In a world full of genuine uncertainty, multiple equilibria, asymmetric information and market failures, the long run equilibrium — including NAIRU — is simply a non-existent unicorn.
In celestial mechanics we have a gravitational constant. In economics there is none. NAIRU does not hold water simply because it does not exist — and to base economic policies on such a weak theoretical and empirical construct is nothing short of writing out a prescription for self-inflicted economic havoc.
NAIRU is — whatever Wren-Lewis tries to make us think — a useless concept, and the sooner we bury it, the better.
The conventional wisdom, codified in the theory of the non-accelerating-inflation rate of unemployment (NAIRU) … holds that in the longer run, an economy’s potential growth depends on – what Milton Friedman called – the “natural rate of unemployment”: the structural unemployment rate at which inflation is constant …
We argue in our book Macroeconomics Beyond the NAIRU that the NAIRU doctrine is wrong because it is a partial, not a general, theory. Specifically, wages are treated as mere costs to producers. In NAIRU, higher real-wage claims necessarily reduce firms’ profitability and hence, if firms want to protect profits (needed for investment and growth), higher wages must lead to higher prices and ultimately run-away inflation. The only way to stop this process is to have an increase in “natural unemployment”, which curbs workers’ wage claims.
What is missing from this NAIRU thinking is that wages provide macroeconomic benefits in terms of higher labor productivity growth and more rapid technological progress …
NAIRU wisdom holds that a rise in the (real) interest rate will only affect inflation, not structural unemployment. We argue instead that higher interest rates slow down technological progress – directly by depressing demand growth and indirectly by creating additional unemployment and depressing wage growth.
As a result, productivity growth will fall, and the NAIRU must increase. In other words, macroeconomic policy has permanent effects on structural unemployment and growth – the NAIRU as a constant “natural” rate of unemployment does not exist.
This means we cannot absolve central bankers from charges that their anti-inflation policies contribute to higher unemployment. They have already done so. Our estimates suggest that overly restrictive macro policies in the OECD countries have actually and unnecessarily thrown millions of workers into unemployment by a policy-induced decline in productivity and output growth. This self-inflicted damage must rest on the conscience of the economics profession.
Keynes’ devastating critique of econometrics
24 Feb, 2017 at 10:14 | Posted in Statistics & Econometrics | 3 Comments
Mainstream economists often hold the view that Keynes’ criticism of econometrics was the result of a sadly misinformed and misguided person who disliked and did not understand much of it.
This is, however, nothing but a gross misapprehension.
To be careful and cautious is not the same as to dislike. Keynes did not misunderstand the crucial issues at stake in the development of econometrics. Quite the contrary. He knew them all too well — and was not satisfied with the validity and philosophical underpinning of the assumptions made for applying its methods.
Keynes’ critique is still valid and unanswered in the sense that the problems he pointed at are still with us today and ‘unsolved.’ Ignoring them — the most common practice among applied econometricians — is not to solve them.
To apply statistical and mathematical methods to the real-world economy, the econometrician has to make some quite strong assumptions. In a review of Tinbergen’s econometric work — published in The Economic Journal in 1939 — Keynes gave a comprehensive critique of Tinbergen’s work, focusing on the limiting and unreal character of the assumptions that econometric analyses build on:
Completeness: Where Tinbergen attempts to specify and quantify which different factors influence the business cycle, Keynes maintains there has to be a complete list of all the relevant factors to avoid misspecification and spurious causal claims. Usually this problem is ‘solved’ by econometricians assuming that they somehow have a ‘correct’ model specification. Keynes is, to put it mildly, unconvinced:
It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.
Homogeneity: To make inductive inferences possible — and being able to apply econometrics — the system we try to analyse has to have a large degree of ‘homogeneity.’ According to Keynes most social and economic systems — especially from the perspective of real historical time — lack that ‘homogeneity.’ As he had argued already in Treatise on Probability (ch. 22), it wasn’t always possible to take repeated samples from a fixed population when we were analysing real-world economies. In many cases there simply are no reasons at all to assume the samples to be homogenous. Lack of ‘homogeneity’ makes the principle of ‘limited independent variety’ non-applicable, and hence makes inductive inferences, strictly seen, impossible since one of its fundamental logical premisses is not satisfied. Without “much repetition and uniformity in our experience” there is no justification for placing “great confidence” in our inductions (TP ch. 8).
And then, of course, there is also the ‘reverse’ variability problem of non-excitation: factors that do not change significantly during the period analysed can still very well be extremely important causal factors.
Stability: Tinbergen assumes there is a stable spatio-temporal relationship between the variables his econometric models analyze. But as Keynes had argued already in his Treatise on Probability it was not really possible to make inductive generalisations based on correlations in one sample. As later studies of ‘regime shifts’ and ‘structural breaks’ have shown us, it is exceedingly difficult to find and establish the existence of stable econometric parameters for anything but rather short time series.
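A stylized illustration of that difficulty (my own sketch, not Keynes’s or Tinbergen’s): if the ‘true’ coefficient shifts halfway through the sample, a regression over the full sample delivers a seemingly precise parameter that is valid in neither regime:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data-generating process with a structural break:
# y depends on x with slope 1.0 in the first half of the sample, 3.0 in the second.
n = 200
x = rng.normal(size=n)
slope = np.where(np.arange(n) < n // 2, 1.0, 3.0)
y = slope * x + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """OLS slope of y on x (intercept included)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("full sample :", round(ols_slope(x, y), 2))                 # ~2 -- true of neither regime
print("first half  :", round(ols_slope(x[:n//2], y[:n//2]), 2))   # ~1
print("second half :", round(ols_slope(x[n//2:], y[n//2:]), 2))   # ~3
```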
Measurability: Tinbergen’s model assumes that all relevant factors are measurable. Keynes questions if it is possible to adequately quantify and measure things like expectations and political and psychological factors. And more than anything, he questioned — both on epistemological and ontological grounds — that it was always and everywhere possible to measure real-world uncertainty with the help of probabilistic risk measures. Thinking otherwise can, as Keynes wrote, “only lead to error and delusion.”
Independence: Tinbergen assumes that the variables he treats are independent (still a standard assumption in econometrics). Keynes argues that in such a complex, organic and evolutionary system as an economy, independence is a deeply unrealistic assumption to make. Building econometric models on that kind of simplistic and unrealistic assumption risks producing nothing but spurious correlations and causalities. Real-world economies are organic systems for which the statistical methods used in econometrics are ill-suited, or even, strictly seen, inapplicable. Mechanical probabilistic models have little leverage when applied to non-atomic evolving organic systems — such as economies.
It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis … that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish.
Building econometric models can’t be a goal in itself. Good econometric models are means that make it possible for us to infer things about the real-world systems they ‘represent.’ If we can’t show that the mechanisms or causes that we isolate and handle in our econometric models are ‘exportable’ to the real-world, they are of limited value to our understanding, explanations or predictions of real-world economic systems.
The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law.
The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state …
The scientist wishes, in fact, to assume that the occurrence of a phenomenon which has appeared as part of a more complex phenomenon, may be some reason for expecting it to be associated on another occasion with part of the same complex. Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts.
Linearity: To make his models tractable, Tinbergen assumes the relationships between the variables he studies to be linear. This is still standard procedure today, but as Keynes writes:
It is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous.
To Keynes it was a ‘fallacy of reification’ to assume that all quantities are additive (an assumption closely linked to independence and linearity).
The unpopularity of the principle of organic unities shows very clearly how great is the danger of the assumption of unproved additive formulas. The fallacy, of which ignorance of organic unity is a particular instance, may perhaps be mathematically represented thus: suppose f(x) is the goodness of x and f(y) is the goodness of y. It is then assumed that the goodness of x and y together is f(x) + f(y) when it is clearly f(x + y) and only in special cases will it be true that f(x + y) = f(x) + f(y). It is plain that it is never legitimate to assume this property in the case of any given function without proof.
J. M. Keynes “Ethics in Relation to Conduct” (1903)
And as even one of the founding fathers of modern econometrics — Trygve Haavelmo — wrote:
What is the use of testing, say, the significance of regression coefficients, when maybe, the whole assumption of the linear regression equation is wrong?
Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms and variables — and the relationships between them — to be linear, additive, homogeneous, stable, invariant and atomistic. But when causal mechanisms operate in the real world, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. Since statisticians and econometricians — as far as I can see — haven’t been able to convincingly warrant their assumptions of homogeneity, stability, invariance, independence and additivity as being ontologically isomorphic to real-world economic systems, Keynes’ critique is still valid. As long as — as Keynes writes in a letter to Frisch in 1935 — “nothing emerges at the end which has not been introduced expressly or tacitly at the beginning,” I remain doubtful of the scientific aspirations of econometrics.
In his critique of Tinbergen, Keynes points us to the fundamental logical, epistemological and ontological problems of applying statistical methods to a basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing social reality. Methods designed to analyse repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic and non-atomistic world where time and history play decisive roles.
Econometric modeling should never be a substitute for thinking. From that perspective it is really depressing to see how much of Keynes’ critique of the pioneering econometrics in the 1930s-1940s is still relevant today.
The general line you take is interesting and useful. It is, of course, not exactly comparable with mine. I was raising the logical difficulties. You say in effect that, if one was to take these seriously, one would give up the ghost in the first lap, but that the method, used judiciously as an aid to more theoretical enquiries and as a means of suggesting possibilities and probabilities rather than anything else, taken with enough grains of salt and applied with superlative common sense, won’t do much harm. I should quite agree with that. That is how the method ought to be used.
Keynes, letter to E.J. Broster, December 19, 1939
RBC models — the art of missing the point completely
23 Feb, 2017 at 09:24 | Posted in Economics | 1 Comment
The real business cycle program is part of the larger new classical macroeconomic research program. Proponents of these models often promote them as models that provide satisfactory microfoundations for macroeconomics … The claim for providing microfoundations is largely based on the fact that new classical models in general, and real business cycle models in particular, model the representative agent as solving a single dynamic optimization problem on behalf of all the consumers, workers, and firms in the economy. However, the claim that representative agent models are innately superior to other sorts of models is unfounded. There is no a priori reason to accord real business cycle models a presumption of accuracy because they look like they are based on microeconomics. Rather, there are several reasons to be theoretically skeptical of such models.
Most familiar to economists is the problem of the fallacy of composition … It is difficult to deny that what is true for an individual may not be true for a group, yet, representative agent models explicitly embody the fallacy of composition … By completely eliminating even the possibility of problems relating to coordination, representative agent models are inherently incapable of modeling such complexities.
The real business cycle model thus employs the formal mathematics of microeconomics, but applies it in a theoretically inappropriate circumstance: it provides the simulacrum of microfoundations, not the genuine article. It is analogous to modeling the behavior of a gas by a careful analysis of a single molecule in vacuo, or, of a crowd of people by an analysis of the actions of a single android. For some issues, such models may work well; for many others, they will miss the point completely.
RBC is one of the theories that has put macroeconomics on a path of intellectual regress for three decades now. And although there are many kinds of useless ‘post-real’ economics held in high regard within the mainstream economics establishment today, few — if any — are less deserving of that regard than real business cycle theory.
The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. So instead of — as RBC economists do — assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without a specific applicability is not really deserving of our interest.
Without strong evidence all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than rather watered-down versions of ‘anything goes’ when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. None is given by RBC economists, which makes it rather puzzling how rational expectations has become the standard modeling assumption made in much of modern macroeconomics. Perhaps the reason is that economists often mistake mathematical beauty for truth.
In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis to an irrefutable proposition. Believing in a set of irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but it is not science.
So where does this all lead us? What is the trouble ahead for economics? Putting a sticky-price DSGE lipstick on the RBC pig sure won’t do. Neither will — as Paul Romer noticed — just looking the other way and pretending it’s raining:
The trouble is not so much that macroeconomists say things that are inconsistent with the facts. The real trouble is that other economists do not care that the macroeconomists do not care about the facts. An indifferent tolerance of obvious error is even more corrosive to science than committed advocacy of error.
Robert Lucas and the triumph of empty formalism
22 Feb, 2017 at 15:34 | Posted in Economics | Comments Off on Robert Lucas and the triumph of empty formalism
Perhaps this basic perspective of a radical separation of form and content is helpful in shedding some light on a few of Lucas’s statements that at first appear thoroughly paradoxical. Recalling Lucas’s insistence that macroeconomics must be built on the classical postulates that Lucas and Sargent (1978) outlined as (a) “market clearing” and (b) “self-interest,” one is nonetheless astonished by passages such as the following:
“In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow […] and G. Debreu […], implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture […] is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content.” (Lucas and Sargent 1978: 58-9)
At first one is astonished, because the argument appears not merely counter-intuitive but downright absurd: how does it fit together that Lucas and Sargent on the one hand demand that any macro theory must start from the principles of “market clearing or equilibrium” and “self-interest or optimization,” when immediately afterwards they state that such propositions are empty of content and without meaning? The defence of the equilibrium assumption of “cleared markets” offered here thus has, for Lucas, nothing at all to do with the everyday meaning according to which the market for apples is cleared when we can observe that all the apples offered are actually bought. Market clearing as a concept is therefore not an event that corresponds to reality, but a model-building device with which model structures can be generated that are able to mimic time series. The terms “market clearing” and “self-interest” have, in precisely this sense, no empirical or real meaning, as Lucas and Sargent make quite clear; they serve only to generate the formal structure of the model, and they are detached from their (intuitive) meaning or interpretation.
In case your German isn’t too rusty, Cruccolini’s dissertation on the development of modern macroeconomics is highly recommended reading.
Some of us have for years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not heeded the appeal — and so modern economics has become increasingly irrelevant to the understanding of the real world.
Within mainstream economics internal validity is still everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond imagination. As long as mainstream economists do not come up with any export-licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!
Studying mathematics and logics is interesting and fun. It sharpens the mind. In pure mathematics and logics we do not have to worry about external validity. But — economics is not pure mathematics or logics. It’s about society. The real world. Forgetting that, economics is really in dire straits.
Mathematical axiomatic systems lead to analytic truths, which do not require empirical verification, since they are true by virtue of definitions and logic. It is a startling discovery of the twentieth century that sufficiently complex axiomatic systems are undecidable and incomplete. That is, the system of theorem and proof can never lead to ALL the true sentences about the system, and ALWAYS contain statements which are undecidable – their truth values cannot be determined by proof techniques. More relevant to our current purpose is that applying an axiomatic hypothetico-deductive system to the real world can only be done by means of a mapping, which creates a model for the axiomatic system. These mappings then lead to assertions about the real world which require empirical verification. These assertions (which are proposed scientific laws) can NEVER be proven in the sense that mathematical theorems can be proven …
Many more arguments can be given to explain the difference between analytic and synthetic truths, which corresponds to the difference between mathematical and scientific truths … The scientific method arose as a rejection of the axiomatic method used by the Greeks for scientific methodology. It was this rejection of axiomatics and logical certainty in favour of empirical and observational approach which led to dramatic progress in science. However, this did involve giving up the certainties of mathematical argumentation and learning to live with the uncertainties of induction. Economists need to do the same – abandon current methodology borrowed from science and develop a new methodology suited for the study of human beings and societies.
Solow kicking Lucas and Sargent in the pants
22 Feb, 2017 at 09:30 | Posted in Economics | Comments Off on Solow kicking Lucas and Sargent in the pants
In opening the conference, Frank Morris mentioned his disappointment or disillusionment – which many others share – that the analytical success of the 1960s didn’t survive that decade. I think we all knew, even back in the 1960s, that as Geof put it, “inflation doesn’t wait for full employment.” These days inflation doesn’t even seem to care if full employment is going along on the trip … The question is: what are the possible responses that economists and economics can make to those events?
One possible response is that of Professors Lucas and Sargent. They describe what happened in the 1970s in a very strong way with a polemical vocabulary reminiscent of Spiro Agnew. Let me quote some phrases that I culled from the paper: “wildly incorrect,” “fundamentally flawed,” “wreckage,” “failure,” “fatal,” “of no value,” “dire implications,” “failure on a grand scale,” “spectacular recent failure,” “no hope” … I think that Professors Lucas and Sargent really seem to be serious in what they say, and in turn they have a proposal for constructive research that I find hard to talk about sympathetically. They call it equilibrium business cycle theory, and they say very firmly that it is based on two terribly important postulates — optimizing behavior and perpetual market clearing. When you read closely, they seem to regard the postulate of optimizing behavior as self-evident and the postulate of market-clearing behavior as essentially meaningless. I think they are too optimistic, since the one that they think is self-evident I regard as meaningless and the one that they think is meaningless, I regard as false. The assumption that everyone optimizes implies only weak and uninteresting consistency conditions on their behavior. Anything useful has to come from knowing what they optimize, and what constraints they perceive. Lucas and Sargent’s casual assumptions have no special claim to attention …
It is plain as the nose on my face that the labor market and many markets for produced goods do not clear in any meaningful sense. Professors Lucas and Sargent say after all there is no evidence that labor markets do not clear, just the unemployment survey. That seems to me to be evidence. Suppose an unemployed worker says to you “Yes, I would be glad to take a job like the one I have already proved I can do because I had it six months ago or three or four months ago. And I will be glad to work at exactly the same wage that is being paid to those exactly like myself who used to be working at that job and happen to be lucky enough still to be working at it.” Then I’m inclined to label that a case of excess supply of labor and I’m not inclined to make up an elaborate story of search or misinformation or anything of the sort. By the way I find the misinformation story another gross implausibility. I would like to see direct evidence that the unemployed are more misinformed than the employed, as I presume would have to be the case if everybody is on his or her supply curve of employment.
The purported strength of New Classical macroeconomics is that it has firm anchorage in preference-based microeconomics, and especially the decisions taken by inter-temporal utility maximizing “forward-looking” individuals.
To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics has to have microfoundations – without ever presenting either ontological or epistemological justifications for this claim – has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if – after having swallowed the sour grapes of the Sonnenschein-Mantel-Debreu theorem – these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world.
Methodologically Lucas and Sargent build their whole approach on the utopian idea that there are some ‘deep structural constants’ that never change in the economy. To most other economists it is self-evident that economic structures change over time. That was one of the main points in Keynes’ critique of Tinbergen’s econometrics. Economic parameters do not remain constant over long periods. If there is anything we know, it is that structural changes take place (something that ought to be pretty obvious if one took the ‘Lucas critique’ seriously …)
The Lucas-Sargent Holy Grail of a ‘true economic structure’ that stays constant even in the long run is, from a realist perspective, simply ludicrous. That anyone outside of Chicago should take the Lucas-Sargent kind of stuff seriously is totally incomprehensible. As Solow has it:
Suppose someone sits down where you are sitting right now and announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon. Now, Bob Lucas and Tom Sargent like nothing better than to get drawn into technical discussions, because then you have tacitly gone along with their fundamental assumptions; your attention is attracted away from the basic weakness of the whole story. Since I find that fundamental framework ludicrous, I respond by treating it as ludicrous – that is, by laughing at it – so as not to fall into the trap of taking it seriously and passing on to matters of technique.
When your day is night alone (personal)
21 Feb, 2017 at 16:50 | Posted in Varia | Comments Off on When your day is night alone (personal)
To David and Tora with love
The logical fallacy that good science builds on
21 Feb, 2017 at 09:45 | Posted in Theory of Science & Methodology | Comments Off on The logical fallacy that good science builds on
In economics most models and theories build on a kind of argumentation pattern that looks like this:
Premise 1: All Chicago economists believe in REH
Premise 2: Robert Lucas is a Chicago economist
—————————————————————–
Conclusion: Robert Lucas believes in REH
Among philosophers of science this is treated as an example of a logically valid deductive inference (and, following Quine, whenever logic is used in this post, ‘logic’ refers to deductive/analytical logic).
In a hypothetico-deductive reasoning we would use the conclusion to test the law-like hypothesis in premise 1 (according to the hypothetico-deductive model, a hypothesis is confirmed by evidence if the evidence is deducible from the hypothesis). If Robert Lucas does not believe in REH we have gained some warranted reason for non-acceptance of the hypothesis (an obvious shortcoming here being that further information beyond that given in the explicit premises might have given another conclusion).
The hypothetico-deductive method (in case we treat the hypothesis as absolutely sure/true, we rather talk of an axiomatic-deductive method) basically means that we
• Posit a hypothesis
• Infer empirically testable propositions (consequences) from it
• Test the propositions through observation or experiment
• Depending on the testing results, either find the hypothesis corroborated or falsified.
However, in science we regularly use a kind of ‘practical’ argumentation where there is little room for applying the restricted logical ‘formal transformations’ view of validity and inference. Most people would probably accept the following argument as a ‘valid’ reasoning even though, from a strictly logical point of view, it is non-valid:
Premise 1: Robert Lucas is a Chicago economist
Premise 2: The recorded proportion of Keynesian Chicago economists is zero
————————————————————————–
Conclusion: So, certainly, Robert Lucas is not a Keynesian economist
How come? Well, I guess one reason is that in science, contrary to what you find in most logic textbooks, not very many argumentations are settled by showing that ‘All Xs are Ys.’ In scientific practice we instead present other-than-analytical explicit warrants and backings — data, experience, evidence, theories, models — for our inferences. As long as we can show that our ‘deductions’ or ‘inferences’ are justifiable and have well-backed warrants, our colleagues listen to us. That our scientific ‘deductions’ or ‘inferences’ are logical non-entailments simply is not a problem. To think otherwise is committing the fallacy of misapplying formal-analytical logic categories to areas where they are pretty much irrelevant or simply beside the point.
Scientific arguments are not analytical arguments, where validity is solely a question of formal properties. Scientific arguments are substantial arguments. If Robert Lucas is a Keynesian or not, is nothing we can decide on formal properties of statements/propositions. We have to check out what the guy has actually been writing and saying to check if the hypothesis that he is a Keynesian is true or not.
Deductive logic may work well — given that it is used in deterministic closed models! In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics. Conflating those two domains of knowledge has been one of the most fundamental mistakes made in modern economics. Applying it to real-world open systems immediately proves it to be excessively narrow and hopelessly irrelevant. Both the confirmatory and the explanatory variants of hypothetico-deductive reasoning fail since there is no way you can relevantly analyze confirmation or explanation as a purely logical relation between hypothesis and evidence or between law-like rules and explananda. In science we argue and try to substantiate our beliefs and hypotheses with reliable evidence — propositional and predicate deductive logic, on the other hand, is not about reliability, but the validity of the conclusions given that the premises are true.
Deduction — and the inferences that go with it — is an example of ‘explicative reasoning,’ where the conclusions we make are already included in the premises. Deductive inferences are purely analytical and it is this truth-preserving nature of deduction that makes it different from all other kinds of reasoning. But it is also its limitation, since truth in the deductive context does not refer to a real-world ontology (only relating propositions as true or false within a formal-logic system) and as an argument scheme is totally non-ampliative — the output of the analysis is nothing else than the input.
In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:
(1) p => q
(2) q
————-
p
or, in instantiated form
(1) ∀x (Gx => Px)
(2) Pa
————
Ga
Although logically invalid, it is nonetheless a kind of inference — abduction — that may be strongly warranted and truth-producing.
Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is nothing that is logically given, but something we have to justify, argue for, and test in different ways in order to establish with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world all evidence has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are a fortiori context-dependent.
If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing/rival/contrasting potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.
In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the fact/evidence better than any other competing explanation — and so it is reasonable to consider/believe the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.
Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i. e., the explanation that provides us (given it is true) with the greatest understanding.
This, of course, does not in any way mean that we cannot be wrong. Of course we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.
That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.
For realists, the name of the scientific game is explaining phenomena … Realists typically invoke ‘inference to the best explanation’ or IBE … What exactly is the inference in IBE, what are the premises, and what the conclusion? …
It is reasonable to believe that the best available explanation of any fact is true.
F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to believe that H is true.
This scheme is valid and instances of it might well be sound. Inferences of this kind are employed in the common affairs of life, in detective stories, and in the sciences …
People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths …
People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable, more likely true than not.
Cutting wages is not the solution
20 Feb, 2017 at 19:01 | Posted in Economics | 7 Comments
A couple of years ago yours truly had a discussion with the chairman of the Swedish Royal Academy of Sciences (yes, the one that yearly presents the winners of The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel). What started the discussion was the allegation that the level of employment in the long run is a result of people’s own rational intertemporal choices and that how much people work basically is a question of incentives.
Somehow the argument sounded familiar.
When being awarded the ‘Nobel prize’ for 2011, Thomas Sargent declared that workers ought to be prepared for having low unemployment compensations in order to get the right incentives to search for jobs. The Swedish right-wing finance minister at the time appreciated Sargent’s statement and declared it to be a “healthy warning” for those who wanted to increase compensation levels.
The view is symptomatic. As in the 1930s, more and more right-wing politicians – and some economists – now suggest that lowering wages is the right medicine to strengthen the competitiveness of their faltering economies, get the economy going, increase employment and create growth that will get rid of towering debts and create balance in the state budgets.
But intimating that one could solve economic problems by wage cuts and impaired unemployment compensation, in these dire times, should really be taken as a sign of how low confidence in our economic system has sunk. Wage cuts and lower unemployment compensation levels save neither competitiveness nor jobs.
What is needed more than anything else in these times is stimulus and economic policies that increase effective demand.
On a societal level wage cuts only increase the risk of more people becoming unemployed. To think that one can solve an economic crisis in this way is to turn back to the faulty economic theories and policies that John Maynard Keynes conclusively showed to be wrong already in the 1930s. These were theories and policies that made millions of people all over the world unemployed.
It’s an atomistic fallacy to think that a policy of general wage cuts would strengthen the economy. On the contrary. The aggregate effects of wage cuts would, as shown by Keynes, be catastrophic. They would start a cumulative spiral of lower prices that would make the real debts of individuals and firms increase, since the nominal debts wouldn’t be affected by the general price and wage decrease. In an economy that has come to rest more and more on increased debt and borrowing, this would be the entrance gate to a debt-deflation crisis with decreasing investments and higher unemployment. In short, it would make depression knock on the door.
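A back-of-the-envelope illustration of the mechanism, with purely hypothetical numbers: nominal debts are fixed in money terms, so when wages and prices fall the burden of those debts relative to income rises.

```python
# Hypothetical household: debt fixed in money terms, income falling with a
# general wage/price cut.
nominal_debt   = 200_000   # unchanged by deflation
nominal_income = 40_000

for cut in (0.0, 0.10, 0.20):
    income = nominal_income * (1 - cut)
    print(f"wage/price cut {cut:4.0%}:  debt/income = {nominal_debt / income:.2f}")
# 0%: 5.00, 10%: 5.56, 20%: 6.25 -- the real debt burden grows as prices fall
```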
The impending danger for today’s economies is that they won’t get consumption and investments going. Confidence and effective demand have to be reestablished. The problem of our economies is not on the supply side. Overwhelming evidence shows that the problem today is on the demand side. Demand is – to put it bluntly – simply not sufficient to keep the wheels of the economies turning. To suggest that the solution is lower wages and unemployment compensations is just to write out a prescription for even worse catastrophes.
Goodness of fit
20 Feb, 2017 at 12:08 | Posted in Statistics & Econometrics | Comments Off on Goodness of fit
Which independent variables should be included in the equation? The goal is a “good fit” … How can a good fit be recognized? A popular measure for the satisfactoriness of a regression is the coefficient of determination, R2. If this number is large, it is said, the regression gives a good fit …
Nothing about R2 supports these claims. This statistic is best regarded as characterizing the geometric shape of the regression points and not much more.
The central difficulty with R2 for social scientists is that the independent variables are not subject to experimental manipulation. In some samples, they vary widely, producing large variance; in other cases, the observations are more tightly grouped and there is little dispersion. The variances are a function of the sample, not of the underlying relationship. Hence they cannot have any real connection to the “strength” of the relationship as social scientists ordinarily use the term, i. e., as a measure of how much effect a given change in independent variable has on the dependent variable …
Thus “maximizing R2” cannot be a reasonable procedure for arriving at a strong relationship. It neither measures causal power nor is comparable across samples … “Explaining variance” is not what social science is about.
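The point is easy to demonstrate with a small simulation of my own (a sketch, assuming the same underlying relationship y = 2x + noise in both samples): the only difference between the two samples is how widely the regressor is dispersed, yet R2 differs dramatically while the estimated slope does not.

```python
# Two samples generated from the same relationship (y = 2x + noise), differing
# only in the dispersion of the regressor x. The slope estimates are similar,
# but R2 is not: it reflects the spread of x in the sample, not the strength
# of the underlying relationship.
import numpy as np

rng = np.random.default_rng(1)

def fit_ols(x, y):
    """Return the OLS slope and R2 from a simple regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, 1 - resid.var() / y.var()

n = 500
x_wide = rng.uniform(0, 10, n)    # regressor varies widely
x_narrow = rng.uniform(4, 6, n)   # regressor tightly grouped

y_wide = 2 * x_wide + rng.normal(0, 3, n)
y_narrow = 2 * x_narrow + rng.normal(0, 3, n)

print("wide sample:   slope %.2f, R2 %.2f" % fit_ols(x_wide, y_wide))
print("narrow sample: slope %.2f, R2 %.2f" % fit_ols(x_narrow, y_narrow))
```

Both fits recover a slope close to 2, but the tightly grouped sample yields a far lower R2. The statistic tells us about the dispersion of the sample, not about how much effect a change in the independent variable has on the dependent variable.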
Debunking the NAIRU myth
18 Feb, 2017 at 17:54 | Posted in Economics | 9 Comments
In our extended NAIRU model, labor productivity growth is included in the wage bargaining process … The logical consequence of this broadening of the theoretical canvas has been that the NAIRU becomes endogenous itself and ceases to be an attractor — Milton Friedman’s natural, stable and timeless equilibrium point from which the system cannot permanently deviate. In our model, a deviation from the initial equilibrium affects not only wages and prices (keeping the rest of the system unchanged) but also demand, technology, workers’ motivation, and work intensity; as a result, productivity growth and ultimately equilibrium unemployment will change. There is in other words, nothing natural or inescapable about equilibrium unemployment, as is Friedman’s presumption, following Wicksell; rather, the NAIRU is a social construct, fluctuating in response to fiscal and monetary policies and labor market interventions. Its ephemeral (rather than structural) nature may explain why the best economists working on the NAIRU have persistently failed to agree on how high the NAIRU actually is and how to estimate it.
Many politicians and economists subscribe to the NAIRU story and its policy implication that attempts to promote full employment are doomed to fail, since governments and central banks can’t push unemployment below the critical NAIRU threshold without causing harmful runaway inflation.
Although this may sound convincing, it’s totally wrong!
One of the main problems with the NAIRU is that it essentially is a timeless long-run equilibrium attractor to which actual unemployment (allegedly) has to adjust. But if that equilibrium is itself changing — and in ways that depend on the process of getting to the equilibrium — well, then we can’t really be sure what that equilibrium will be without contextualizing unemployment in real historical time. And when we do, we will — as highlighted by Storm and Naastepad — see how seriously wrong we go if we omit demand from the analysis. Demand policy has long-run effects and matters also for structural unemployment. Governments and central banks can’t just look the other way and legitimize their passivity on unemployment by referring to the NAIRU.
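A toy simulation may help fix ideas. This is emphatically not the Storm and Naastepad model, just a stylized hysteresis rule with made-up parameters, in which the ‘equilibrium’ unemployment rate partly follows actual unemployment:

```python
# A toy illustration (not the Storm and Naastepad model): if the 'equilibrium'
# unemployment rate u_star partly follows actual unemployment (hysteresis),
# a temporary demand slump leaves a permanent mark on u_star, so the NAIRU
# is no longer a fixed attractor the economy returns to.

def simulate(hysteresis, periods=60):
    u = u_star = 5.0                                  # unemployment rates (%)
    for t in range(periods):
        demand_slump = 1.0 if 10 <= t < 20 else 0.0   # temporary negative demand shock
        u = u_star + 0.5 * (u - u_star) + demand_slump
        u_star += hysteresis * (u - u_star)           # the 'NAIRU' drifts toward actual u
    return u, u_star

for h in (0.0, 0.3):
    u_final, u_star_final = simulate(h)
    print(f"hysteresis {h}: unemployment ends at {u_final:.1f}%, 'NAIRU' ends at {u_star_final:.1f}%")
```

With no hysteresis the economy duly returns to the old five per cent once the slump is over. With even a modest degree of hysteresis, both actual and ‘equilibrium’ unemployment settle permanently higher, so the supposed attractor is itself a creature of the demand history.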
The existence of long-run equilibrium is a very handy modeling assumption to use. But that does not make it easily applicable to real-world economies. Why? Because it is basically a timeless concept utterly incompatible with real historical events. In the real world it is the second law of thermodynamics and historical — not logical — time that rules.
This importantly means that long-run equilibrium is an awfully bad guide for macroeconomic policies. In a world full of genuine uncertainty, multiple equilibria, asymmetric information and market failures, long-run equilibrium is simply a non-existent unicorn.
NAIRU does not hold water simply because it does not exist — and to base economic policies on such a weak theoretical and empirical construct is nothing short of writing out a prescription for self-inflicted economic havoc.
NAIRU is a useless concept, and the sooner we bury it, the better.
Big Data — poor science
17 Feb, 2017 at 15:19 | Posted in Statistics & Econometrics | 1 Comment
Almost everything we do these days leaves some kind of data trace in some computer system somewhere. When such data is aggregated into huge databases it is called “Big Data”. It is claimed social science will be transformed by the application of computer processing and Big Data. The argument is that social science has, historically, been “theory rich” and “data poor” and now we will be able to apply the methods of “real science” to “social science” producing new validated and predictive theories which we can use to improve the world.
What’s wrong with this? … Firstly, what is this “data” we are talking about? In its broadest sense it is some representation, usually in a symbolic form, that is machine readable and processable. And how will this data be processed? Using some form of machine learning or statistical analysis. But what will we find? Regularities or patterns … What do such patterns mean? Well, that will depend on who is interpreting them …
Looking for “patterns or regularities” presupposes a definition of what a pattern is and that presupposes a hypothesis or model, i.e. a theory. Hence big data does not “get us away from theory” but rather requires theory before any project can commence.
What is the problem here? The problem is that a certain kind of approach is being propagated within the “big data” movement that claims to not be a priori committed to any theory or view of the world. The idea is that data is real and theory is not real. That theory should be induced from the data in a “scientific” way.
I think this is wrong and dangerous. Why? Because it is not clear or honest while appearing to be so. Any statistical test or machine learning algorithm expresses a view of what a pattern or regularity is and any data has been collected for a reason based on what is considered appropriate to measure. One algorithm will find one kind of pattern and another will find something else. One data set will evidence some patterns and not others. Selecting an appropriate test depends on what you are looking for. So the question posed by the thought experiment remains “what are you looking for, what is your question, what is your hypothesis?”
Ideas matter. Theory matters. Big data is not a theory-neutral way of circumventing the hard questions. In fact it brings these questions into sharp focus and it’s time we discuss them openly.
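A small sketch of my own may illustrate the point (the data and methods below are deliberately trivial and purely illustrative): the same data set ‘contains’ different patterns depending on which tool, and hence which implicit theory of what a pattern is, you point at it.

```python
# One synthetic data set, two tools. A linear correlation test 'finds' next to
# nothing, while a simple 2-means clustering 'finds' clear structure. What
# counts as a pattern depends on the method brought to the data.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated blobs, placed so the overall linear correlation between
# the two coordinates is essentially zero.
blob_a = rng.normal(loc=(-3.0, 3.0), scale=0.5, size=(200, 2))
blob_b = rng.normal(loc=(3.0, 3.0), scale=0.5, size=(200, 2))
data = np.vstack([blob_a, blob_b])

# Tool 1: linear correlation between the two coordinates.
corr = np.corrcoef(data[:, 0], data[:, 1])[0, 1]
print(f"linear correlation: {corr:+.2f}")            # close to zero

# Tool 2: a crude 2-means clustering (a few Lloyd iterations),
# initialised with one point from each half of the data.
centers = np.array([data[0], data[-1]])
for _ in range(10):
    labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])
print("cluster centres found:", centers.round(1))    # two clearly separated groups
```

Neither answer is simply ‘what the data says’. Each is what a particular method, embodying a particular notion of what a regularity is, is able to say about the data.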
Nostalgia was better in the old days …
17 Feb, 2017 at 12:48 | Posted in Varia | Comments Off on Nostalgia was better in the old days …
Modern macroeconomics — too much micro and not enough macro
16 Feb, 2017 at 18:24 | Posted in Economics | 1 Comment
This paper … looks back into the pre-crisis (pre-2007) intellectual history of macroeconomic theory and argues that modern macro neglects the basic sources of both impulses and propagation mechanisms of business cycles. The basic problem is that modern macro consists of too much micro and not enough macro. Focus on individual preferences and production functions misses the essence of macro fluctuations — the coordination failures and macro externalities that convert interactions among individual choices into constraints that prevent workers from optimizing hours of work and firms from optimizing sales, production, and utilization. Also modern business-cycle macro has too narrow a view of the range of aggregate demand shocks that in the presence of sticky prices constrain the choices of workers and firms. Shocks that have little or nothing to do with technology, preferences, or monetary policy can interact and impose constraints on individual choices …
Modern business cycle macro is littered with contradictions resulting from its attempts to combine market clearing and utility maximization at the level of the individual household with a form of price rigidity or friction. Once the baby of full price flexibility has been thrown out, the bathwater must be changed because price rigidity is logically incompatible with market clearing … The contradictions come when modern macroeconomists attempt to explain non-market-clearing outcomes with market‐clearing language, or in Blanchard’s (2008) words “movements take place along a labor supply curve … this may give a misleading description of fluctuations.”