Macroeconomic quackery

20 July, 2014 at 13:41 | Posted in Economics | 2 Comments

In a recent interview, Chicago übereconomist Robert Lucas said:

the evidence on postwar recessions … overwhelmingly supports the dominant importance of real shocks.

So, according to Lucas, changes in tastes and technologies should be able to explain the main fluctuations in, for example, unemployment that we have seen during the last six or seven decades. But really — not even a Nobel laureate could, in his wildest imagination, come up with a warranted and justified explanation based solely on changes in tastes and technologies.

How do we protect ourselves from this kind of scientific nonsense? In The Scientific Illusion in Empirical Macroeconomics Larry Summers has a suggestion well worth considering:

Modern scientific macroeconomics sees a (the?) crucial role of theory as the development of pseudo worlds or in Lucas’s (1980b) phrase the “provision of fully articulated, artificial economic systems that can serve as laboratories in which policies that would be prohibitively expensive to experiment with in actual economies can be tested out at much lower cost” and explicitly rejects the view that “theory is a collection of assertions about the actual economy” …


A great deal of the theoretical macroeconomics done by those professing to strive for rigor and generality, neither starts from empirical observation nor concludes with empirically verifiable prediction …

The typical approach is to write down a set of assumptions that seem in some sense reasonable, but are not subject to empirical test … and then derive their implications and report them as a conclusion. Since it is usually admitted that many considerations are omitted, the conclusion is rarely treated as a prediction …

However, an infinity of models can be created to justify any particular set of empirical predictions … What then do these exercises teach us about the world? … If empirical testing is ruled out, and persuasion is not attempted, in the end I am not sure these theoretical exercises teach us anything at all about the world we live in …

Reliance on deductive reasoning rather than theory based on empirical evidence is particularly pernicious when economists insist that the only meaningful questions are the ones their most recent models are designed to address. Serious economists who respond to questions about how today’s policies will affect tomorrow’s economy by taking refuge in technobabble about how the question is meaningless in a dynamic games context abdicate the field to those who are less timid. No small part of our current economic difficulties can be traced to ignorant zealots who gained influence by providing answers to questions that others labeled as meaningless or difficult. Sound theory based on evidence is surely our best protection against such quackery.

Added 23:00 GMT: Commenting on this post, Brad DeLong writes:

What is Lucas talking about?

If you go to Robert Lucas’s Nobel Prize Lecture, there is an admission that his own theory that monetary (and other demand) shocks drove business cycles because unanticipated monetary expansions and contractions caused people to become confused about the real prices they faced simply did not work:

Robert Lucas (1995): Monetary Neutrality:
“Anticipated monetary expansions … are not associated with the kind of stimulus to employment and production that Hume described. Unanticipated monetary expansions, on the other hand, can stimulate production as, symmetrically, unanticipated contractions can induce depression. The importance of this distinction between anticipated and unanticipated monetary changes is an implication of every one of the many different models, all using rational expectations, that were developed during the 1970s to account for short-term trade-offs…. The discovery of the central role of the distinction between anticipated and unanticipated money shocks resulted from the attempts, on the part of many researchers, to formulate mathematically explicit models that were capable of addressing the issues raised by Hume. But I think it is clear that none of the specific models that captured this distinction in the 1970s can now be viewed as a satisfactory theory of business cycles”

And Lucas explicitly links that analytical failure to the rise of attempts to identify real-side causes:

“Perhaps in part as a response to the difficulties with the monetary-based business cycle models of the 1970s, much recent research has followed the lead of Kydland and Prescott (1982) and emphasized the effects of purely real forces on employment and production. This research has shown how general equilibrium reasoning can add discipline to the study of an economy’s distributed lag response to shocks, as well as to the study of the nature of the shocks themselves…. Progress will result from the continued effort to formulate explicit theories that fit the facts, and that the best and most practical macroeconomics will make use of developments in basic economic theory.”

But these real-side theories do not appear to me to “fit the facts” at all.

And yet Lucas’s overall conclusion is:

“In a period like the post-World War II years in the United States, real output fluctuations are modest enough to be attributable, possibly, to real sources. There is no need to appeal to money shocks to account for these movements”

It would make sense to say that there is “no need to appeal to money shocks” only if there were a well-developed theory and models by which pre-2008 post-WWII business-cycle fluctuations are modeled as and explained by identified real shocks. But there isn’t. All Lucas will say is that post-WWII pre-2008 business-cycle fluctuations are “possibly” “attributable… to real shocks” because they are “modest enough”. And he says this even though:

“An event like the Great Depression of 1929-1933 is far beyond anything that can be attributed to shocks to tastes and technology. One needs some other possibilities. Monetary contractions are attractive as the key shocks in the 1929-1933 years, and in other severe depressions, because there do not seem to be any other candidates”

as if 2008-2009 were clearly of a different order of magnitude with a profoundly different signature in the time series than, say, 1979-1982.

Why does he think any of these things?

Yes, indeed, how could any person think any of those things …

Peter Dorman on economists’ obsession with homogeneity and average effects

19 July, 2014 at 20:41 | Posted in Economics | 6 Comments

Peter Dorman is one of those rare economists that it is always a pleasure to read. Here his critical eye is focussed on economists’ infatuation with homogeneity and averages:

You may feel a gnawing discomfort with the way economists use statistical techniques. Ostensibly they focus on the differences between people, countries or whatever the units of observation happen to be, but they nevertheless seem to treat the population of cases as interchangeable—as homogeneous on some fundamental level. As if people were replicants.

You are right, and this brief talk is about why and how you’re right, and what this implies for the questions people bring to statistical analysis and the methods they use.

Our point of departure will be a simple multiple regression model of the form

y = β0 + β1 x1 + β2 x2 + … + ε

where y is an outcome variable, x1 is an explanatory variable of interest, the other x’s are control variables, the β’s are coefficients on these variables (or a constant term, in the case of β0), and ε is a vector of residuals. We could apply the same analysis to more complex functional forms, and we would see the same things, so let’s stay simple.

What question does this model answer? It tells us the average effect that variations in x1 have on the outcome y, controlling for the effects of other explanatory variables. Repeat: it’s the average effect of x1 on y.

This model is applied to a sample of observations. What is assumed to be the same for these observations? (1) The outcome variable y is meaningful for all of them. (2) The list of potential explanatory factors, the x’s, is the same for all. (3) The effects these factors have on the outcome, the β’s, are the same for all. (4) The proper functional form that best explains the outcome is the same for all. In these four respects all units of observation are regarded as essentially the same.

Now what is permitted to differ across these observations? Simply the values of the x’s and therefore the values of y and ε. That’s it.

Thus measures of the difference between individual people or other objects of study are purchased at the cost of immense assumptions of sameness. It is these assumptions that both reflect and justify the search for average effects …

In the end, statistical analysis is about imposing a common structure on observations in order to understand differentiation. Any structure requires assuming some kinds of sameness, but some approaches make much more sweeping assumptions than others. An unfortunate symbiosis has arisen in economics between statistical methods that excessively rule out diversity and statistical questions that center on average (non-diverse) effects. This is damaging in many contexts, including hypothesis testing, program evaluation, forecasting—you name it …

The first step toward recovery is admitting you have a problem. Every statistical analyst should come clean about what assumptions of homogeneity are being made, in light of their plausibility and the opportunities that exist for relaxing them.
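
Dorman’s complaint is easy to make concrete. Here is a minimal simulation sketch (Python; my illustration, not part of Dorman’s talk) in which every unit of observation has its own slope, yet a common-coefficient regression dutifully reports a single average effect that is true of nobody:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Heterogeneous population: half respond to x1 with slope +2.0,
# the other half with slope -1.0.
beta_i = rng.choice([2.0, -1.0], size=n)
x1 = rng.normal(size=n)
y = 1.0 + beta_i * x1 + rng.normal(scale=0.5, size=n)

# A common-slope OLS fit recovers only the population-average effect,
# roughly 0.5*2.0 + 0.5*(-1.0) = 0.5 -- a number that describes no one.
X = np.column_stack([np.ones(n), x1])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated common slope:  {beta_hat[1]:.2f}")  # close to 0.5
print(f"slopes actually present: {sorted({float(b) for b in beta_i})}")
```

The fitted slope is a perfectly correct average and a faithful description of no individual: precisely the sameness assumptions (3) and (4) doing their silent work.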

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we “export” them to our “target systems” — we have to show that they hold not only under ceteris paribus conditions. Mechanisms that hold only ceteris paribus are a fortiori of limited value for our understanding, explanation and prediction of real economic systems. As the always eminently quotable Keynes writes (emphasis added) in A Treatise on Probability (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes complained about long ago. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide that neither Haavelmo, nor the legions of probabilistic econometricians following in his footsteps, give supportive evidence for their considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth”. And I doubt if it has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to the causal process lying behind events and disclose the causal forces behind what appears to be simple facts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance, and perhaps unobservable and non-additive, but not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential causes, not real causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption, however, one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
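
A toy simulation (again mine, purely illustrative) shows what fixed-parameter estimation does when the targeted cause is not stable: fitted to data whose true parameter flips sign mid-sample, the regression reports a single “exportable” coefficient that was the acting cause in neither regime.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200

# Data-generating process with a structural break: the parameter
# linking x to y flips sign halfway through the sample.
x = rng.normal(size=T)
beta_true = np.where(np.arange(T) < T // 2, 1.5, -1.5)
y = beta_true * x + rng.normal(scale=0.3, size=T)

# A fixed-parameter regression averages the two regimes into one
# coefficient near zero -- a "stable cause" that operated in no period.
beta_full = (x @ y) / (x @ x)
beta_1st = (x[:T // 2] @ y[:T // 2]) / (x[:T // 2] @ x[:T // 2])
beta_2nd = (x[T // 2:] @ y[T // 2:]) / (x[T // 2:] @ x[T // 2:])
print(f"full-sample estimate: {beta_full:+.2f}")  # close to 0
print(f"first half:           {beta_1st:+.2f}")   # close to +1.5
print(f"second half:          {beta_2nd:+.2f}")   # close to -1.5
```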

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – as with most contemporary endeavours of mainstream economic theoretical modeling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage The Flaw of Averages

Den svarta bilden

19 July, 2014 at 14:34 | Posted in Varia | Leave a comment

 

Till Isagel

19 July, 2014 at 09:54 | Posted in Varia | Leave a comment

 

Wonderful musical settings of perhaps our foremost poetic language-equilibrist — Harry Martinson
[h/t Jan Milch]

Chicago Follies (X)

19 July, 2014 at 08:34 | Posted in Economics | Leave a comment

 

Although I never believed it when I was young and held scholars in great respect, it does seem to be the case that ideology plays a large role in economics. How else to explain Chicago’s acceptance of not only general equilibrium but a particularly simplified version of it as ‘true’ or as a good enough approximation to the truth? Or how to explain the belief that the only correct models are linear and that the von Neumann prices are those to which actual prices converge pretty smartly? This belief unites Chicago and the Classicals; both think that the ‘long-run’ is the appropriate period in which to carry out analysis. There is no empirical or theoretical proof of the correctness of this. But both camps want to make an ideological point. To my mind that is a pity since clearly it reduces the credibility of the subject and its practitioners.

Frank Hahn

Of the 74 people who have been awarded “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,” 28 — almost 40% — have been affiliated with the University of Chicago.

The world is really a small place when it comes to economics …

Chicago Follies (IX)

19 July, 2014 at 08:18 | Posted in Economics | Leave a comment

Tom Sargent is a bit out of touch with the real world up there in his office … Certain people have a capacity for ignoring facts which are patently obvious, but are counter to their view of the world; so they just ignore them …

Sargent is a sort of tinkerer, playing an intellectual game. He looks at a puzzle to see if he can solve it in a particular way, exercising these fancy techniques.

Alan Blinder

Calibration economics — a religious conviction without any scientific value whatsoever

18 July, 2014 at 19:09 | Posted in Economics | Leave a comment

There are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few — if any — deserve that regard less than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.


Let’s see what two eminent econometricians have to say about the calibrationist claim of being a scientific advance in economics. In the Journal of Economic Perspectives (1996, vol. 10), Lars Peter Hansen and James J. Heckman write:

It is only under very special circumstances that a micro parameter such as the inter-temporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.

Mathematical statistician Aris Spanos — in Error and Inference (Mayo & Spanos, 2010, p. 240) — is no less critical:

Given that “calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …

The idea that it should suffice that a theory “is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity

And this is the verdict of Paul Krugman:

The point is that if you have a conceptual model of some aspect of the world, which you know is at best an approximation, it’s OK to see what that model would say if you tried to make it numerically realistic in some dimensions.

But doing this gives you very little help in deciding whether you are more or less on the right analytical track. I was going to say no help, but it is true that a calibration exercise is informative when it fails: if there’s no way to squeeze the relevant data into your model, or the calibrated model makes predictions that you know on other grounds are ludicrous, something was gained. But no way is calibration a substitute for actual econometrics that tests your view about how the world works.

In physics it may possibly not be straining credulity too much to model processes as ergodic – where time and history do not really matter – but in social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.
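
What non-ergodicity means is easy to illustrate with a toy simulation (mine, not from the post): a multiplicative gamble whose ensemble average grows steadily while almost every individual time path decays, so the “average” outcome describes no typical history.

```python
import numpy as np

rng = np.random.default_rng(2)
agents, periods = 200_000, 50

# Each period, wealth (starting at 1) is multiplied by 1.5 or 0.6 with
# equal probability. The two perspectives disagree:
#   ensemble: E[factor] = 1.05, so the mean grows 5% per period;
#   time:     0.5*ln(1.5) + 0.5*ln(0.6) = -0.053, so a typical path decays.
factors = rng.choice([1.5, 0.6], size=(agents, periods))
wealth = factors.prod(axis=1)

print(f"theoretical ensemble mean: {1.05 ** periods:.1f}")     # about 11.5
print(f"simulated ensemble mean:   {wealth.mean():.1f}")       # noisy, near 11.5
print(f"median final wealth:       {np.median(wealth):.3f}")   # about 0.07
print(f"agents ending below 1:     {(wealth < 1).mean():.0%}") # about 76%
```

In an ergodic world the ensemble and time averages would coincide; here they do not even share a direction.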

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Lucas, Sargent, Prescott, Kydland and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

Instead of assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.

To say, as Edward Prescott does, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations, one also has to support its underlying assumptions. No such support is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption in much of modern macroeconomics. Perhaps the reason is, as Paul Krugman has it, that economists often mistake

beauty, clad in impressive looking mathematics, for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis into an irrefutable proposition. Irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but they are not science.

 

The Phillips curve — a multi-sector approach

18 July, 2014 at 18:33 | Posted in Economics | 1 Comment

There is an old story about a policeman who sees a drunk looking for something under a streetlight and asks what he is looking for. The drunk replies he has lost his car keys and the policeman joins in the search. A few minutes later the policeman asks if he is sure he lost them here and the drunk replies “No, I lost them in the park.” The policeman then asks “So why are you looking here?” to which the drunk replies “Because this is where the light is.” That story has much relevance for the economics profession’s approach to the Phillips curve …

The question triggering the discussion is: can Phillips curve (PC) theory account for inflation and the non-emergence of sustained deflation in the Great Recession? …

There is an obvious explanation that has been overlooked by mainstream economists for nearly forty years because they have preferred to keep looking under the “lamppost” of their conventional constructions. That alternative explanation rests on a combination of downward nominal wage rigidity plus incomplete incorporation of inflation expectations in a multi-sector economy.

The alternative has its roots in the seminal ideas of James Tobin, expressed in his 1971 presidential address to the American Economic Association. Tobin identified the critical multi-sector aspect of inflation for the Phillips curve. However, he failed to identify the issue of incomplete incorporation of inflation expectations; nor did he present a mathematical model of the inflation process, which is a cardinal sin in today’s profession.

The mainstream profession’s focus has been, and continues to be, the formation of expectations (i.e. adaptive, rational, and most recently, near-rational). In my view, the real issue is the extent to which inflation expectations are incorporated into wage behavior. Workers may have absolutely correct expectations of inflation but not incorporate them into nominal wage demands because of job fears …

The simplest version of the multi-sector model has complete downward nominal wage rigidity and zero incorporation of inflation expectations in sectors with unemployment. That model does very well explaining inflation during the Great Recession. As the proportion of sectors with unemployment diminishes, we should expect inflation to start increasing. That is my “old Keynesian” prediction, which I hope can inoculate against the inevitable claims that will come from the Friedman-Lucas tribe that we have overshot the natural rate of unemployment.

Lastly, there is a clear lesson for economists. If the profession would move away from the lamppost, it might actually find the keys to inflation and the Phillips curve.

Thomas Palley
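
A minimal sketch of the simplest multi-sector model Palley describes (Python; the excerpt gives no numbers, so every magnitude below is an assumption made up purely for illustration): sectors with unemployment incorporate no inflation expectations into wage demands and cannot cut nominal wages, while fully employed sectors pass expectations straight through. Aggregate wage inflation then stays positive even with widespread slack, and rises as the share of slack sectors falls.

```python
import numpy as np

rng = np.random.default_rng(3)
sectors = 100_000
expected_inflation = 0.02  # assumed: workers correctly expect 2% inflation

for share_slack in (0.8, 0.6, 0.4, 0.2):
    slack = rng.random(sectors) < share_slack
    shock = rng.normal(0.0, 0.02, size=sectors)  # sector-level demand shocks

    # Slack sectors: zero incorporation of expectations (job fears) and
    # complete downward nominal wage rigidity (wage change floored at 0).
    # Fully employed sectors: expectations fully built into wage demands.
    wage_change = np.where(slack,
                           np.maximum(0.0, shock),
                           expected_inflation + shock)

    print(f"slack share {share_slack:.0%}: "
          f"aggregate wage inflation {wage_change.mean():+.2%}")
```

Even with 80% of sectors in slack, aggregate wage inflation stays positive (no sustained deflation), and it creeps upward as sectors reach full employment, which is the “old Keynesian” prediction in the excerpt.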

Chicago Follies (VIII)

18 July, 2014 at 10:46 | Posted in Economics | Leave a comment

Kevin Hoover: The Great Recession and the recent financial crisis have been widely viewed in both popular and professional commentary as a challenge to rational expectations and to efficient markets … I’m asking you whether you accept any of the blame … there’s been a lot of talk about whether rational expectations and the efficient-markets hypothesis is where we should locate the analytical problems that made us blind.

Robert Lucas: You know, people had no trouble having financial meltdowns in their economies before all this stuff we’ve been talking about came on board. We didn’t help, though; there’s no question about that. We may have focused attention on the wrong things, I don’t know.

Source

Chicago Follies (VII)

18 July, 2014 at 10:42 | Posted in Economics | 1 Comment

In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problems they face. One cannot, even conceptually, arrive at a usable definition of full employment as a state in which no involuntary unemployment exists.

The difficulties are not the measurement error problems which necessarily arise in applied economics. They arise because the “thing” to be measured does not exist.

Robert Lucas

There are, of course, large numbers of people who voluntarily choose not to work for pay (such as the voluntarily retired, the idle rich, those who prefer handouts to working at jobs, those who stay at home full time to care for children, and so on) … But common sense and the observations and experiences of literally hundreds of millions of people testify that there is also involuntary unemployment and that it is by no means an isolated or rare phenomenon … Only madmen — or an economist with both ‘trained incapacity’ and doctrinal passion — could deny the reality of involuntary unemployment.

Mancur Olson
