NAIRU — a non-existent unicorn

30 Jun, 2020 at 17:27 | Posted in Economics | 8 Comments

In our extended NAIRU model, labor productivity growth is included in the wage bargaining process … The logical consequence of this broadening of the theoretical canvas has been that the NAIRU becomes endogenous itself and ceases to be an attractor — Milton Friedman’s natural, stable and timeless equilibrium point from which the system cannot permanently deviate. In our model, a deviation from the initial equilibrium affects not only wages and prices (keeping the rest of the system unchanged) but also demand, technology, workers’ motivation, and work intensity; as a result, productivity growth and ultimately equilibrium unemployment will change. There is, in other words, nothing natural or inescapable about equilibrium unemployment, as is Friedman’s presumption, following Wicksell; rather, the NAIRU is a social construct, fluctuating in response to fiscal and monetary policies and labor market interventions. Its ephemeral (rather than structural) nature may explain why the best economists working on the NAIRU have persistently failed to agree on how high the NAIRU actually is and how to estimate it.

Servaas Storm & C. W. M. Naastepad

Many politicians and economists subscribe to the NAIRU story and its policy implication that attempts to promote full employment are doomed to fail, since governments and central banks can’t push unemployment below the critical NAIRU threshold without causing harmful runaway inflation.

Although this may sound convincing, it’s totally wrong!

One of the main problems with NAIRU is that it essentially is a timeless long-run equilibrium attractor to which actual unemployment (allegedly) has to adjust. But if that equilibrium is itself changing — and in ways that depend on the process of getting to the equilibrium — well, then we can’t really be sure what that equilibrium will be without contextualizing unemployment in real historical time. And when we do, we will — as highlighted by Storm and Naastepad — see how seriously wrong we go if we omit demand from the analysis. Demand policy has long-run effects and matters also for structural unemployment — and governments and central banks can’t just look the other way and legitimize their passivity re unemployment by referring to NAIRU.
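The point that the equilibrium depends on the path taken towards it can be made concrete with a toy hysteresis model. This is my own illustrative sketch, not Storm and Naastepad’s actual model: if the ‘natural’ rate itself drifts towards actual unemployment, then a purely temporary demand shock shifts the long-run equilibrium permanently.

```python
# Toy hysteresis model (illustrative sketch only): actual unemployment u
# is attracted to the equilibrium rate u_star, but u_star itself drifts
# towards u -- so where the system ends up depends on the path taken.

def simulate(shock_periods, T=200, u0=5.0, ustar0=5.0,
             speed=0.3, hysteresis=0.1, shock=3.0):
    u, ustar = u0, ustar0
    for t in range(T):
        demand_shock = shock if t in shock_periods else 0.0
        u = u + speed * (ustar - u) + demand_shock   # u reverts to u_star
        ustar = ustar + hysteresis * (u - ustar)     # but u_star follows u
    return u, ustar

# Without any shock, unemployment stays at the initial equilibrium of 5.
u_calm, ustar_calm = simulate(shock_periods=set())

# A purely temporary demand shock in periods 10-14 ...
u_shocked, ustar_shocked = simulate(shock_periods=set(range(10, 15)))

print(ustar_calm)      # stays at 5.0: nothing happened
print(ustar_shocked)   # permanently above 5: the "attractor" itself moved
```

With hysteresis switched off the shock washes out entirely; with it switched on, the ‘equilibrium’ the economy returns to is a different one from the equilibrium it started at, which is exactly why a timeless attractor is the wrong picture.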

The existence of a long-run equilibrium is a very handy modeling assumption. But that does not make it easily applicable to real-world economies. Why? Because it is basically a timeless concept, utterly incompatible with real historical events. In the real world, it is the second law of thermodynamics and historical — not logical — time that rules.

This importantly means that long-run equilibrium is an awfully bad guide for macroeconomic policies. In a world full of genuine uncertainty, multiple equilibria, asymmetric information, and market failures, the long-run equilibrium is simply a non-existent unicorn.

NAIRU does not hold water simply because it does not exist — and to base economic policies on such a weak theoretical and empirical construct is nothing short of writing out a prescription for self-inflicted economic havoc.

NAIRU is a useless concept, and the sooner we bury it, the better.

Enlighten the night

30 Jun, 2020 at 09:24 | Posted in Varia | Leave a comment


P-value — a poor substitute for scientific reasoning

28 Jun, 2020 at 23:16 | Posted in Statistics & Econometrics | Leave a comment

All science entails human judgment, and using statistical models doesn’t relieve us of that necessity. When the model is misspecified, the scientific value of significance testing is actually zero — even though you’re making valid statistical inferences! Statistical models and concomitant significance tests are no substitutes for doing real science.

In its standard form, a significance test is not the kind of ‘severe test’ that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis simply because it can’t be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.
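Why “not disconfirmed” is such weak evidence can be seen in a quick simulation (my own illustrative sketch, with made-up numbers): when a real but modest effect is studied with small samples, a 5% test fails to reject the false null most of the time.

```python
import math
import random
import statistics as st

# Illustrative sketch (hypothetical numbers): the null says the mean is 0,
# but the true mean is 0.3. With n = 20 the t-test has low power, so most
# studies fail to reject this false null at the 5% level -- reading "not
# disconfirmed" as "probably confirmed" would get it exactly backwards.

random.seed(1)

def rejects_null(n=20, true_mean=0.3):
    x = [random.gauss(true_mean, 1.0) for _ in range(n)]
    t = st.mean(x) / (st.stdev(x) / math.sqrt(n))
    return abs(t) > 2.093          # two-sided 5% critical value, df = 19

share = sum(rejects_null() for _ in range(2000)) / 2000
print(f"Share of studies rejecting the false null: {share:.2f}")
# typically well below one-half of the studies reject
```

So a string of non-rejections here tells us almost nothing in favour of the null; it mostly tells us the studies were underpowered.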

Statistics is no substitute for thinking. We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. Statistical significance tests do not validate models!

In many social sciences, p-values and null hypothesis significance testing (NHST) are often used to draw far-reaching scientific conclusions — despite the fact that they are as a rule poorly understood and that there exist alternatives that are easier to understand and more informative.

Not least, confidence intervals (CIs) and effect sizes are to be preferred to the Neyman-Pearson-Fisher mishmash approach so often practiced by applied researchers.

Running a Monte Carlo simulation with 100 replications of a fictitious sample (N = 20, 95% confidence intervals, a normally distributed population with mean 10 and standard deviation 20, two-tailed p-values on a zero null hypothesis), we get varying CIs, since they are based on varying sample standard deviations. But with a minimum bound of 3.2 and a maximum bound of 26.1, the CIs still give a clear picture of what would happen in an infinite limit sequence. The p-values, on the other hand (even though in a purely mathematical-statistical sense they are more or less equivalent to CIs), vary strongly from sample to sample. Jumping around between a minimum of 0.007 and a maximum of 0.999, they don’t give you a clue about what would happen in an infinite limit sequence!

[In case you want to do your own Monte Carlo simulation, here’s an example I’ve made using Gretl:
nulldata 20
loop 100 --progressive
series y = normal(10,15)
scalar df = $nobs-1
scalar ybar = mean(y)
scalar ysd = sd(y)
scalar ybarsd = ysd/sqrt($nobs)
scalar tstat = (ybar-10)/ybarsd
scalar lowb = ybar - critical(t,df,0.025)*ybarsd
scalar uppb = ybar + critical(t,df,0.025)*ybarsd
scalar pval = pvalue(t,df,tstat)
store E:\pvalcoeff.gdt lowb uppb pval
endloop]
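For those without Gretl, here is a rough Python translation of the same exercise (my own, not part of the original post; it keeps the script’s normal(10,15) draw and approximates the t-based p-value with a normal tail):

```python
import math
import random
import statistics as st

# Rough Python translation of the Gretl script above: 100 samples of
# N = 20 from a normal population, 95% CIs and two-tailed p-values for
# the null that the mean is 10. The p-value uses a normal-tail
# approximation to the t distribution; for exact values one would use
# scipy.stats.t.sf(abs(t), 19) * 2.

random.seed(42)
TCRIT = 2.093                      # 97.5% t critical value, df = 19

lows, highs, pvals = [], [], []
for _ in range(100):
    y = [random.gauss(10, 15) for _ in range(20)]
    ybar, se = st.mean(y), st.stdev(y) / math.sqrt(20)
    tstat = (ybar - 10) / se
    lows.append(ybar - TCRIT * se)
    highs.append(ybar + TCRIT * se)
    pvals.append(math.erfc(abs(tstat) / math.sqrt(2)))   # two-tailed

# The CIs vary but stay in a recognizable band around the true mean;
# the p-values bounce around over almost the whole unit interval.
print(min(lows), max(highs))
print(min(pvals), max(pvals))
```

Running it reproduces the qualitative point of the post: the interval bounds wander within a bounded band, while the p-values from sample to sample are all over the place.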

Golden ratio (student stuff)

28 Jun, 2020 at 18:58 | Posted in Statistics & Econometrics | Leave a comment



28 Jun, 2020 at 16:28 | Posted in Varia | Leave a comment


Old loves never die …

Hicks’ chef-d’oeuvre

28 Jun, 2020 at 16:15 | Posted in Economics | 2 Comments

When we cannot accept that the observations, along the time-series available to us, are independent, or cannot by some device be divided into groups that can be treated as independent, we get into much deeper water. For we have then, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply. We are left to use our judgement, making sense of what has happened as best we can, in the manner of the historian. Applied economics does then come back to history, after all.

I am bold enough to conclude, from these considerations, that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem at hand. Very often they are not. Thus it is not at all sensible to take a small number of observations (sometimes no more than a dozen observations) and to use the rules of probability to deduce from them a ‘significant’ general law. For we are assuming, if we do so, that the variations from one to another of the observations are random, so that if we had a larger sample (as we do not) they would by some averaging tend to disappear. But what nonsense this is when the observations are derived, as not infrequently happens, from different countries, or localities, or industries — entities about which we may well have relevant information, but which we have deliberately decided, by our procedure, to ignore. By all means let us plot the points on a chart, and try to explain them; but it does not help in explaining them to suppress their names. The probability calculus is no excuse for forgetfulness.

John Hicks’ Causality in Economics is an absolute masterpiece. It ought to be on the reading list of every course in economic methodology.
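Hicks’ point about suppressing the names of the observations can be made vivid with a deliberately stylized example (mine, not his, with invented numbers): twelve observations from three ‘countries’ in which the outcome is a pure country effect produce a wildly ‘significant’ pooled slope, even though within every country the slope is exactly zero.

```python
import math

# Twelve observations from three "countries" (hypothetical numbers).
# Within each country y is constant, so the within-country slope is
# exactly zero -- yet pooling the data and suppressing the country
# names yields an overwhelmingly "significant" regression slope.

x = list(range(1, 13))                      # 4 observations per country
y = [10.0] * 4 + [20.0] * 4 + [30.0] * 4    # outcome: a pure country effect

def slope_and_t(xs, ys):
    """OLS slope of ys on xs and its t-statistic (nan if zero residuals)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((a - mx) ** 2 for a in xs)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    b = sxy / sxx
    resid = [yi - my - b * (xi - mx) for xi, yi in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se = math.sqrt(s2 / sxx)
    return b, (b / se if se > 0 else float("nan"))

b_pooled, t_pooled = slope_and_t(x, y)
group_slopes = [slope_and_t(x[4*g:4*g+4], y[4*g:4*g+4])[0] for g in range(3)]

print(t_pooled)       # around 9.2, far above the 5% critical value (2.23, df = 10)
print(group_slopes)   # [0.0, 0.0, 0.0]: no within-country relationship at all
```

Plot the points with their names attached and the ‘general law’ dissolves into three country-specific levels, which is exactly Hicks’ complaint.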

Top 15 Economics Blog of The World

25 Jun, 2020 at 08:34 | Posted in Economics | 3 Comments

Random Observations for Students of Economics
This blog is run by Gregory Mankiw …

Macro Musings Blog
Run by David Beckworth, a senior research fellow at George Mason University …

Confessions of a Supply-Side Liberal
Run by Miles Kimball, Emeritus Professor of Economics at the University of Michigan …

Marginal Revolution 
Alex Tabarrok and Tyler Cowen, both professors of economics at George Mason University, run this blog …

Naked Capitalism 
This blog was started in 2006 as a joint initiative of various economic scholars to address the issue of underreporting of the extent and severity of the underpricing of risk of all credit instruments in the US …

New Economic Perspectives
This blog is written by multiple authors and provides articles and analyses from several renowned economists, finance gurus, legal scholars, and academicians …

The Enlightened Economist
This blog is run by Dr. Diane Coyle at Cambridge University …

Real-Time Economics
This blog is run by a wing of the Wall Street Journal …

The Intelligent Economist
This blog is run by a group of economic students and graduates who wanted to share their love for their subject with the rest of the world …

The Undercover Economist
This blog is written by Tim Harford …

Thoughts on Economics
This blog mostly features articles on Post Keynesian economic theories …

Ed Dolan’s Econ Blog
Dr Dolan has had the experience of teaching in some of the world’s finest universities and colleges …

Worthwhile Canadian Initiative
This is an economics blog with a focus on Canadian economic issues …

Lars P. Syll
Dr. Syll, a professor of economics at Malmö University in Sweden, runs this blog to educate and enlighten his followers on subjects like realist social science, theories of distributive justice, and the methodology and philosophy of economics …

Mainly Macro
Written by Dr. Wren-Lewis, Emeritus Professor of Economics and Fellow of Merton College at Oxford University …

The Brand Boy

The Deficit Myth

24 Jun, 2020 at 10:41 | Posted in Economics, Politics & Society | 15 Comments

Soon after joining the Budget Committee, Kelton the deficit owl played a game with the staffers. She would first ask if they would wave a magic wand that had the power to eliminate the national debt. They all said yes. Then Kelton would ask, “Suppose that wand had the power to rid the world of US Treasuries. Would you wave it?” This question—even though it was equivalent to asking to wipe out the national debt—“drew puzzled looks, furrowed brows, and pensive expressions. Eventually, everyone would decide against waving the wand.”

Such is the spirit of Kelton’s book, The Deficit Myth. She takes the reader down trains of thought that turn conventional wisdom about federal budget deficits on its head. Kelton makes absurd claims that the reader will think surely can’t be true…but then she seems to justify them by appealing to accounting tautologies. And because she uses apt analogies and relevant anecdotes, Kelton is able to keep the book moving despite its dry subject matter. She promises the reader that MMT opens up grand new possibilities for the federal government to help the unemployed, the uninsured, and even the planet itself…if we would only open our minds to a paradigm shift …

Precisely because Kelton’s book is so unexpectedly impressive, I would urge longstanding critics of MMT to resist the urge to dismiss it with ridicule. Although it’s fun to lambaste “magical monetary theory” on social media and to ask, “Why don’t you move to Zimbabwe?” such moves will only serve to enhance the credibility of MMT in the eyes of those who are receptive to it.

Robert P. Murphy / Mises Institute

Can a government go bankrupt?
No. You cannot be indebted to yourself.

Can a central bank go bankrupt?
No. A central bank can in principle always ‘print’ more money.

Do taxpayers have to repay government debts?
No, at least not as long as the debt is incurred in the country’s own currency.

Do increased public debts burden future generations?
No, not necessarily. It depends on what the debt is used for.

Does maintaining full employment mean the government has to increase its debt?

As the national debt increases, and with it the sum of private wealth, there will be an increasing yield from taxes on higher incomes and inheritances, even if the tax rates are unchanged. These higher tax payments do not represent reductions of spending by the taxpayers. Therefore the government does not have to use these proceeds to maintain the requisite rate of spending, and can devote them to paying the interest on the national debt …

The greater the national debt the greater is the quantity of private wealth. The reason for this is simply that for every dollar of debt owed by the government there is a private creditor who owns the government obligations (possibly through a corporation in which he has shares), and who regards these obligations as part of his private fortune. The greater the private fortunes the less is the incentive to add to them by saving out of current income …

If for any reason the government does not wish to see private property grow too much … it can check this by taxing the rich instead of borrowing from them, in its program of financing government spending to maintain full employment.

Abba Lerner
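Lerner’s point that government debt and private financial wealth are two sides of the same balance sheet is an accounting identity, not a theory. A hypothetical bookkeeping sketch (my own numbers, a closed economy with government liabilities as the only financial asset) makes it visible:

```python
# Sectoral-balance bookkeeping sketch (hypothetical numbers). In a
# closed economy, each period's government deficit is, by accounting
# identity, the private sector's addition to its net holdings of
# government liabilities -- Lerner's "for every dollar of debt owed by
# the government there is a private creditor".

gov_debt = 0.0
private_wealth = 0.0               # private net holdings of gov liabilities

deficits = [50.0, 30.0, -10.0]     # a surplus (debt retirement) in period 3
for d in deficits:
    gov_debt += d                  # the state issues (or retires) liabilities
    private_wealth += d            # ...which someone in the private sector holds

print(gov_debt, private_wealth)    # the two move one for one, by construction
```

Nothing here says deficits are costless; it only says that ‘paying down the debt’ is, in the same breath, drawing down private financial wealth.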

The rhetoric of imaginary populations

24 Jun, 2020 at 09:16 | Posted in Economics, Statistics & Econometrics | Leave a comment

The most expedient population and data generation model to adopt is one in which the population is regarded as a realization of an infinite super population. This setup is the standard perspective in mathematical statistics, in which random variables are assumed to exist with fixed moments for an uncountable and unspecified universe of events …

This perspective is tantamount to assuming a population machine that spawns individuals forever (i.e., the analog to a coin that can be flipped forever). Each individual is born as a set of random draws from the distributions of Y¹, Y⁰, and additional variables collectively denoted by S …

Because of its expediency, we will usually write with the superpopulation model in the background, even though the notions of infinite superpopulations and sequences of sample sizes approaching infinity are manifestly unrealistic.

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data, and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary ‘super populations’ is one of the many dubious assumptions used in modern econometrics.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts. Accepting a domain of probability theory and a sample space of infinite populations also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world, so they cannot provide a sound inductive basis for a science aspiring to explain real-world socio-economic processes, structures, or events. It’s not tenable.


In Statistical Models and Causal Inference: A Dialogue with the Social Sciences David Freedman also touched on this fundamental problem, arising when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:

Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.

Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

And as if this wasn’t enough, one could — as we’ve seen — also seriously wonder what kind of ‘populations’ these statistical and econometric models are ultimately based on. Why should we as social scientists — and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems — unquestioningly accept models based on concepts like the ‘infinite super populations’ used in e.g. the ‘potential outcome’ framework that has become so popular lately in the social sciences?

Of course one could treat observational or experimental data as random samples from real populations. I have no problem with that (although it has to be noted that most ‘natural experiments’ are not based on random sampling from some underlying population — which, of course, means that the effect estimators are, strictly speaking, unbiased only for the specific groups studied). But probabilistic econometrics does not content itself with that kind of population. Instead, it creates imaginary populations of ‘parallel universes’ and assumes that our data are random samples from such ‘infinite super populations.’

But this is actually nothing else but hand-waving! And it is inadequate for real science. As David Freedman writes:

With this approach, the investigator does not explicitly define a population that could in principle be studied, with unlimited resources of time and money. The investigator merely assumes that such a population exists in some ill-defined sense. And there is a further assumption, that the data set being analyzed can be treated as if it were based on a random sample from the assumed population. These are convenient fictions … Nevertheless, reliance on imaginary populations is widespread. Indeed regression models are commonly used to analyze convenience samples … The rhetoric of imaginary populations is seductive because it seems to free the investigator from the necessity of understanding how data were generated.

In social sciences — including economics — it’s always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …

The metaphorical power of national-debt rhetoric

23 Jun, 2020 at 14:23 | Posted in Economics | 1 Comment

Among the things that are taboo in Sweden is saying that the state’s money comes from the state. As fiscal policy now expands during the pandemic, the editorial pages have offered philosophical musings about how the new money really comes from the taxpayers, the wage earners or the companies. The economist John Hassler wrote in Expressen (8 May) that the state finances its crisis packages by borrowing from us citizens. Mentioning the obvious, that this is new money we are talking about, seems out of the question …

That the state spends now, during the crisis, does not mean that it must save afterwards. But our imagery can make it feel that way. When the finance minister speaks of the state coffers, we picture Scrooge McDuck’s money bin, where our tax money is kept in storage. It may be well filled for the moment, but one day the money could run out.

That picture makes us receptive to the idea that, in the longer run, we must tighten up … Demands for future austerity have been raised in several quarters, among them the so-called ‘restart commission’ at the Stockholm Chamber of Commerce, whose members include Maria Wetterstrand and Klas Eklund. Former finance minister Anders Borg says that after the crisis we ‘need to run a very tight fiscal policy for five to ten years’.

We ought to ask them, and all the other austerity enthusiasts, whether they can explain exactly how the national debt could otherwise become a problem, in literal terms, without hiding behind similes or metaphors.

Max Jerneck / Aftonbladet

Yours truly and a few other economists (who still have some contact with reality) have for a couple of years now been asking why this country has a government that does not dare to pursue an expansionary fiscal policy and borrow more. Not least against the background of historically low interest rates, this is a golden opportunity to invest in infrastructure, health care, schools and welfare.

What many politicians and so-called media experts do not seem to (want to) understand is that there is a crucial difference between private and public debt. If one individual tries to save and reduce her debts, that may well be rational. But if everyone tries to do it, the consequence is that aggregate demand falls and unemployment risks rising.

A single individual must always pay her debts. But a state can always pay back its old debts with new debts. The state is not an individual. Public debt is not like private debt. A state’s debt is essentially a debt to itself, to its citizens.

A national debt is neither good nor bad in itself. It should be a means of achieving two overarching macroeconomic goals: full employment and price stability. What is ‘sacred’ is not a balanced budget or keeping the national debt down. If the idea of ‘sound’ public finances leads to higher unemployment and unstable prices, it ought to be obvious that it should be abandoned. ‘Sound’ public finances are unsound.

Unfortunately, it seems that Magdalena Andersson, like many other former students of the Stockholm School of Economics, has serious gaps in her knowledge. Perhaps the school should stop teaching 1970s monetarist Chicago nonsense and keep up with developments in theory instead. A little ‘functional finance’ and MMT might not hurt even at the power elite’s playschool …

A country’s national debt is seldom a cause of economic crisis; rather, it is a symptom of a crisis that will probably get worse if public deficits are not allowed to grow.

Sweden’s foreign debt is historically low. The consolidated national debt currently stands at a little over 20 per cent of GDP, and according to the government’s forecasts it will be around 16 per cent in two years. Given the great challenges Sweden faces in the wake of the coronavirus, continued talk of ‘responsibility’ for the state budget is irresponsible, to put it mildly. Instead of ‘safeguarding public finances’, a responsible government should safeguard society’s future. Now that even the IMF has realized that it is counterproductive to pursue an economic policy aimed at reducing the national debt, it is nothing short of deplorable that a government has not grasped that the problem with a national debt in a situation of near-negative interest rates is not that it is too large but that it is too small.

On left-wing and right-wing identity politics

22 Jun, 2020 at 14:00 | Posted in Politics & Society | 4 Comments

How left-wing is it, for instance, when Jewish students at the University of Virginia are excluded from an initiative against white supremacists because, in the eyes of their gentile, progressive fellow students and zealous identity commissars, they as Jews are bound to support Israel and are therefore the same kind of racists as all Israelis? …

There is, I believe, yet another terrible thing that left-wing and right-wing identity politics have in common … Just as the old twentieth-century Nazis were in their youth a bunch of short-changed, envious, utterly untalented, antisocial German petty philistines and lumpenproletarians, who could not find their place in the bourgeois and rather Jewish meritocracy of the late nineteenth century and who, instead of becoming good writers, stockbrokers or doctors themselves, simply declared themselves privileged quota Germans, also known as Herrenmenschen (which the Germans as such, as an eternally short-changed people, naturally thought was great), so today many, really very many, identity-politics people strike me as wanting, with their tearful, stigmatizing reference to somebody’s allegedly offensive sexual, social, gender or moral affiliation with something or other, simply to clear societal and professional competitors out of the way so as, in the end, to take their place themselves.

Maxim Biller / Die Zeit

Maxim Biller is irritated once again. A provocation? Yes. But a sobering one!

La Sagrada Famiglia (personal)

22 Jun, 2020 at 11:35 | Posted in Varia | 4 Comments

Having five grown-up kids, it is not often that all of us manage to get together. But this year we spent the Midsummer weekend together at the summer residence on our island. Lovely!

The difference between Sweden and Germany

22 Jun, 2020 at 10:28 | Posted in Politics & Society | Leave a comment

ZEIT: When you see how experts are interviewed on television, does it pain you?

Angner: Yes, it does. Sometimes I want to shout at my television: “You can’t possibly know that!”

ZEIT: The former Swedish state epidemiologist Johan Giesecke has said: “The difference between Sweden and Germany is that Germany is ruining its economy.”

Angner: Johan Giesecke is in the media so much because he is very self-confident and simply blurts things out. A clear case of overconfidence.

Die Zeit

Cultures of expertise

21 Jun, 2020 at 23:32 | Posted in Theory of Science & Methodology | Leave a comment


The ultimate takedown of teflon-coated defenders of rational expectations

21 Jun, 2020 at 09:01 | Posted in Economics | 2 Comments


James Heckman, winner of the “Nobel Prize” in economics (2000), did an interview with John Cassidy in 2010. It’s an interesting read (Cassidy’s words in italics):

What about the rational-expectations hypothesis, the other big theory associated with modern Chicago? How does that stack up now?

I could tell you a story about my friend and colleague Milton Friedman. In the nineteen-seventies, we were sitting in the Ph.D. oral examination of a Chicago economist who has gone on to make his mark in the world. His thesis was on rational expectations. After he’d left, Friedman turned to me and said, “Look, I think it is a good idea, but these guys have taken it way too far.”

It became a kind of tautology that had enormously powerful policy implications, in theory. But the fact is, it didn’t have any empirical content. When Tom Sargent, Lars Hansen, and others tried to test it using cross-equation restrictions, and so on, the data rejected the theories. There was a certain section of people who really got carried away. It became quite stifling.

What about Robert Lucas? He came up with a lot of these theories. Does he bear responsibility?

Well, Lucas is a very subtle person, and he is mainly concerned with theory. He doesn’t make a lot of empirical statements. I don’t think Bob got carried away, but some of his disciples did. It often happens. The further down the food chain you go, the more the zealots take over.

What about you? When rational expectations was sweeping economics, what was your reaction to it? I know you are primarily a micro guy, but what did you think?

What struck me was that we knew Keynesian theory was still alive in the banks and on Wall Street. Economists in those areas relied on Keynesian models to make short-run forecasts. It seemed strange to me that they would continue to do this if it had been theoretically proven that these models didn’t work.

What about the efficient-markets hypothesis? Did Chicago economists go too far in promoting that theory, too?

Some did. But there is a lot of diversity here. You can go office to office and get a different view …

So, today, what survives of the Chicago School? What is left?

I think the underlying ideas of the Chicago School are still very powerful. The basis of the rocket is still intact. It is what I see as the booster stage—the rational-expectation hypothesis and the vulgar versions of the efficient-markets hypothesis that have run into trouble. They have taken a beating—no doubt about that. I think that what happened is that people got too far away from the data, and confronting ideas with data. That part of the Chicago tradition was neglected, and it was a strong part of the tradition.

When Bob Lucas was writing that the Great Depression was people taking extended vacations—refusing to take available jobs at low wages—there was another Chicago economist, Albert Rees, who was writing in the Chicago Journal saying, No, wait a minute. There is a lot of evidence that this is not true …

When Friedman died, a couple of years ago, we had a symposium for the alumni devoted to the Friedman legacy. I was talking about the permanent income hypothesis; Lucas was talking about rational expectations. We have some bright alums. One woman got up and said, “Look at the evidence on 401k plans and how people misuse them, or don’t use them. Are you really saying that people look ahead and plan ahead rationally?” And Lucas said, “Yes, that’s what the theory of rational expectations says, and that’s part of Friedman’s legacy.” I said, “No, it isn’t. He was much more empirically minded than that.” People took one part of his legacy and forgot the rest. They moved too far away from the data.

Yes indeed, they certainly “moved too far away from the data.”

For more on the issue, permit me to self-indulgently recommend reading my article Rational expectations — a fallacious foundation for macroeconomics in a non-ergodic world in real-world economics review no. 62.
