People are not, and cannot be, the infinitely foresightful, unbounded rational utility maximizers assumed in DSGE models. On the contrary, economic behavior, even that of highly sophisticated actors like the ‘rocket scientists’ who design financial instruments for investment banks, is inevitably driven by a partial view of the world. Heuristics and unconsidered assumptions inevitably play a crucial role. For finite beings in a world of boundless possibilities, nothing else is possible …
The problem is to focus on behavioral foundations that are most relevant to the problems of macroeconomics. An obvious place to start is with attitudes toward risk and uncertainty.
In his latest blogpost Noah Smith writes that
Rational expectations have not been empirically disconfirmed.
Noah Smith is, of course, entitled to have whatever view he likes (it’s, to say the least, difficult to empirically disconfirm the non-existence of Gods …) — but for the rest of us, let’s see how rational expectations really fares as an empirical assumption. Empirical efforts at testing the correctness of the hypothesis have resulted in a series of studies that have more or less concluded that it is not consistent with the facts. In one of the more well-known and highly respected evaluation reviews, Michael Lovell (1986) concluded:
it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.
And this is how Nikolay Gertchev summarizes studies on the empirical correctness of the hypothesis:
More recently, it has even been argued that the very conclusions of dynamic models assuming rational expectations are contrary to reality: “the dynamic implications of many of the specifications that assume rational expectations and optimizing behavior are often seriously at odds with the data” (Estrella and Fuhrer 2002, p. 1013). It is hence clear that if taken as an empirical behavioral assumption, the RE hypothesis is plainly false; if considered only as a theoretical tool, it is unfounded and self-contradictory.
For more on the issue, permit me to self-indulgently recommend reading my article Rational expectations — a fallacious foundation for macroeconomics in a non-ergodic world in real-world economics review no. 62.
Noah Smith points out advances in neoclassical DSGE models. Lars Syll does not agree. But what do DSGE economists themselves think about their models? The failure of the DSGE paradigm is clearly shown by the utter lack of independent measurement, or even conceptual description, of the core variables of these models, like ‘social indifference curves’ — which, of course, leads to a proliferation of models: each economist his own rational expectations, consistent with his and only his model. Proper science, however, does not consist only of theory but also of the measurement (and discovery!) of variables. DSGE economics fails this test, as shown by Frank Schorfheide, a DSGE economist from the University of Pennsylvania. He does what every DSGE economist should be doing. In the beginning of his paper he does not invoke the usual bland canonical incantations that this is about a non-trivial, micro-founded and sound model, but actually investigates whether these models really are non-trivial.
And he finds, as predicted above, ad-hoc chaos …
The answer to this chaos, indicative of these models’ lack of empirical discipline, should be: independent micro-measurement of this variable and aggregation of the micro data in a credible, non-trivial way (i.e. the procedure which the ‘national accounts’ statisticians have been following for decades…) – that, and only that, would yield a really sound, really micro-founded model. But that’s not what’s happening. They are just going on …
Comparable remarks can be made about the use of inflation metrics: an ad-hoc conceptual and empirical mess.
A common mistake amongst Ph.D. students is to place too much weight on the ability of mathematics to solve an economic problem. They take a model off the shelf and add a new twist. A model that began as an elegant piece of machinery designed to illustrate a particular economic issue goes through five or six amendments from one paper to the next. By the time it reaches the nth iteration it looks like a dog designed by committee.
Mathematics doesn’t solve economic problems. Economists solve economic problems. My advice: never formalize a problem with mathematics until you have already figured out the probable answer. Then write a model that formalizes your intuition and beat the mathematics into submission. That last part is where the fun begins because the language of mathematics forces you to make your intuition clear. Sometimes it turns out to be right. Sometimes you will realize your initial guess was mistaken. Always, it is a learning process.
Good advice — coming from a professor of economics and fellow of the Econometric Society and research associate of the NBER — well worth following.
In her interesting Pufendorf lectures Nancy Cartwright presents a theory of evidence and explains why randomized controlled trials (RCTs) are not at all the “gold standard” that they have lately so often been portrayed as. As yours truly has repeatedly argued on this blog (e.g. here and here), RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which their advocates portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warranty that it will work for us, or even that it works generally.
Those of us in the economics community who are impolite enough to dare question the preferred methods and models applied in mainstream macroeconomics are as a rule met with disapproval. But although people seem to get very agitated and upset by the critique — just read the commentaries on this blog if you don’t believe me — defenders of “received theory” always say that the critique is “nothing new”, that they have always been “well aware” of the problems, and so on, and so on.
So, for the benefit of all mindless practitioners of mainstream macroeconomic modeling — like Noah Smith, who defends mainstream macroeconomics with arguments like “the speed with which macro has put finance at the center of its theories of the business cycle has been nothing less than stunning,” and re the patently ridiculous representative-agent modeling, maintains that there “have been efforts to put heterogeneity into big DSGE-type models” but that these models “didn’t get quite as far, because this kind of thing is very technically difficult to model,” and as for rational expectations admits that “so far, macroeconomists are still very timid about abandoning this pillar of the Lucas/Prescott Revolution,” but that “there’s no clear alternative” — and who don’t want to be disturbed in their doings, eminent mathematical statistician David Freedman has put together a very practical list of vacuous responses to criticism that can be freely used to save your peace of mind:
We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?
Added 21:30 GMT: And if you think yours truly is the only one critical of Noah’s post, you’d better read what one of his former guest bloggers — Peter Dorman — writes in the comments field:
“….macroeconomists are definitely thinking about heterogeneity.” Come on, you must surely see that one can have both heterogeneity and representative agents. If there are 300 million agents in an economy and you model them as two or three decision-makers, on balance you are doing a lot more homogenizing than heterogenizing. This matters because just about everything we know about complex systems tells us that the density of interaction effects is central. An economy of you, me and a few other people simply isn’t going to have the same dynamics as an economy of millions of interacting agents. This is true even if agent-based modeling turns out to be unproductive. It’s enough to know that the model people are using is systematically giving bad advice. Microfounding macro is a choice, and if there aren’t any good microfoundations at hand, you don’t have to do it.
“….there’s no clear alternative [to rational expectations].” The previous paragraph applies here as well. If the only microfoundations you can find are empirically disconfirmed, regularly and broadly, then you may just have to postpone this microfounding business until you can come up with better models. Beyond that, I think the core problem is that the models are structured to permit the solution of equilibrium conditions, and that this imposes a restrictive framework for thinking about rationality and optimization. If the point were to model adjustment, we could use a much looser but more empirically defensible conception of rationality. Of course, that would also mean severing economic analysis from welfarism: we’d have to give up trying to answer questions like “what’s the welfare cost of this situation compared to the optimum?” In the end, the attachment of economists, micro and macro alike, to equilibrium models with rational agents is that they want to be able to make definitive judgments about what society should do. I prefer Keynes’ dentists: they don’t tell you whether you have an optimal dental structure, but they can help you get the structure you tell them you want.
“….[macro] looks like a vigorous, energetic field full of excited young true believers and respected older figures who are still blazing new trails.” The more accurate criticism of macro is not that it is simply an ideological smokescreen or an unthinking herd, but that it operates on a tilted playing field. It is openly acknowledged that the “leading” (i.e. career-determining) journals have engaged in tendentious selection practices for the past generation. Lots of shoddy research (which in my book includes calibration exercises promoted as “testing” theories) has gotten the star treatment, alongside a stream of genuinely significant macro work. Ideologically loaded assumptions, such as those typically used in public choice, are dropped in without any justification. Not all macro is bad! But the problem is that (a) the bad stuff of a certain ideological orientation gets an extra push that the good stuff doesn’t get, and (b) there isn’t a clear process by which the bad stuff is weeded out over time as its badness becomes evident. We refight the same damned macro battles year after year.
This short book explores a core group of 40 topics that tend to go unexplored in an Introductory Economics course. Though not a replacement for an introductory text, the work is intended as a supplement to provoke further thought and discussion by juxtaposing blackboard models of the economy with empirical observations. Each chapter starts with a short “refresher” of standard neoclassical economic modeling before getting into real world economic life. The book is an affordable supplement for all basic economics courses or for anyone who wants to review the basic ideas of economics with clear eyes.