Among the handful of really good intermediate neoclassical macroeconomics textbooks, Chad Jones's Macroeconomics (3rd ed., W. W. Norton, 2014) stands out as perhaps the best alternative, combining more traditional short-run macroeconomic analysis with a marvellously accessible coverage of the Romer model – the foundation of modern growth theory.
Unfortunately it also contains some utter nonsense!
In chapter 7 – on “The Labor Market, Wages, and Unemployment” – Jones writes (p. 181):
The point of this experiment is to show that wage rigidities can lead to large movements in employment. Indeed, they are the reason John Maynard Keynes gave, in The General Theory of Employment, Interest, and Money (1936), for the high unemployment of the Great Depression.
But this is pure nonsense. Although Keynes devoted substantial attention in the General Theory to the subject of wage rigidities, he certainly did not hold the view that wage rigidities were “the reason … for the high unemployment of the Great Depression.”
What Keynes actually argued in the General Theory was that the classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions was ill-founded and basically wrong.
To Keynes, flexible wages would only make things worse by leading to erratic price fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labor market.
Unfortunately, Jones's macroeconomics textbook is not the only one containing this kind of utter nonsense on Keynes. Similar distortions of Keynes's views can be found in, e.g., the economics textbooks of the “new Keynesian” – a grotesque misnomer – Greg Mankiw. How is this possible? Probably because these economists have but a very superficial acquaintance with Keynes's own works, and rather depend on second-hand sources like Hansen, Samuelson, Hicks and the like.
But the problems don’t end here. The rather embarrassing revision of history on Keynes is followed up a couple of pages later with the following gobsmacking remark:
In recent years, different countries in Europe have sought to reform their labor market institutions. As a result, unemployment rates in Spain, Ireland, and the Netherlands, for example, have decreased substantially from levels in the 1980s.
Checking up on Spain I get the following graph:
Hardly consistent with the textbook’s “have decreased substantially” …
Tom Sargent is a bit out of touch with the real world up there in his office … Certain people have a capacity for ignoring facts which are patently obvious, but are counter to their view of the world; so they just ignore them …
Sargent is a sort of tinkerer, playing an intellectual game. He looks at a puzzle to see if he can solve it in a particular way, exercising these fancy techniques.
Do you think this is too harsh? Well, then I suggest you read the following excerpt from the interview with Sargent in Arjo Klamer’s The New Classical Macroeconomics (1984):
It is true that these assumptions are unrealistic.
Do you feel comfortable with them?
Yes, about certain matters. I’m aware of all the problems with them. There are philosophical contradictions about using this methodology. Deep down I don’t believe in them, but I don’t have a better method of understanding what’s going on out there.
But if the best is not good enough? Wittgenstein’s dictum in Tractatus Logico-Philosophicus comes to mind:
Wovon man nicht sprechen kann, darüber muss man schweigen. (“Whereof one cannot speak, thereof one must be silent.”)
Hypothesis testing and p-values are so compelling in that they fit in so well with the Popperian model in which science advances via refutation of hypotheses. For both theoretical and practical reasons I am supportive of a (modified) Popperian philosophy of science in which models are advanced and then refuted (Gelman and Shalizi 2013). But a necessary part of falsificationism is that the models being rejected are worthy of consideration. If a group of researchers in some scientific field develops an interesting scientific model with predictive power, then I think it very appropriate to use this model for inference and to check it rigorously, eventually abandoning it and replacing it with something better if it fails to make accurate predictions in a definitive series of experiments. This is the form of hypothesis testing and falsification that is valuable to me. In common practice, however, the “null hypothesis” is a straw man that exists only to be rejected. In this case, I am typically much more interested in the size of the effect, its persistence, and how it varies across different situations. I would like to reserve hypothesis testing for the exploration of serious hypotheses and not as an indirect form of statistical inference that typically has the effect of reducing scientific explorations to yes/no conclusions.
Assumptions in scientific theories/models are often based on (mathematical) tractability – and are therefore necessarily simplifying – and used for more or less self-evidently necessary reasons of theoretical consistency. But one should also remember that assumptions are selected for a specific purpose, and so the arguments put forward for having selected a specific set of assumptions (in economics shamelessly often totally non-existent) have to be judged against that background to check whether they are warranted.
This, however, only shrinks the set of assumptions minimally – it is still necessary to decide which assumptions are innocuous and which are harmful, and what constitutes interesting/important assumptions from an ontological and epistemological point of view (explanation, understanding, prediction). Especially so if you intend to apply your theories/models to a specific target system – preferably the real world. To do this one should start by applying a real-world filter in the form of a Smell Test: Is the theory/model reasonable given what we know about the real world? If not, why should we care about it? We shouldn’t apply it (remember, time is limited, and economics is a science of scarcity and optimization …)
Many mainstream economists have the unfounded and ridiculous idea that because heterodox people like yours truly often criticize the application of mathematics in mainstream economics, we are critical of math per se.
I don’t know how many times I’ve been asked to answer this straw-man objection to heterodox economics, but here we go again.
No, there is nothing wrong with mathematics per se.
No, there is nothing wrong with applying mathematics to economics.
Mathematics is one valuable tool among other valuable tools for understanding and explaining things in economics.
What is totally wrong, however, are the utterly simplistic beliefs that
• “math is the only valid tool”
• “math is always and everywhere self-evidently applicable”
• “math is all that really counts”
• “if it’s not in math, it’s not really economics”
• “almost everything can be adequately understood and analyzed with math”
So — please — let’s have no more of this feeble-minded pseudo-debate in which heterodox economics is described as simply anti-math!
A common mistake amongst Ph.D. students is to place too much weight on the ability of mathematics to solve an economic problem. They take a model off the shelf and add a new twist. A model that began as an elegant piece of machinery designed to illustrate a particular economic issue goes through five or six amendments from one paper to the next. By the time it reaches the nth iteration it looks like a dog designed by committee.
Mathematics doesn’t solve economic problems. Economists solve economic problems. My advice: never formalize a problem with mathematics until you have already figured out the probable answer. Then write a model that formalizes your intuition and beat the mathematics into submission. That last part is where the fun begins because the language of mathematics forces you to make your intuition clear. Sometimes it turns out to be right. Sometimes you will realize your initial guess was mistaken. Always, it is a learning process.
Good advice — coming from a professor of economics and fellow of the Econometric Society and research associate of the NBER — well worth following.