In standard mainstream economic analysis — take a quick look at, e.g., Mankiw’s or Krugman’s textbooks — a demand expansion may very well raise measured productivity in the short run. But in the long run, expansionary demand policy measures cannot lead to sustained higher productivity and output levels.
In heterodox analyses, however, labour productivity growth is often modelled as a function of output growth: according to the Verdoorn law, the rate of technical progress varies directly with the rate of growth. Growth and productivity are in this view highly demand-determined, not only in the short run but also in the long run.
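In its simplest form the law can be written as a linear relation between labour productivity growth and output growth (the coefficient value given below is only the rough magnitude commonly reported in the empirical literature, not a claim from any single study):

```latex
\dot{p}_t = a + \lambda\, \dot{q}_t , \qquad 0 < \lambda < 1 ,
```

where $\dot{p}_t$ is the growth rate of labour productivity, $\dot{q}_t$ the growth rate of output, and $\lambda$ the 'Verdoorn coefficient'. Estimates in the literature typically cluster around 0.5, so that a one percentage point increase in output growth is associated with roughly half a percentage point of extra productivity growth.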
Given that the Verdoorn law is operative, expansionary economic policies may actually lead to increases in productivity and growth. Living in a world permeated by genuine Keynes-type uncertainty, we cannot, of course, forecast with any great precision how large those effects would be.
So, the nodal point is — has the Verdoorn Law been validated or not in empirical studies?
There have been hundreds of studies that have tried to answer that question, and, as might be imagined, the answers differ. The law has been investigated with different econometric methods (time series, IV, OLS, ECM, cointegration, etc.). The statistical and econometric problems are enormous, especially when it comes to the question of the direction of causality. Even so, most studies at the country level do confirm that the Verdoorn law holds.
Conclusion: demand policy measures may have long-run effects.
The orthodox Keynesianism of the time did have a theoretical explanation for recessions and depressions. Proponents saw the economy as a self-regulating machine in which individual decisions typically lead to a situation of full employment and healthy growth. The primary reason for periods of recession and depression was that wages did not fall quickly enough. If wages could fall rapidly and far enough, the economy would absorb the unemployed. Orthodox Keynesians also took Keynes’ approach to monetary economics to be similar to that of the classical economists.
Leijonhufvud got something entirely different from reading the General Theory. The more he looked at his footnotes, originally written in puzzlement at the disparity between what he took to be the Keynesian message and the orthodox Keynesianism of his time, the more confident he felt. The implications were amazing. Had the whole discipline catastrophically misunderstood Keynes’ deeply revolutionary ideas? Was the dominant economics paradigm deeply flawed and a fatally wrong turn in macroeconomic thinking? And if this was the case, what was Keynes actually proposing?
Leijonhufvud’s “Keynesian Economics and the Economics of Keynes” exploded onto the academic stage the following year; no mean feat for an economics book that did not contain a single equation. The book took no prisoners and aimed squarely at the prevailing metaphor of the self-regulating economy and the economics of the orthodoxy. He forcefully argued that the free movement of wages and prices can sometimes be destabilizing and move the economy away from full employment.
A must-read (not least because of the interview videos where Leijonhufvud gets the opportunity to comment on the ‘madness’ of modern mainstream macroeconomics)!
If macroeconomic models — no matter of what ilk — are built on microfoundational assumptions of representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, then the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be carried over to the real world are obviously lacking. Trying to represent real-world target systems with models flagrantly at odds with reality is futile. And whether those models are New Classical or ‘New Keynesian’ makes very little difference.
So, indeed, there really is something about the way macroeconomists construct their models nowadays that obviously doesn’t sit right.
Fortunately — when you’ve grown tired of the kind of macroeconomic apologetics produced by ‘New Keynesian’ macroeconomists — there are still some real Keynesian macroeconomists to read. One of them is Axel Leijonhufvud.
Thus your standard New Keynesian model will use Calvo pricing and model the current inflation rate as tightly coupled to the present value of expected future output gaps. Is this a requirement anyone really wants to put on a model intended to help us understand the world that actually exists out there? Thus your standard New Keynesian model will calculate the expected path of consumption as the solution to some Euler equation plus an intertemporal budget constraint, with current wealth and the projected real interest rate path as the only factors that matter. This is fine if you want to demonstrate that the model can produce macroeconomic pathologies. But is it a not-stupid thing to do if you want your model to fit reality?
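To make the two modelling choices concrete, the relations referred to look roughly as follows in generic textbook notation (this is the standard three-equation New Keynesian skeleton, not any particular author’s model):

```latex
% Calvo-based Phillips curve, and its forward solution:
\pi_t = \beta\, \mathbb{E}_t \pi_{t+1} + \kappa x_t
\;\;\Longrightarrow\;\;
\pi_t = \kappa \sum_{k=0}^{\infty} \beta^k\, \mathbb{E}_t x_{t+k}

% Log-linearized consumption Euler equation:
c_t = \mathbb{E}_t c_{t+1} - \tfrac{1}{\sigma}\bigl(i_t - \mathbb{E}_t \pi_{t+1} - \rho\bigr)
```

Here $\pi_t$ is inflation, $x_t$ the output gap, $c_t$ consumption and $i_t$ the nominal interest rate. Current inflation is exactly a discounted sum of expected future output gaps, and consumption depends only on the expected real-rate path and, via the intertemporal budget constraint, current wealth: precisely the tight coupling the passage questions.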
I remember attending the first lecture in Tom Sargent’s evening macroeconomics class back when I was an undergraduate: a very smart man from whom I have learned an enormous amount, and well deserving of his Nobel Prize. But…
He said … we were going to build a rigorous, microfounded model of the demand for money: we would assume that everyone lived for two periods, worked in the first period when they were young and sold what they produced to the old, held money as they aged, and then, when they were old, used their money to buy the goods newly produced by the new generation of young. Tom called this “microfoundations” and thought it gave powerful insights into the demand for money that you could not get from money-in-the-utility-function models.
I thought that it was a just-so story, and that whatever insights it purchased for you were probably not things you really wanted to buy. I thought it was dangerous to presume that you understood something because you had “microfoundations” when those microfoundations were wrong. After all, Ptolemaic astronomy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…
Brad DeLong is of course absolutely right here, and one can only wish that other mainstream economists would listen to him …
Oxford macroeconomist Simon Wren-Lewis elaborates in a post on his blog on why he thinks the New Classical Counterrevolution was so successful in replacing older theories, despite the fact that the New Classical models were not able to explain what happened to output and inflation in the 1970s and 1980s:
The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example …
However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM) …
The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions …
So why does this matter? … If mainstream academic macroeconomists were seduced by anything, it was a methodology – a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the New Classical Counterrevolution. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous!
Unlike Brad DeLong, Wren-Lewis seems to be impressed by the ‘rigour’ brought to macroeconomics by the New Classical counterrevolution and its rational expectations, microfoundations and ‘Lucas Critique’.
It is difficult to see why.
Wren-Lewis’s ‘portrayal’ of rational expectations is not as innocent as it may look. Rational expectations in the neoclassical economists’ world implies that relevant distributions have to be time-independent. This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality, it strains belief to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time.

It is — to say the least — very difficult to see any similarity between these modelling assumptions and the expectations of real persons. In the world of the rational expectations hypothesis, we are never disappointed in any way other than when we lose at the roulette wheel. But real life is not an urn or a roulette wheel. That is also why allowing for cases where agents ‘make predictable errors’ in ‘New Keynesian’ models does not take us any closer to a relevant and realistic depiction of actual economic decisions and behaviour. If we really want to say anything of interest about real economies, financial crises, and the decisions and choices real people make, we have to replace the rational expectations hypothesis with more relevant and realistic assumptions about economic agents and their expectations than childish roulette and urn analogies.
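Formally, the rational expectations hypothesis amounts to equating agents’ subjective expectations with the objective conditional expectation of the model itself (standard notation, a sketch of the generic hypothesis rather than any one model):

```latex
\mathbb{E}^{\text{subj}}_t[x_{t+1}] = \mathbb{E}\,[\,x_{t+1} \mid \Omega_t\,],
\qquad
\varepsilon_{t+1} \equiv x_{t+1} - \mathbb{E}_t[x_{t+1}],
\quad
\mathbb{E}_t[\varepsilon_{t+1}] = 0 ,
```

where $\Omega_t$ is the information set available at time $t$. Forecast errors $\varepsilon_{t+1}$ are then unsystematic noise with known distribution: agents can be ‘surprised’ only in the way a gambler is surprised at the roulette wheel, which is exactly the assumption criticized in the passage above.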
‘Rigorous’ and ‘precise’ New Classical models — and that goes for the ‘New Keynesian’ variety too — cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence has ever been presented.
No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, such models do not push economic science forward a single millimetre if they do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about real-world economies.
Proving things ‘rigorously’ in mathematical models is at most a starting point for doing interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.
Keynes’s intellectual revolution was to shift economists from thinking normally in terms of a model of reality in which a dog called savings wagged his tail labelled investment to thinking in terms of a model in which a dog called investment wagged his tail labelled savings.
At the end of every principle is a promise
This one is for you — David, Tora, Linnea, Amanda, and Sebastian
Including all relevant material – good, bad, and indifferent – in meta-analysis admits the subjective judgments that meta-analysis was designed to avoid. Several problems arise in meta-analysis: regressions are often non-linear; effects are often multivariate rather than univariate; coverage can be restricted; bad studies may be included; the data summarised may not be homogeneous; grouping different causal factors may lead to meaningless estimates of effects; and the theory-directed approach may obscure discrepancies. Meta-analysis may not be the one best method for studying the diversity of fields for which it has been used …
Glass and Smith carried out a meta-analysis of research on class size and achievement and concluded that “a clear and strong relationship between class size and achievement has emerged.” The study was done and analysed well; it might almost be cited as an example of what meta-analysis can do. Yet the conclusion is very misleading, as is the estimate of effect size it presents: “between class-size of 40 pupils and one pupil lie more than 30 percentile ranks of achievement.” Such estimates imply a linear regression, yet the regression is extremely curvilinear, as one of the authors’ figures shows: between class sizes of 20 and 40 there is absolutely no difference in achievement; it is only with unusually small classes that there seems to be an effect. For a teacher the major result is that for 90% of all classes the number of pupils makes no difference at all to their achievement. The conclusions drawn by the authors from their meta-analysis are formally correct, but they are statistically meaningless and particularly misleading. No estimate of effect size is meaningful unless regressions are linear, yet such linearity is seldom investigated and, when absent, is seldom taken seriously.
Systematic reviews in the sciences are extremely important to undertake in our search for robust evidence and explanations — simply averaging data from different populations, places, and contexts is not.
From the times of Galileo and Newton, physicists have learned not to confuse what is happening in the model with what is instead happening in reality. Physical models are compared with observations to see whether they are able to provide precise explanations … Can one argue that the use of mathematics in neoclassical economics serves similar purposes? … Gillies’s conclusion is that, while in physics mathematics was used to obtain precise explanations and successful predictions, one cannot draw the same conclusion about the use of mathematics in neoclassical economics in the last half century. This analysis reinforces the conclusion about the pseudo-scientific nature of neoclassical economics … given the systematic failure of predictions of neoclassical economics.
Francesco Sylos Labini is a researcher in physics. His book is highly recommended reading for anyone with an interest in understanding the pseudo-scientific character of modern mainstream economics. Turning economics into a ‘pseudo-natural-science’ is — as Keynes made very clear in a letter to Roy Harrod back in 1938 — something that has to be firmly ‘repelled.’
In Milton Friedman’s infamous essay The Methodology of Positive Economics (1953), it was argued that the realism or ‘truth’ of a theory’s assumptions isn’t important. The only thing that really matters is how good the predictions made by the theory are.
Please feel free to apply that science norm to the following statement by Robert Lucas in Wall Street Journal, September 19, 2007:
I am skeptical about the argument that the subprime mortgage problem will contaminate the whole mortgage market, that housing construction will come to a halt, and that the economy will slip into a recession. Every step in this chain is questionable and none has been quantified. If we have learned anything from the past 20 years it is that there is a lot of stability built into the real economy.
Robert Lucas — a lousy pseudo-scientist and an even worse forecaster!
The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory.
And that guy even got a ‘Nobel prize’ in economics …
Ultimately, the problem isn’t with worshipping models of the stars, but rather with uncritical worship of the language used to model them, and nowhere is this more prevalent than in economics. The economist Paul Romer at New York University has recently begun calling attention to an issue he dubs ‘mathiness’ – first in the paper ‘Mathiness in the Theory of Economic Growth’ (2015) and then in a series of blog posts. Romer believes that macroeconomics, plagued by mathiness, is failing to progress as a true science should, and compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism. Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.
‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’
Right now, however, there is widespread bias in favour of using mathematics. The success of math-heavy disciplines such as physics and chemistry has granted mathematical formulas decisive authoritative force. Lord Kelvin, the 19th-century mathematical physicist, expressed this quantitative obsession:
“When you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it… in numbers, your knowledge is of a meagre and unsatisfactory kind.”
The trouble with Kelvin’s statement is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.