Olivier Blanchard, the IMF’s chief economist, recently wrote:
“We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time. Instead of talking about fluctuations, we increasingly used the term ‘business cycle.’ Even when we later developed techniques to deal with nonlinearities, this generally benign view of fluctuations remained dominant.”
The models that macroeconomic practitioners developed reflected this essentially linear view. Blanchard went on to observe that although macroeconomists did not ignore the possibility of extreme tail risk events, they regarded them as a thing of the past in developed countries … If you get the policy settings right, linear models will work.
Except that they won’t. And that is because these models are not realistic views of how the economy actually works. Representative agents aren’t actually representative of anyone. Real-world expectations are driven as much by emotion as by logic …
Leaving banks out of economic models, or – worse – modelling their money-creating function incorrectly, made it impossible for mainstream economists to understand the significance of the build-up of credit that led to the financial crisis. The warnings came principally from people outside mainstream economics, particularly the followers of Hyman Minsky. After the crisis, Minsky’s “financial instability hypothesis”, long consigned to a dusty shelf in a dark cupboard, suddenly became hot news. Unsurprisingly, since we had just lived through something that looked very like a “Minsky moment”.
Clearly, the exclusion of the financial industry from models of the macroeconomy was a major omission. Equally clearly, the fact that most macroeconomists did not, and to a large extent still do not, understand the mechanisms by which money is created and circulated in the modern monetary economy, is a big, big problem. Central banks are now “adding” the financial sector to existing DSGE models: but this does not begin to address the essential non-linearity of a monetary economy whose heart is a financial system that is not occasionally but NORMALLY far from equilibrium. Until macroeconomists understand this, their models will remain inadequate …
Some of the most influential people in macroeconomics have spent their lives developing theories and models that have been shown to be at best inadequate and at worst dangerously wrong. Olivier Blanchard’s call for policymakers to set policy in such a way that linear models will still work should be seen for what it is – the desperate cry of an aging economist who discovers that the foundations upon which he has built his career are made of sand. He is far from alone.
In these times — when the airwaves are drowned in commercial radio’s self-important verbal diarrhoea and utterly vacuous Eurovision-style drivel — one has almost given up.
But there is light in the darkness! Every Saturday morning, the radio channel P2 broadcasts Lördagsmorgon i P2, a programme of refreshment and serious music.
So seize the opportunity and start the day with a musical ear-wash, clearing the ear canals of lingering musical slag. Here you can listen to music by, for example, Vassilis Tsabropoulos, John Tavener, Gustav Mahler and Arvo Pärt. Listening to such music for three hours gives the mind peace and makes hope return. Thank you, public-service radio!
And thank you, Eva Sjöstrand. To spend three hours listening to wonderful music and a presenter who actually has something to say, rather than just letting her jaw flap the whole time — what a balm for the soul!
Dedicated to all old radical friends and colleagues who now happily toe the line and embrace everything they once had the courage to criticize and question …
By the early 1980s it was already common knowledge among people I hung out with that the only way to get non-crazy macroeconomics published was to wrap sensible assumptions about output and employment in something else, something that involved rational expectations and intertemporal stuff and made the paper respectable. And yes, that was conscious knowledge, which shaped the kinds of papers we wrote.
More or less says it all, doesn’t it?
And for those of us who do not want to play according to these sickening hypocritical rules — well, here’s one good alternative.
We are storytellers, operating much of the time in worlds of make believe. We do not find that the realm of imagination and ideas is an alternative to, or retreat from, practical reality. On the contrary, it is the only way we have found to think seriously about reality. In a way, there is nothing more to this method than maintaining the conviction … that imagination and ideas matter … there is no practical alternative.
Robert Lucas (1988) What Economists Do
Sounds great, doesn’t it? And here’s an example of the outcome of that serious thinking about reality …
In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problems they face. One cannot, even conceptually, arrive at a usable definition of full employment as a state in which no involuntary unemployment exists.
The difficulties are not the measurement error problems which necessarily arise in applied economics. They arise because the “thing” to be measured does not exist.
As made clear in my Friedman post earlier today, we have precious little to learn from libertarians on questions of fairness and wage discrimination. Happily there are others who have something of substance to say instead of just talking nonsense:
So let’s say a woman faces discrimination by this definition – she loses out to a man with weaker credentials. “Loses out” itself is pretty vague and could reasonably be consistent with several different observed labor market outcomes, two of which are:
Outcome A: She gets hired to the same job as the man but at lower pay, and
Outcome B: She doesn’t get the job and instead takes her next best offer in a different occupation at lower pay. Let’s further say that she is paid her real productivity in this job.
Let’s say the woman’s wage in Outcome A and the wage in Outcome B is exactly the same.
Under Outcome A, a wage regression with occupational dummies and a gender dummy is going to reliably report the magnitude of the discrimination in the gender dummy. Under Outcome B, a wage regression with occupational dummies and a gender dummy is going to report all of the discrimination under the occupational dummies. If you interpret the results thinking that “discrimination” as Scott D defines it is only in the gender coefficient, you would say there is discrimination in the case of Outcome A, but that there’s no discrimination in the case of Outcome B.
It would be one thing if these were very, very different sorts of discrimination but these are two reasonable outcomes from the exact same act of discrimination.
This is why people like Claudia Goldin see occupational dummies as describing the components of the wage gap and not as some way of eliminating part of the gap that isn’t really about gender.
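The point can be made concrete with a toy simulation. The sketch below (hypothetical numbers, a plain ordinary-least-squares fit via numpy, no claim to match any actual study) builds the two outcomes from the exact same act of discrimination — a woman paid 90 where a comparable man earns 100 — and shows the gender dummy picking up the gap under Outcome A while the occupational dummies absorb it all under Outcome B:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: return the coefficient vector."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def build(rows):
    """rows of (occ2_dummy, female_dummy, wage) -> design matrix with constant, and y."""
    X = np.array([[1.0, occ, fem] for occ, fem, _ in rows])
    y = np.array([wage for *_, wage in rows])
    return X, y

# Illustrative wages: occupation 1 pays 100, occupation 2 pays 90.
# Outcome A: the woman is hired into occupation 1 but paid only 90.
outcome_a = [(0, 0, 100.0)] * 5 + [(1, 0, 90.0)] * 5 + [(0, 1, 90.0)] * 5
# Outcome B: the woman is pushed into occupation 2 and paid her productivity there, 90.
outcome_b = [(0, 0, 100.0)] * 5 + [(1, 0, 90.0)] * 5 + [(1, 1, 90.0)] * 5

for label, rows in [("A", outcome_a), ("B", outcome_b)]:
    X, y = build(rows)
    const, occ2, female = ols(X, y)
    print(f"Outcome {label}: occupation dummy = {occ2:+.1f}, gender dummy = {female:+.1f}")
```

Under Outcome A the regression reports the 10-unit loss in the gender coefficient; under Outcome B the gender coefficient is zero and the same 10-unit loss sits entirely in the occupation dummy — which is exactly why reading occupational dummies as “not about gender” misreads the data.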
“Equal pay for equal work” is a principle that I should hope everyone can agree on. It’s great stuff. And I for one think the courts might have some role to play in ensuring the principle is abided by in our society. But it’s a pretty vacuous phrase when it comes to economic science. It’s not entirely clear what it means or how it can be operationalized. Outcome A is clearly not equal pay for equal work, but what about Outcome B? After all the woman is being paid “fairly” for the work she ended up doing. Is that equal pay for equal work? You could make the argument but it doesn’t feel right and in any case it’s clearly incommensurate with the data analysis we’re doing. When two things are incommensurate it’s typically a good idea to keep them separate. Let “equal pay for equal work” ring out as a rallying call for a basic point of fairness and don’t act like you can either affirm it or refute it with economic science. As far as I can tell you can’t.
In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.
As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of — and, strictly speaking, do not exist at all — without specifying such system-contexts.
Accepting a domain of probability theory and a sample space of “infinite populations” — a practice that is legion in modern econometrics — also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.
In his great book Statistical Models and Causal Inference: A Dialogue with the Social Sciences David Freedman touched on this fundamental problem, arising when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:
Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.
Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.
Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …
In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.
Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science.
Modern econometrics is fundamentally based on assuming — usually without any explicit justification — that we can gain causal knowledge by considering independent variables that may have an impact on the variation of a dependent variable. This is, however, far from self-evident. Often the fundamental causes are constant forces that are not amenable to the kind of analysis econometrics supplies us with. As Stanley Lieberson has it in his modern classic Making It Count:
One can always say whether, in a given empirical context, a given variable or theory accounts for more variation than another. But it is almost certain that the variation observed is not universal over time and place. Hence the use of such a criterion first requires a conclusion about the variation over time and place in the dependent variable. If such an analysis is not forthcoming, the theoretical conclusion is undermined by the absence of information …
Moreover, it is questionable whether one can draw much of a conclusion about causal forces from simple analysis of the observed variation … To wit, it is vital that one have an understanding, or at least a working hypothesis, about what is causing the event per se; variation in the magnitude of the event will not provide the answer to that question.
Causality in the social sciences — and economics — can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation. Too much in love with axiomatic-deductive modelling, neoclassical economists especially tend to forget that accounting for causation — how causes bring about their effects — demands deep subject-matter knowledge and acquaintance with the intricate fabrics and contexts. As Keynes already argued in his A Treatise on Probability, statistics and econometrics should not primarily be seen as means of inferring causality from observational data, but rather as descriptions of patterns of associations and correlations that we may use as suggestions of possible causal relations.
Until the banking industry came crashing down and depression loomed for the first time in my lifetime, I had never thought to read The General Theory of Employment, Interest, and Money, despite my interest in economics … I had heard that it was a very difficult book and that the book had been refuted by Milton Friedman, though he admired Keynes’s earlier work on monetarism. I would not have been surprised by, or inclined to challenge, the claim made in 1992 by Gregory Mankiw, a prominent macroeconomist at Harvard, that “after fifty years of additional progress in economic science, The General Theory is an outdated book. . . . We are in a much better position than Keynes was to figure out how the economy works.”
Baffled by the profession’s disarray, I decided I had better read The General Theory. Having done so, I have concluded that, despite its antiquity, it is the best guide we have to the crisis …
It is an especially difficult read for present-day academic economists, because it is based on a conception of economics remote from theirs. This is what made the book seem “outdated” to Mankiw — and has made it, indeed, a largely unread classic … The dominant conception of economics today, and one that has guided my own academic work in the economics of law, is that economics is the study of rational choice … Keynes wanted to be realistic about decision-making rather than explore how far an economist could get by assuming that people really do base decisions on some approximation to cost-benefit analysis …
Economists may have forgotten The General Theory and moved on, but economics has not outgrown it, or the informal mode of argument that it exemplifies, which can illuminate nooks and crannies that are closed to mathematics. Keynes’s masterpiece is many things, but “outdated” it is not.