The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance, the dominant attitude toward the sources of the black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.
For any scholar seriously interested in what makes a good scientific explanation, Richard Miller’s Fact and Method is a must-read. His incisive critique of Bayesianism is still unsurpassed.
Paul Krugman has often tried to explain why we should continue to use neoclassical hobby horses like IS-LM and Aggregate Supply-Aggregate Demand models. Here’s one example:
So why do AS-AD? … We do want, somewhere along the way, to get across the notion of the self-correcting economy, the notion that in the long run, we may all be dead, but that we also have a tendency to return to full employment via price flexibility. Or to put it differently, you do want somehow to make clear the notion (which even fairly Keynesian guys like me share) that money is neutral in the long run.
Well, this “fairly Keynesian” guy is not impressed. And I doubt that Keynes himself would have been impressed by having his theory characterized with catchwords like “tendency to return to full employment” and “money is neutral in the long run.”
One of Keynes’s central tenets — in clear contradistinction to the beliefs of neoclassical economists — is that there is no automatic tendency for economies to move toward full employment levels in monetary economies.
Money doesn’t matter in neoclassical macroeconomic models. That’s true. But in the real world in which we happen to live, money certainly does matter. Money is not neutral and money matters in both the short run and the long run:
The theory which I desiderate would deal … with an economy in which money plays a part of its own and affects motives and decisions, and is, in short, one of the operative factors in the situation, so that the course of events cannot be predicted in either the long period or in the short, without a knowledge of the behaviour of money between the first state and the last. And it is this which we ought to mean when we speak of a monetary economy.
J. M. Keynes, A Monetary Theory of Production (1933)
- Always, but always, plot your data.
- Remember that data quality is at least as important as data quantity.
- Always ask yourself, “Do these results make economic/common sense?”
- Check whether your “statistically significant” results are also “numerically/economically significant” (see the sketch just after this list).
- Be sure that you know exactly what assumptions are used/needed to obtain the results relating to the properties of any estimator or test that you use.
- Just because someone else has used a particular approach to analyse a problem that looks like yours, that doesn’t mean they were right!
- “Test, test, test”! (David Hendry). But don’t forget that “pre-testing” raises some important issues of its own.
- Don’t assume that the computer code that someone gives to you is relevant for your application, or that it even produces correct results.
- Keep in mind that published results represent only a fraction of the results that the author actually obtained; the rest remain unpublished.
- Don’t forget that “peer-reviewed” does NOT mean “correct results”, or even “best practices were followed”.
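On the fourth point above (the difference between statistical and economic significance), a small simulation makes the issue concrete. Here is a minimal sketch in Python, using plain NumPy on simulated data; the sample size and the true slope of 0.01 are illustrative choices of mine, not taken from the list:

```python
import numpy as np

# Illustrative only: with a large enough sample, an economically
# negligible effect still clears conventional significance thresholds.
rng = np.random.default_rng(42)

n = 1_000_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)          # true slope: 0.01 (tiny)

# OLS slope with intercept: beta = cov(x, y) / var(x)
beta_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - alpha_hat - beta_hat * x

# Standard error of the slope and the resulting t-statistic
se = np.sqrt(np.var(resid, ddof=2) / (n * np.var(x)))
t_stat = beta_hat / se

print(f"slope = {beta_hat:.4f}, t = {t_stat:.1f}")
```

With a million observations the t-statistic comes out around 10, comfortably “significant” at any conventional level, yet a slope of 0.01 on a standardised regressor would be negligible in most economic applications.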
The euro has taken away the possibility for national governments to manage their economies in a meaningful way — and in Greece the people have had to pay the true costs of the misguided austerity policies that came with it.
First, there was the widely discussed mea culpa in the October 2012 World Economic Outlook, when the IMF staff basically disavowed their own previous estimates of the size of multipliers, and in doing so they certified that austerity could not, and would not, work …
Then, the Fund tackled the issue of income inequality, and broke another taboo, namely the dichotomy between fairness and efficiency. It turns out that unequal societies tend to perform less well, and IMF staff research reached the same conclusion …
Then, of course, came the “public investment is a free lunch” chapter (chapter three of the fall 2014 World Economic Outlook).
In between, they demolished another building block of the Washington Consensus: free capital movements may sometimes be destabilizing …
These results are not surprising per se. All of these issues are highly controversial, so it is obvious that research does not find unequivocal support for a particular view. All the more so if that view, like the Washington Consensus, is pretty much an ideological construction. Yet the fact that research coming from the center of the empire acknowledges that the world is complex, and that interactions among agents go well beyond the working of efficient markets, is in my opinion quite something.
Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average. Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — [H1 + … + Hn]/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 × 1/2 + 1/2 × 1/4 = 3/8, which obviously equals neither 1/2 nor 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
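A quick way to check this arithmetic is to simulate the process. Below is a minimal sketch in Python (plain NumPy); the numbers of runs and tosses are arbitrary illustrative choices of mine, and only the probabilities 1/2 and 1/4 come from the example above:

```python
import numpy as np

# Simulate the two-coin process described above. Within each run we first
# pick coin A (P(heads) = 1/2) or coin B (P(heads) = 1/4) with probability
# 1/2 each, then toss that same coin n_tosses times.
rng = np.random.default_rng(0)
n_tosses, n_runs = 10_000, 2_000

time_averages = np.empty(n_runs)
for i in range(n_runs):
    p = 0.5 if rng.random() < 0.5 else 0.25   # choose coin A or coin B
    tosses = rng.random(n_tosses) < p         # H_1, ..., H_n as 1/0
    time_averages[i] = tosses.mean()          # (H_1 + ... + H_n) / n

# Each run's time average settles near 1/2 or 1/4 ...
print("runs near 1/2:", np.mean(np.abs(time_averages - 0.50) < 0.02))
print("runs near 1/4:", np.mean(np.abs(time_averages - 0.25) < 0.02))
# ... but the ensemble (expectational) average is 3/8 = 0.375.
print("average across runs:", round(time_averages.mean(), 3))
```

No single trajectory ever converges to 3/8; the time average and the expectational average coincide only when the process is ergodic, which is exactly what this stationary process is not.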
“To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.”
The quote is, of course, from Piketty’s Capital in the Twenty-First Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.
Smith observes that the performance of DSGE models is dependably poor at predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are, however, dismissed because—in a nutshell—there’s nothing better out there.
This argument is deficient in two respects. First, there is a self-evident flaw in the belief that a tool should not be abandoned—despite overwhelming and damning evidence that it is faulty, and dangerously so—simply because there is no obvious replacement.
The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:
“When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.”
Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. It was triggered when I took offence at a previous post of his in which he argued that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.
When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—actually undermined it, he responded—predictably—by asking what should replace it.
The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.
What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.
[h/t Jan Milch]