The euro has taken away the possibility for national governments to manage their economies in a meaningful way — and in Greece the people have had to pay the true cost of the concomitant misguided austerity policies.
First, there was the widely discussed mea culpa in the October 2012 World Economic Outlook, when the IMF staff basically disavowed their own previous estimates of the size of multipliers, and in doing so they certified that austerity could not, and would not, work …
Then, the Fund tackled the issue of income inequality and broke another taboo: the supposed dichotomy between fairness and efficiency. It turns out that unequal societies tend to perform less well, and IMF staff research reached the same conclusion …
Then, of course, there was the “public investment is a free lunch” chapter three of the World Economic Outlook, in the fall of 2014.
In between, they demolished another building block of the Washington Consensus: free capital movements may sometimes be destabilizing …
These results are not surprising per se. All of these issues are highly controversial, so it is obvious that research does not find unequivocal support for a particular view. All the more so if that view, like the Washington Consensus, is pretty much an ideological construction. Yet the fact that research coming from the center of the empire acknowledges that the world is complex, and that interactions among agents go well beyond the workings of efficient markets, is in my opinion quite something.
Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables, and so the long-run time average may not equal the probabilistic (expectational) average. Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average (H1 + … + Hn)/n converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 × 1/2 + 1/2 × 1/4 = 3/8, which obviously equals neither 1/2 nor 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
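For readers who want to check the arithmetic, here is a minimal Python sketch of the two-coin process (the function and variable names are mine, chosen for illustration). It simulates one long realisation for each possible coin choice and compares the time averages with the ensemble average of the whole system:

```python
import random

random.seed(1)

def time_average(p_heads, n):
    """Long-run fraction of heads for one realisation of the process:
    a coin with P(heads) = p_heads is chosen once, then tossed n times."""
    heads = sum(1 for _ in range(n) if random.random() < p_heads)
    return heads / n

n = 100_000

# Two possible realisations: coin A (p = 1/2) or coin B (p = 1/4),
# each chosen with probability 1/2 before the first toss.
avg_A = time_average(0.5, n)   # converges to 1/2
avg_B = time_average(0.25, n)  # converges to 1/4

# Ensemble (expectational) average over the whole "system":
ensemble = 0.5 * 0.5 + 0.5 * 0.25  # = 3/8

print(avg_A, avg_B, ensemble)
```

Neither time average ever approaches 3/8, however long we toss — which is exactly what the failure of ergodicity means here.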
“To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.”
The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.
Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are however dismissed because—in a nutshell—there’s nothing better out there.
This argument is deficient in two respects. First, there is a self-evident flaw in a belief that, despite overwhelming and damning evidence that a particular tool is faulty—and dangerously so—that tool should not be abandoned because there is no obvious replacement.
The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:
“When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.”
Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.
When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—rather undermined it, he responded—predictably—by asking what should replace it.
The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.
What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.
[h/t Jan Milch]
In its standard form, a significance test is not the kind of “severe test” we are looking for when we want to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being a strong tendency to accept the null hypothesis simply because it cannot be rejected at the standard 5% significance level. In their standard form, significance tests are biased against new hypotheses by making it hard to disconfirm the null hypothesis.
And, as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.
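A small simulation makes the point concrete. In the sketch below (plain Python; the true mean of 0.3 and the sample size of 20 are my own illustrative numbers, not from any real study) the null hypothesis is false, yet an underpowered 5% test fails to reject it most of the time — so “not rejected” is very weak evidence that the null is “probably confirmed”:

```python
import math
import random
import statistics

random.seed(2)

def z_test_rejects(sample, mu0=0.0):
    """Two-sided z-test of H0: mean = mu0, with sigma assumed known (= 1).
    Returns True if H0 is rejected at the 5% level."""
    z = (statistics.fmean(sample) - mu0) * math.sqrt(len(sample))
    return abs(z) > 1.96

# H0 is false: the true mean is 0.3, not 0. With only n = 20
# observations the test has low power, so H0 is rarely rejected.
trials = 2000
rejections = sum(
    z_test_rejects([random.gauss(0.3, 1.0) for _ in range(20)])
    for _ in range(trials)
)
print(rejections / trials)  # well below 1: H0 usually survives despite being false
```

Reading each of those non-rejections as support for the null is precisely the mistake described above.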
Most importantly — we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. As eminent mathematical statistician David Freedman writes:
I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.
Invariance assumptions need to be made in order to draw causal conclusions from non-experimental data: parameters are invariant to interventions, and so are errors or their distributions. Exogeneity is another concern. In a real example, as opposed to a hypothetical, real questions would have to be asked about these assumptions. Why are the equations “structural,” in the sense that the required invariance assumptions hold true? Applied papers seldom address such assumptions, or the narrower statistical assumptions: for instance, why are errors IID?
The tension here is worth considering. We want to use regression to draw causal inferences from non-experimental data. To do that, we need to know that certain parameters and certain distributions would remain invariant if we were to intervene. Invariance can seldom be demonstrated experimentally. If it could, we probably wouldn’t be discussing invariance assumptions. What then is the source of the knowledge?
“Economic theory” seems like a natural answer, but an incomplete one. Theory has to be anchored in reality. Sooner or later, invariance needs empirical demonstration, which is easier said than done.
Examining the Coase Theorem relies on a critical analysis of economic theory. The fundamental shortcomings of the most developed theory of the market, general equilibrium theory, as well as the restrictions imposed by the use of partial equilibrium and cases of a bilateral monopoly, undermine the assertions of the Coase Theorem. In the case of a bilateral monopoly, this construct involves serious distributional problems, and the invariance component of the theorem is seriously called into question. In addition, it is possible that the negotiations process may stop when mutually beneficial transactions take place outside of the contract curve. In those cases, social efficiency in the restricted Pareto-optimum sense will not be the outcome …
Faith in the idea that markets allocate resources efficiently is severely shaken by the set of difficulties in general equilibrium theory discussed in this article. The shortcomings of general equilibrium theory in stability theory should alert anyone tempted by the Law and Economics (L&E) movement and its applicability to fields of legal practice. The bottom line is that we do not have a theory showing how, if at all, markets reach equilibrium allocations. Because efficiency, in terms of Pareto-optimality, is an attribute only of equilibrium allocations, very serious negative implications exist for anyone claiming that markets allocate resources efficiently.
We have concentrated our critique of L&E based on the fact that economic theory is in a very sad state. Proponents of L&E seem to ignore this, appearing instead to believe that there exists somewhere a robust theoretical construct that satisfactorily explains how markets allocate resources efficiently; this article has shown such faith to be groundless. This should be enough to dismiss L&E as another example of the triumph of ideology over science. In addition, the extreme version of L&E transforms justice into a commodity and represents a disturbing backward movement in social thought. The critiques raised in this article should also suffice to call into question the idea that the main objective of legal systems is efficiency, and that efficiency is attained through the market system. There are no grounds to believe in the efficiency of the market system.
One final thought on the role of mathematics is important. In its development, economics as a discipline has been obsessed with the use of mathematical models to build a theory of competitive markets. The only function for the very awkward assumptions … was to allow the theoretician to have access to certain mathematical theorems. Functioning in this manner, economic theory has sacrificed the construction of relevant economic concepts for the sake of using mathematical tools. This is not how scientific discourse should advance, and the followers of L&E are probably not aware of this. In fact, they may have fallen victim to the illusion of scientific rigor conferred by the use, and abuse, of mathematics.
[For yours truly’s own take on the Coase Theorem and Law & Economics — in Swedish only, sorry — see here or my “Dr Pangloss, Coase och välfärdsteorins senare öden,” Zenit, 4/1996]
“We encourage all countries to be absolutely determined to go back to a sustainable mode for their fiscal policies,” Trichet said, speaking after the ECB rate decision on Thursday. “Our message is the same for all, and we trust that it is absolutely decisive not only for each country individually, but for prosperity of all.”
“Not because it is an elementary recommendation to care for your sons and daughters and not overburden them, but because it is good for confidence, consumption and investment today.”
Well, think again. Here is the abstract of ECB Working Paper no 1770, March 2015:
“We explore how fiscal consolidations affect private sector confidence, a possible channel for the fiscal transmission that has received particular attention recently as a result of governments embarking on austerity trajectories in the aftermath of the crisis … The effects are stronger for revenue-based measures and when institutional arrangements, such as fiscal rules, are weak … Consumer confidence falls around announcements of consolidation measures, an effect driven by revenue-based measures. Moreover, the effects are most relevant for European countries with weak institutional arrangements, as measured by the tightness of fiscal rules or budgetary transparency.”
The confidence fairy seems to have turned into a confidence witch. One more victim of the crisis. But this one will not be missed.
Ten years ago, a survey published in the Journal of Economic Perspectives found that 77 percent of the doctoral candidates in the leading American economics programs agreed or strongly agreed with the statement “economics is the most scientific of the social sciences.”
In the intervening decade, a massive economic crisis rocked the global economy, and most economists never saw it coming. Nevertheless, little has changed: A new paper from the same publication reveals how economists continue to believe that their science is superior to all other social sciences, such as political science, sociology, anthropology, etc. While there may be budding intentions to appeal to other disciplines in order to enrich their theories (especially psychology and neuroscience), the reality is that economists almost exclusively study—and cite—each other …
The world is still living with the effects of the most recent economic crisis, and the inability of economists to offer solutions with a significant degree of agreement shows how urgently their discipline needs to be disrupted by an injection of new ideas, methods, and assumptions about human behavior. Unfortunately, there are powerful obstacles to this disruption: elite control and lack of gender diversity …
Ten years ago, I suggested that economists would “be well advised to trade in their intellectual haughtiness for a more humble disposition.” That’s advice that has yet to be heeded.
Abstraction is the most valuable ladder of any science. In the social sciences, as Marx forcefully argued, it is all the more indispensable since there ‘the force of abstraction’ must compensate for the impossibility of using microscopes or chemical reactions. However, the task of science is not to climb up the easiest ladder and remain there forever distilling and redistilling the same pure stuff. Standard economics, by opposing any suggestions that the economic process may consist of something more than a jigsaw puzzle with all its elements given, has identified itself with dogmatism. And this is a privilegium odiosum that has dwarfed the understanding of the economic process wherever it has been exercised.