Much of current academic teaching and research has been criticized for its lack of relevance, that is, of immediate practical impact … I submit that the consistently indifferent performance in practical applications is in fact a symptom of a fundamental imbalance in the present state of our discipline. The weak and all too slowly growing empirical foundation clearly cannot support the proliferating superstructure of pure, or should I say, speculative economic theory …
Uncritical enthusiasm for mathematical formulation tends often to conceal the ephemeral substantive content of the argument behind the formidable front of algebraic signs … In the presentation of a new model, attention nowadays is usually centered on a step-by-step derivation of its formal properties. But if the author — or at least the referee who recommended the manuscript for publication — is technically competent, such mathematical manipulations, however long and intricate, can even without further checking be accepted as correct. Nevertheless, they are usually spelled out at great length. By the time it comes to interpretation of the substantive conclusions, the assumptions on which the model has been based are easily forgotten. But it is precisely the empirical validity of these assumptions on which the usefulness of the entire exercise depends.
What is really needed, in most cases, is a very difficult and seldom very neat assessment and verification of these assumptions in terms of observed facts. Here mathematics cannot help and because of this, the interest and enthusiasm of the model builder suddenly begins to flag: “If you do not like my set of assumptions, give me another and I will gladly make you another model; have your pick.” …
But shouldn’t this harsh judgment be suspended in the face of the impressive volume of econometric work? The answer is decidedly no. This work can be in general characterized as an attempt to compensate for the glaring weakness of the data base available to us by the widest possible use of more and more sophisticated statistical techniques. Alongside the mounting pile of elaborate theoretical models we see a fast-growing stock of equally intricate statistical tools. These are intended to stretch to the limit the meager supply of facts … Like the economic models they are supposed to implement, the validity of these statistical tools depends itself on the acceptance of certain convenient assumptions pertaining to stochastic properties of the phenomena which the particular models are intended to explain; assumptions that can be seldom verified.
Simon Wren-Lewis today has yet another post up on forecasting activities. In an earlier post yours truly criticized his views, and now he seems to admit that there might be a point in questioning an activity like forecasting, which admittedly has little value and comes up with results no better than “intelligent guessing.” Forecasting obviously isn’t — “trivial” or not — a costless activity, so why pay for it then?
This time focusing on model-based forecasting by central banks, Wren-Lewis writes:
You can see the problem. By using an intelligent guess to forecast, the bank appears to be ignoring information, and it seems to be telling inconsistent stories. Central banks that are accountable do not want to get put in this position. From their point of view, it would be much easier if they used their main policy analysis model, plus judgement, to also make unconditional forecasts. They can always let the intelligent guesswork inform their judgement. If these forecasts are not worse than intelligent guesswork, then the cost to them of using the model to produce forecasts – a few extra economists – are trivial.
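Wren-Lewis’s benchmark of “intelligent guesswork” can be made concrete with a toy simulation (all numbers invented, not from any actual forecasting exercise). If output follows a random walk, the best available forecast of next year is simply this year’s value; a seemingly more sophisticated trend-extrapolation “model” actually does worse:

```python
import random

random.seed(42)

# Simulate an economy whose (log) output follows a random walk:
# y_t = y_{t-1} + e_t. Under a random walk, the best forecast of
# y_{t+1} is simply y_t -- the "intelligent guess".
T = 10_000
y = [0.0]
for _ in range(T):
    y.append(y[-1] + random.gauss(0, 1))

# "Intelligent guess": next year looks like this year.
naive_errors = [(y[t + 1] - y[t]) ** 2 for t in range(T)]

# A more "sophisticated" forecast: extrapolate the latest trend
# (this year's value plus the last observed change).
trend_errors = [(y[t + 1] - (y[t] + (y[t] - y[t - 1]))) ** 2
                for t in range(1, T)]

mse_naive = sum(naive_errors) / len(naive_errors)
mse_trend = sum(trend_errors) / len(trend_errors)
print(f"MSE, intelligent guess: {mse_naive:.2f}")
print(f"MSE, trend model:       {mse_trend:.2f}")
```

Under these assumptions the trend model roughly doubles the mean squared error: one illustration of how a naive guess can beat an elaborate forecast whenever the data-generating process does not cooperate with the model.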
To me this whole discussion really underlines how important it is in social sciences — and economics in particular — to incorporate Keynes’s far-reaching and incisive analysis of induction and evidential weight in his seminal A Treatise on Probability (1921).
According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by “modern” social sciences. And often we “simply do not know.”
How strange that social scientists and mainstream economists as a rule do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical “probability.” In the quest for measurable quantities one turns a blind eye to qualities and looks the other way.
So why do companies, governments, and central banks continue with this more or less expensive, but obviously worthless, activity?
A couple of months ago yours truly was interviewed by a public radio journalist working on a series on Great Economic Thinkers. We were discussing the monumental failures of the predictions-and-forecasts business. But — the journalist asked — if these cocksure economists with their “rigorous” and “precise” mathematical-statistical-econometric models are so wrong again and again, why do they persist in wasting time on it?
In a discussion on uncertainty and the hopelessness of accurately modeling what will happen in the real world — in M. Szenberg’s Eminent Economists: Their Life Philosophies — Nobel laureate Kenneth Arrow comes up with what is probably the most plausible reason:
It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable. An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’
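The kind of verification Arrow’s statisticians performed can be sketched in a few lines. The numbers below are entirely hypothetical: a forecaster with no skill calls “wet” or “dry” independently of the actual weather, and a simple binomial test shows the hit rate is indistinguishable from chance.

```python
import random
from math import comb

random.seed(7)

# Hypothetical verification of skill-free monthly forecasts, in the
# spirit of Arrow's story. Weather is "wet" with probability 0.3; the
# forecaster calls "wet" at the same rate but independently of the
# actual weather (i.e. no skill at all).
N = 120  # ten years of monthly forecasts
p_wet = 0.3
actual = [random.random() < p_wet for _ in range(N)]
forecast = [random.random() < p_wet for _ in range(N)]

hits = sum(a == f for a, f in zip(actual, forecast))

# Hit rate expected by pure chance: agreeing on "wet" or on "dry".
p_chance = p_wet ** 2 + (1 - p_wet) ** 2   # 0.58

# One-sided binomial p-value: probability of at least `hits` correct
# calls if the forecaster were only guessing.
p_value = sum(comb(N, k) * p_chance ** k * (1 - p_chance) ** (N - k)
              for k in range(hits, N + 1))

print(f"hit rate {hits / N:.2f} vs chance {p_chance:.2f}, p = {p_value:.2f}")
```

A respectable-looking hit rate near 58 percent is exactly what guessing delivers here, which is why a forecast has to be judged against the chance benchmark rather than against zero.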
To this one might also add some concerns about ideology and apologetics. Forecasting is a non-negligible part of the labour market for (mainstream) economists, and so, of course, those in the business do not want to admit that they are occupied with worthless things (not to mention how hard it would be to sell the product with that kind of frank truthfulness …). Governments, the finance sector and (central) banks also want to give the impression to customers and voters that they, so to speak, have the situation under control (telling people that next year’s X will be 3.048 % works wonders in that respect). Why else would anyone want to pay them or vote for them? These are surely not glamorous aspects of economics as a science, but as a scientist it would be unforgivably dishonest to pretend that economics doesn’t also perform an ideological function in society.
Oxford professor Simon Wren-Lewis had a post up some time ago commenting on the traction-gaining “attacks on mainstream economics”:
One frequent accusation … often repeated by heterodox economists, is that mainstream economics and neoliberal ideas are inextricably linked. Of course economics is used to support neoliberalism. Yet I find mainstream economics full of ideas and analysis that permits a wide ranging and deep critique of these same positions. The idea that the two live and die together is just silly.
Silly? Maybe. But maybe Wren-Lewis and other economists who want to enlighten themselves on the subject also should take a look at this video:
Or maybe read this essay, where I try to further analyze — much inspired by the works of Amartya Sen — what kind of philosophical-ideological-political-economic doctrine neoliberalism is, and why it so often comes naturally for mainstream neoclassical economists to embrace neoliberal ideals.
Or maybe — if you know some Swedish — you could take a look at this book on the connection between the dismal science and neoliberalism (sorry for the shameless self-promotion).
The gross substitution axiom assumes that if the demand for good x goes up, its relative price will rise, inducing demand to spill over to the now relatively cheaper substitute good y. For an economist to deny this ‘universal truth’ of gross substitutability between objects of demand is revolutionary heresy – and as in the days of the Inquisition, the modern-day College of Cardinals of mainstream economics destroys all non-believers, if not by burning them at the stake, then by banishing them from the mainstream professional journals. Yet in Keynes’s (1936, ch. 17) analysis ‘The Essential Properties of Interest and Money’ require that:
1. The elasticity of production of liquid assets including money is approximately zero. This means that private entrepreneurs cannot produce more of these assets by hiring more workers if the demand for liquid assets increases. In other words, liquid assets are not producible by private entrepreneurs’ hiring of additional workers; this means that money (and other liquid assets) do not grow on trees.
2. The elasticity of substitution between all liquid assets, including money (which are not reproducible by labour in the private sector) and producibles (in the private sector), is zero or negligible. Accordingly, when the price of money increases, people will not substitute the purchase of the products of industry for their demand for money for liquidity (savings) purposes.
These two elasticity properties that Keynes believed are essential to the concepts of money and liquidity mean that a basic axiom of Keynes’s logical framework is that non-producible assets that can be used to store savings are not gross substitutes for producible assets in savers’ portfolios. If this elasticity of substitution between liquid assets and the products of industry is significantly different from zero (if the gross substitution axiom is ubiquitously true), then even if savers attempt to use non-reproducible assets for storing their increments of wealth, this increase in demand will increase the price of non-producibles. This relative price rise in non-producibles will, under the gross substitution axiom, induce savers to substitute reproducible durables for non-producibles in their wealth holdings and therefore non-producibles will not be, in Hahn’s terminology, ‘ultimate resting places for savings’. The gross substitution axiom therefore restores Say’s Law and denies the logical possibility of involuntary unemployment.
Arrow: I think the idea that a society has to be responsible for all of its citizens, those who do well and those who do not, is really a precondition of a good society. Let me say that from the time I first understood economic principles, I was always concerned also that any system be operated on an efficient basis, which meant decentralization because knowledge is not concentrated anywhere …
On the other hand, markets are not, in my opinion, a full solution to any problem. The obvious problem they don’t meet is the concerns of the welfare of individuals who may get lost in the operation of the system — the distributional question. We’ve seen this growing as we go further and further toward a market ideology in the United States and the United Kingdom. We’ve seen a decline in the welfare of the working poor, leaving aside any other pathologies, just the working poor, a very distinct increase at the very top levels …
I think on the efficiency level, not only the distribution level, capitalism is a flawed system. It probably has the same virtues as Churchill attributed to democracy: It’s the worst system except for any other. And I think that’s right, but it cannot be thought that some unmitigated belief in free markets is a cure even from the efficiency point of view …
Region: Recent macroeconomic research in Minnesota — both at the Federal Reserve Bank and at the University of Minnesota — has been greatly influenced by your work in the 1950s in the theory of general equilibrium. At the time, did you think that your work would contribute to a better understanding of the macroeconomy?
Arrow: I did, but not in the way it’s turned out. The vision I had that wasn’t articulated in my articles exactly was that the macroeconomy was the disequilibrium phenomenon. The idea that we could interpret economic fluctuations as an equilibrium phenomenon was something that did not cross my mind. And I’m still not sure that the disequilibrium interpretation isn’t more appropriate …
I do think the interpretation of unemployment specifically is not well represented in the equilibrium models. I don’t believe that unemployment is all voluntary, by anticipation of future wage movements or this sort of thing. I know you can modify the models by taking into account the indivisibilities, but I don’t really think that people are voluntarily unemployed …
Region: So you’re not going to buy into the argument that these are just people in search?
Arrow: No, because I can’t see why search should vary that much …
Region: The Nobel Prize in Economics has recently been given to Robert E. Lucas Jr. for the influence that he has had in macroeconomics and, in particular, for his contributions to the theory of Rational Expectations … Do you think that the current theories of learning will prove more successful than the theory of Rational Expectations for certain questions? And if so, which questions?
Arrow: I think the answer is yes, that learning models will turn out to be more accurate. More useful is another question, because usefulness depends on tractability …
October ’87 is a wonderful example. Rational expectations would suggest that prices would change when news changes. In October ’87 there was no news that anybody could identify even in retrospect as relevant. There was a 20 percent drop in the day. Now that’s a kind of internal dynamics of the market. And part of it undoubtedly is that investors have a model of the future which says that if prices start falling they’re going to continue to fall for a while, and therefore one ought to get out of it now and then buy at a lower price later. So one rushes to get out. That’s oversimplifying it, but it’s built into the computer programs. Now that’s not based on a rational expectation.
It struck me when this thing started that it was perfectly obvious that prices were going to get back up again. I held on. It seemed to be quite obvious that they were going to come back because there was no reason why they shouldn’t. But the fact is that people didn’t behave that way. The fact is you have these periods of alternating volatility and lack of volatility. So it seems to me that at least as far as the financial markets are concerned, there is increasing evidence against rational expectations, even at the macro level. When you look at any experimental work not directly related to economics, but trying to test rational behavior in other ways, experiments have conspicuously failed to show rational behavior … Finally, there aren’t enough repetitions to justify rational expectations. The world is changing. We’re not really proceeding on a stationary basis.
Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments is therefore the best.
More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.
I would however rather argue that randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are impossible to maintain.
Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.
But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we are usually interested in is causal evidence in the real target system we happen to live in.
When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!
Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.
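The gap between “it worked there” and “it will work here” can be shown with a deliberately simple simulation (all effect sizes and population shares are invented for illustration). An ideal RCT recovers the average treatment effect in the trial population, but when individual effects are heterogeneous, a target population with a different composition can have an average effect of the opposite sign:

```python
import random

random.seed(1)

# Two subgroups with opposite individual-level treatment effects:
# the policy helps group A (+2.0) and hurts group B (-1.0).
def outcome(group, treated):
    effect = 2.0 if group == "A" else -1.0
    return random.gauss(0, 1) + (effect if treated else 0.0)

def rct_ate(share_a, n=100_000):
    """Average treatment effect estimated by an ideal randomized
    trial in a population with the given share of group A."""
    diffs = []
    for _ in range(n):
        g = "A" if random.random() < share_a else "B"
        diffs.append(outcome(g, True) - outcome(g, False))
    return sum(diffs) / n

# Trial population X: 70% group A -> positive average effect (about +1.1).
print(f"ATE in X: {rct_ate(0.70):+.2f}")
# Target population Y: 20% group A -> negative average effect (about -0.4).
print(f"ATE in Y: {rct_ate(0.20):+.2f}")
```

The trial in X is internally valid and still licenses nothing about Y: the “export-warrant” has to come from background knowledge about how the subgroup effects and population shares carry over, which is exactly what the randomization itself cannot supply.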
Like us, you want evidence that a policy will work here, where you are. Randomized controlled trials (RCTs) do not tell you that. They do not even tell you that a policy works. What they tell you is that a policy worked there, where the trial was carried out, in that population. Our argument is that the changes in tense – from “worked” to “work” – are not just a matter of grammatical detail. To move from one to the other requires hard intellectual and practical effort. The fact that it worked there is indeed fact. But for that fact to be evidence that it will work here, it needs to be relevant to that conclusion. To make RCTs relevant you need a lot more information and of a very different kind. What kind? That’s what this book is about.
[h/t Anders Ramsay]
A couple of years ago yours truly wrote a paper — What is (wrong) with economic theory? — in an attempt to articulate a methodological critique of the predominant model-building strategy in modern mainstream economics.
In this INET video, James K Galbraith, George Akerlof, Brad DeLong, and a couple of other renowned economists, tell us how they view models in economics and what characterizes really good ones.
For Solow, it is Milton Friedman who is the real Mephistopheles here. It is Friedman who over and over again would frame the issues as freedom vs. socialism, when actually the issue is “which of the defects of a ‘free’, unregulated economy should be repaired by regulation, subsidization, or taxation? Which… tolerated… because the best available fix would have even more costly side-effects?” It was Friedman whose “rhetoric… irrelevant or, worse, misleading, or, even worse, intentionally misleading… made… [the] policy discussion more difficult to have… [and] did the market economy a disservice.”
I disagree: I see Friedman and Hayek as being equally willing to call social democracy “socialism”, and equally likely to see it as corrosive of individual freedom.
But I see Friedman as being much more moderate than Hayek along a number of dimensions. First, Friedman is a true social and personal libertarian. Second, Friedman seems to me to have a more sophisticated view of social insurance. And Friedman has a belief in both democracy and education, plus a willingness to (sometimes) mark his beliefs to market – and these, I think, Hayek definitely lacked.
When stabilizing the growth of the money supply did not produce the smooth aggregate demand path that Friedman had expected, he changed his mind – and became a big advocate of quantitative easing. And Friedman’s view was explicitly that giving economic advice to Pinochet was analogous to giving economic advice to Honecker: trying to make an unpleasant regime less unpleasant for the people it ruled. Hayek, by contrast, had rather more affection for Pinochet than Plato did for Dionysius of Syracuse…