Commenting on the state of standard modern macroeconomics, Willem Buiter argues that neither New Classical nor New Keynesian microfounded DSGE macro models have helped us foresee, understand or craft solutions to the problems of today’s economies:
The Monetary Policy Committee of the Bank of England I was privileged to be a ‘founder’ external member of during the years 1997-2000 contained, like its successor vintages of external and executive members, quite a strong representation of academic economists and other professional economists with serious technical training and backgrounds. This turned out to be a severe handicap when the central bank had to switch gears and change from being an inflation-targeting central bank under conditions of orderly financial markets to a financial stability-oriented central bank under conditions of widespread market illiquidity and funding illiquidity. Indeed, the typical graduate macroeconomics and monetary economics training received at Anglo-American universities during the past 30 years or so, may have set back by decades serious investigations of aggregate economic behaviour and economic policy-relevant understanding. It was a privately and socially costly waste of time and other resources.
Most mainstream macroeconomic theoretical innovations since the 1970s … have turned out to be self-referential, inward-looking distractions at best. Research tended to be motivated by the internal logic, intellectual sunk capital and aesthetic puzzles of established research programmes rather than by a powerful desire to understand how the economy works …
Both the New Classical and New Keynesian complete markets macroeconomic theories not only did not allow questions about insolvency and illiquidity to be answered. They did not allow such questions to be asked …
Charles Goodhart, who was fortunate enough not to encounter complete markets macroeconomics and monetary economics during his impressionable, formative years, but only after he had acquired some intellectual immunity, once said of the Dynamic Stochastic General Equilibrium approach which for a while was the staple of central banks’ internal modelling: “It excludes everything I am interested in”. He was right. It excludes everything relevant to the pursuit of financial stability.
The Bank of England in 2007 faced the onset of the credit crunch with too much Robert Lucas, Michael Woodford and Robert Merton in its intellectual cupboard. A drastic but chaotic re-education took place and is continuing.
I believe that the Bank has by now shed the conventional wisdom of the typical macroeconomics training of the past few decades. In its place is an intellectual potpourri of factoids, partial theories, empirical regularities without firm theoretical foundations, hunches, intuitions and half-developed insights. It is not much, but knowing that you know nothing is the beginning of wisdom.
Reading Buiter’s article is certainly a very worrying confirmation of what Paul Romer wrote last week. Modern macroeconomics is becoming more and more a total waste of time.
But why are all these macro guys wasting their time and efforts on these models? Besides the usual aspiration of getting published, I think Frank Hahn may have given the truest answer back in 2005 when, interviewed on the occasion of his 80th birthday, he confessed that some economic assumptions didn’t really say anything about “what happens in the world,” but still had to be considered very good “because it allows us to get on this job.”
Hahn’s suggestion reminds me of an episode, twenty years ago, when Phil Mirowski was invited to give a speech on themes from his book More Heat than Light at my economics department in Lund, Sweden. All the mainstream neoclassical professors were there. Their theories were totally mangled and no one — absolutely no one — had anything to say even remotely reminiscent of a defense. Nonplussed, one of them finally asked in total desperation: “But what shall we do then?”
Yes indeed — what shall they do when their emperor has turned out to be naked?
Today the trend to greater equality of incomes which characterised the postwar period has been reversed. Inequality is now rising rapidly. Contrary to the rising-tide hypothesis, the rising tide has only lifted the large yachts, and many of the smaller boats have been left dashed on the rocks. This is partly because the extraordinary growth in top incomes has coincided with an economic slowdown.
The trickle-down notion— along with its theoretical justification, marginal productivity theory— needs urgent rethinking. That theory attempts both to explain inequality— why it occurs— and to justify it— why it would be beneficial for the economy as a whole. This essay looks critically at both claims. It argues in favour of alternative explanations of inequality, with particular reference to the theory of rent-seeking and to the influence of institutional and political factors, which have shaped labour markets and patterns of remuneration. And it shows that, far from being either necessary or good for economic growth, excessive inequality tends to lead to weaker economic performance. In light of this, it argues for a range of policies that would increase both equity and economic well-being.
Mainstream economics textbooks usually refer to the interrelationship between technological development and education as the main causal force behind increased inequality. If the educational system (supply) develops at the same pace as technology (demand), there should be no increase, ceteris paribus, in the ratio between high-income (highly educated) groups and low-income (low education) groups. In the race between technology and education, the proliferation of skill-biased technological change has, however, allegedly increased the premium for the highly educated group.
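The textbook ‘race’ can be put in a few lines of code. In the canonical CES reduced form, the log skill premium rises whenever relative demand for skilled labour (driven by skill-biased technology) grows faster than the relative supply of educated workers. The sketch below is purely illustrative — the function name, the parameter value for the elasticity of substitution, and the numbers fed in are my own assumptions, not anyone’s estimates:

```python
import math

def log_skill_premium(rel_demand, rel_supply, sigma=1.4):
    """Textbook CES reduced form for the skill premium:
    ln(w_skilled / w_unskilled) = (ln D - ln(S/U)) / sigma,
    where D indexes relative demand for skilled labour, S/U is the
    relative supply of skilled workers, and sigma is the elasticity
    of substitution between the two (illustrative value)."""
    return (math.log(rel_demand) - math.log(rel_supply)) / sigma

# Demand (skill-biased technology) outrunning the supply of graduates
# raises the premium; supply outrunning demand compresses it.
print(log_skill_premium(rel_demand=1.5, rel_supply=1.2))  # positive
print(log_skill_premium(rel_demand=1.0, rel_supply=1.5))  # negative
```

On this account the whole distributional story reduces to two growth rates and one elasticity — which is exactly why, as argued below, it cannot explain an increase concentrated in the top 1%.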
Another prominent explanation is that globalization – in accordance with Ricardo’s theory of comparative advantage and the Wicksell-Heckscher-Ohlin-Stolper-Samuelson factor price theory – has benefited capital in the advanced countries and labour in the developing countries. The problem with these theories is that they explicitly assume full employment and international immobility of the factors of production. Globalization means more than anything else that capital and labour have to a large extent become mobile over country borders. These mainstream trade theories are really not applicable in the world of today, and they are certainly not able to explain the international trade pattern that has developed during the last decades. Although it seems as though capital in the developed countries has benefited from globalization, it is difficult to detect a similar positive effect on workers in the developing countries.
There are, however, also some other quite obvious problems with these kinds of inequality explanations. The World Top Incomes Database shows that the increase in incomes has been concentrated especially in the top 1%. If education were the main reason behind the increasing income gap, one would expect a much broader group of people in the upper echelons of the distribution to have taken part in this increase. It is dubious, to say the least, to try to explain, for example, the high wages in the finance sector with a marginal productivity argument. High-end wages seem to be more a result of pure luck or membership of the same ‘club’ as those who decide on the wages and bonuses, than of ‘marginal productivity.’
Mainstream economics, with its technologically determined marginal productivity theory, seems to be difficult to reconcile with reality. Although card-carrying neoclassical apologists like Greg Mankiw want to recall John Bates Clark’s (1899) argument that marginal productivity results in an ethically just distribution, that is not something – even if it were true – we could confirm empirically, since it is impossible realiter to separate out what is the marginal contribution of any factor of production. The hypothetical ceteris paribus addition of only one factor in a production process is often heard of in textbooks, but never seen in reality.
When reading mainstream economists like Mankiw who argue for the ‘just desert’ of the 0.1 %, one gets a strong feeling that they are ultimately trying to argue that a market economy is some kind of moral free zone where, if left undisturbed, people get what they ‘deserve.’ To most social scientists that probably smacks more of being an evasive action trying to explain away a very disturbing structural ‘regime shift’ that has taken place in our societies. A shift that has very little to do with ‘stochastic returns to education.’ Those were in place also 30 or 40 years ago. At that time they meant that perhaps a top corporate manager earned 10–20 times more than ‘ordinary’ people earned. Today it means that they earn 100–200 times more than ‘ordinary’ people earn. A question of education? Hardly. It is probably more a question of greed and a lost sense of a common project of building a sustainable society.
Since the race between technology and education does not seem to explain the new growing income gap – and even if technological change has become more and more capital augmenting, it is also quite clear that not only the wages of low-skilled workers have fallen, but also the overall wage share – mainstream economists increasingly refer to ‘meritocratic extremism,’ ‘winners-take-all markets’ and ‘super star-theories’ for explanation. But this is also highly questionable.
Fans may want to pay extra to watch top-ranked athletes or movie stars performing on television and film, but corporate managers are hardly the stuff that people’s dreams are made of – and they seldom appear on television and in the movie theaters.
Everyone may prefer to employ the best corporate manager there is, but a corporate manager, unlike a movie star, can only provide his services to a limited number of customers. From the perspective of ‘super-star theories,’ a good corporate manager should only earn marginally more than an average corporate manager. Yet the average earnings of corporate managers of the 50 biggest Swedish companies today are equivalent to the wages of 46 blue-collar workers.
It is difficult to see the takeoff of the top executives as anything else but a reward for being a member of the same illustrious club. That they should be equivalent to indispensable and fair productive contributions – marginal products – is straining credulity too far. That so many corporate managers and top executives make fantastic earnings today, is strong evidence the theory is patently wrong and basically functions as a legitimizing device of indefensible and growing inequalities.
No one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand bears but little resemblance to actual reality. Wealth and income distribution, both individual and functional, in a market society is to an overwhelmingly high degree influenced by institutionalized political and economic norms and power relations, things that have relatively little to do with marginal productivity in complete and profit-maximizing competitive market models – not to mention how extremely difficult, if not outright impossible, it is to empirically disentangle and measure different individuals’ contributions in the typical teamwork production that characterizes modern societies; or, especially when it comes to ‘capital,’ what it is supposed to mean and how to measure it. Remunerations do not necessarily correspond to any marginal product of different factors of production – or to ‘compensating differentials’ due to non-monetary characteristics of different jobs, natural ability, effort or chance.
Put simply – highly paid workers and corporate managers are not always highly productive workers and corporate managers, and less highly paid workers and corporate managers are not always less productive. History has over and over again disconfirmed the close connection between productivity and remuneration postulated in mainstream income distribution theory.
Neoclassical marginal productivity theory is a collapsed theory from both a historical and a theoretical point of view, as shown already by Sraffa in the 1920s, and in the Cambridge capital controversy in the 1960s and 1970s. As Joan Robinson wrote in 1953:
The production function has been a powerful instrument of miseducation. The student of economic theory is taught to write Q = f (L, K) where L is a quantity of labor, K a quantity of capital and Q a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labor; he is told something about the index-number problem in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units K is measured. Before he ever does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.
It’s great that Stiglitz has joined those of us who for decades have criticised marginal productivity theory. Institutional, political and social factors have an overwhelming influence on wages and the relative shares of labour and capital.
When a theory is impossible to reconcile with facts there is only one thing to do — scrap it!
The one reaction that puzzles me goes something like this: “Romer’s critique of RBC models is dated; we’ve known all along that those models make no sense.”
If we know that the RBC model makes no sense, why was it left as the core of the DSGE model? Those phlogiston shocks are still there. Now they are mixed together with a bunch of other made-up shocks.
Moreover, I see no reason to be confident about what we will learn if some econometrician adds sticky prices and then runs a horse race to see if the shocks are more or less important than the sticky prices. The essence of the identification problem is that the data do not tell you who wins this kind of race. The econometrician picks the winner.
Those of us in the economics community who have been impolite enough to dare questioning the preferred methods and models applied in macroeconomics are as a rule met with disapproval. Although people seem to get very agitated and upset by the critique, defenders of ‘received theory’ always say that the critique is ‘nothing new,’ that they have always been ‘well aware’ of the problems, and so on, and so on.
So, for the benefit of all macroeconomists who, like Simon Wren-Lewis, don’t want to be disturbed in their doings — eminent mathematical statistician David Freedman has put together a very practical list of vacuous responses to criticism that can be freely used to save your peace of mind:
We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?
As yours truly wrote last week, there has been much discussion going on in the economics academia on Paul Romer’s recent critique of ‘modern’ macroeconomics.
Now Oxford professor Simon Wren-Lewis has a blog post up arguing that Romer’s critique is
unfair and wide of the mark in places … Paul’s discussion of real effects from monetary policy, and the insistence on productivity shocks as business cycle drivers, is pretty dated … Yet it took a long time for RBC models to be replaced by New Keynesian models, and you will still see RBC models around. Elements of the New Classical counter revolution of the 1980s still persist in some places … The impression Paul Romer’s article gives, might just have been true in a few years in the 1980s before New Keynesian theory arrived. Since the 1990s New Keynesian theory is now the orthodoxy, and is used by central banks around the world.
Now this rather unsuccessful attempt to disarm the real force of Romer’s critique should come as no surprise for anyone who has been following Wren-Lewis’ writings over the years.
In a recent paper — Unravelling the New Classical Counter Revolution — Wren-Lewis writes approvingly about all the ‘impressive’ theoretical insights New Classical economics has brought to macroeconomics:
The theoretical insights that New Classical economists brought to the table were impressive: besides rational expectations, there was a rationalisation of permanent income and the life-cycle models using intertemporal optimisation, time inconsistency and more …
A new revolution, that replaces current methods with older ways of doing macroeconomics, seems unlikely and I would argue is also undesirable. The discipline does not need to advance one revolution at a time …
To understand modern academic macroeconomics, it is no longer essential that you start with The General Theory. It is far more important that you read Lucas and Sargent (1979), which is a central text in what is generally known as the New Classical Counter Revolution (NCCR). That gave birth to DSGE models and the microfoundations programme, which are central to mainstream macroeconomics today …
There’s something that just does not sit very well with this picture of modern macroeconomics.
‘Read Lucas and Sargent (1979)’. Yes, why not. That is exactly what Romer did!
One who has also read it is Wren-Lewis’s ‘New Keynesian’ buddy Paul Krugman. And this is what he has to say on that reading experience:
Lucas and his school … went even further down the equilibrium rabbit hole, notably with real business cycle theory. And here is where the kind of willful obscurantism Romer is after became the norm. I wrote last year about the remarkable failure of RBC theorists ever to offer an intuitive explanation of how their models work, which I at least hinted was willful:
“But the RBC theorists never seem to go there; it’s right into calibration and statistical moments, with never a break for intuition. And because they never do the simple version, they don’t realize (or at any rate don’t admit to themselves) how fundamentally silly the whole thing sounds, how much it’s at odds with lived experience.”
And so has Truman F. Bewley:
Lucas and Rapping (1969) claim that cyclical increases in unemployment occur when workers quit their jobs because wages or salaries fall below expectations …
According to this explanation, when wages are unusually low, people become unemployed in order to enjoy free time, substituting leisure for income at a time when they lose the least income …
According to the theory, quits into unemployment increase during recessions, whereas historically quits decrease sharply and roughly half of unemployed workers become jobless because they are laid off … During the recession I studied, people were even afraid to change jobs because new ones might prove unstable and lead to unemployment …
If wages and salaries hardly ever fall, the intertemporal substitution theory is widely applicable only if the unemployed prefer jobless leisure to continued employment at their old pay. However, the attitude and circumstances of the unemployed are not consistent with their having made this choice …
In real business cycle theory, unemployment is interpreted as leisure optimally selected by workers, as in the Lucas-Rapping model. It has proved difficult to construct business cycle models consistent with this assumption and with real wage fluctuations as small as they are in reality, relative to fluctuations in employment.
This is, of course, only what you would expect of New Classical Chicago economists.
So, what’s the problem?
The problem is that sadly enough this extraterrestrial view of unemployment is actually shared by Wren-Lewis and other so-called ‘New Keynesians’ — a school whose microfounded dynamic stochastic general equilibrium models cannot even incorporate such a basic fact of reality as involuntary unemployment!
Of course, working with microfounded representative agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility.
In the basic DSGE models used by most ‘New Keynesians’, the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holding and consumption over time. Most importantly – if the real wage somehow deviates from its ‘equilibrium value,’ the representative agent adjusts her labour supply, so that when the real wage is higher than its ‘equilibrium value,’ labour supply is increased, and when the real wage is below its ‘equilibrium value,’ labour supply is decreased.
In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
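The mechanism can be made concrete with a minimal sketch. Here I assume quasi-linear preferences u(c, h) = c − h^(1+1/η)/(1+1/η) with budget c = w·h — purely illustrative functional forms and parameter values, not any particular published model. The first-order condition w = h^(1/η) gives hours h = w^η, so every fall in employment is, by construction, an optimal leisure choice:

```python
def optimal_hours(real_wage, frisch_elasticity=0.5):
    """Representative agent with quasi-linear utility
    u(c, h) = c - h**(1 + 1/eta) / (1 + 1/eta) and budget c = w*h.
    The first-order condition w = h**(1/eta) yields h = w**eta:
    hours rise when the real wage is above its 'equilibrium value'
    and fall when it is below. Parameter value is illustrative."""
    return real_wage ** frisch_elasticity

# Hours move monotonically with the real wage; any drop in measured
# employment is the agent voluntarily substituting leisure for income.
for w in (0.8, 1.0, 1.2):
    print(w, optimal_hours(w))
```

Nothing in this setup can generate a worker who wants a job at the going wage and cannot find one — which is precisely the point of the critique above.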
To Wren-Lewis it seems as though the ‘New Keynesian’ acceptance of rational expectations, representative agents and microfounded DSGE models is something more or less self-evidently good. Not all economists (yours truly included) share that view:
While one can understand that some of the elements in DSGE models seem to appeal to Keynesians at first sight, after closer examination, these models are in fundamental contradiction to Post-Keynesian and even traditional Keynesian thinking. The DSGE model is a model in which output is determined in the labour market as in New Classical models and in which aggregate demand plays only a very secondary role, even in the short run.
In addition, given the fundamental philosophical problems presented for the use of DSGE models for policy simulation, namely the fact that a number of parameters used have completely implausible magnitudes and that the degree of freedom for different parameters is so large that DSGE models with fundamentally different parametrization (and therefore different policy conclusions) equally well produce time series which fit the real-world data, it is also very hard to understand why DSGE models have reached such a prominence in economic science in general.
Neither New Classical nor ‘New Keynesian’ microfounded DSGE macro models have helped us foresee, understand or craft solutions to the problems of today’s economies.
Wren-Lewis ultimately falls back on the same kind of models that he criticizes, and it would surely be interesting to once hear him explain how silly assumptions like ‘hyperrationality’ and ‘representative agents’ help him work out the fundamentals of a truly relevant macroeconomic analysis.
In a recent paper on modern macroeconomics, another of Wren-Lewis’s ‘New Keynesian’ buddies, macroeconomist Greg Mankiw, wrote:
The real world of macroeconomic policymaking can be disheartening for those of us who have spent most of our careers in academia. The sad truth is that the macroeconomic research of the past three decades has had only minor impact on the practical analysis of monetary or fiscal policy. The explanation is not that economists in the policy arena are ignorant of recent developments. Quite the contrary: The staff of the Federal Reserve includes some of the best young Ph.D.’s, and the Council of Economic Advisers under both Democratic and Republican administrations draws talent from the nation’s top research universities. The fact that modern macroeconomic research is not widely used in practical policymaking is prima facie evidence that it is of little use for this purpose. The research may have been successful as a matter of science, but it has not contributed significantly to macroeconomic engineering.
So, then what is the raison d’être of macroeconomics, if it has nothing to say about the real world and the economic problems out there?
If macroeconomic models – no matter what ilk – assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to the real world are obviously non-justifiable. Macroeconomic theorists – regardless of being ‘New Monetarist’, ‘New Classical’ or ‘New Keynesian’ – ought to do some ontological reflection and heed Keynes’ warnings on using thought-models in economics:
The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error.
So, these are some of my arguments for why I think that Simon Wren-Lewis ought to be even more critical of the present state of macroeconomics — including ‘New Keynesian’ macroeconomics — than he is. Trying to represent real-world target systems with models flagrantly at odds with reality is futile. And whether those models are New Classical or ‘New Keynesian’ makes very little difference.
Fortunately — when you’ve got tired of the kind of macroeconomic apologetics produced by ‘New Keynesian’ macroeconomists like Wren-Lewis, Mankiw, and Krugman, there still are some real Keynesian macroeconomists to read. One of them — Axel Leijonhufvud — writes:
For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it …
I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes … are about how to view our responsibilities and how to approach our subject.
No matter how brilliantly silly the ‘New Keynesian’ DSGE models that central banks, Wren-Lewis, and his buddies come up with, they do not help us work on the fundamental issues of modern economies. Using that kind of model only confirms Robert Gordon‘s dictum that today
rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant.
Tired of the idea of an infallible mainstream neoclassical economics and its perpetuation of spoon-fed orthodoxy, yours truly launched this blog five years ago. The number of visitors has increased steadily, and with my posts now having been viewed more than 3 million times, I have to admit to still being — given the rather wonkish character of the blog, with posts mostly on economic theory, statistics, econometrics, theory of science and methodology — rather gobsmacked that so many are interested and take the time to read the often rather geeky stuff posted here.
In the 21st century the blogosphere has without any doubt become one of the greatest channels for dispersing new knowledge and information. As a blogger I can specialize in those particular topics an economist and critical realist professor of social science happens to have both deep knowledge of and interest in. That, of course, also means — in the modern long-tail world — being able to target a segment of readers with much narrower and more specialized interests than newspapers and magazines as a rule could aim for — and still attract quite a lot of readers.
In 2007 Thomas Sargent gave a graduation speech at the University of California, Berkeley, giving the grads “a short list of valuable lessons that our beautiful subject teaches”:
1. Many things that are desirable are not feasible.
2. Individuals and communities face trade-offs.
3. Other people have more information about their abilities, their efforts, and their preferences than you do.
4. Everyone responds to incentives, including people you want to help. That is why social safety nets don’t always end up working as intended.
5. There are trade-offs between equality and efficiency.
6. In an equilibrium of a game or an economy, people are satisfied with their choices. That is why it is difficult for well-meaning outsiders to change things for better or worse.
7. In the future, you too will respond to incentives. That is why there are some promises that you’d like to make but can’t. No one will believe those promises because they know that later it will not be in your interest to deliver. The lesson here is this: before you make a promise, think about whether you will want to keep it if and when your circumstances change. This is how you earn a reputation.
8. Governments and voters respond to incentives too. That is why governments sometimes default on loans and other promises that they have made.
9. It is feasible for one generation to shift costs to subsequent ones. That is what national government debts and the U.S. social security system do (but not the social security system of Singapore).
10. When a government spends, its citizens eventually pay, either today or tomorrow, either through explicit taxes or implicit ones like inflation.
11. Most people want other people to pay for public goods and government transfers (especially transfers to themselves).
12. Because market prices aggregate traders’ information, it is difficult to forecast stock prices and interest rates and exchange rates.
Reading through this list of “valuable lessons” things suddenly fall in place.
This kind of self-righteous neoliberal drivel has again and again been praised and prized. And not only by econ bloggers and right-wing think-tanks.
Out of the seventy-six laureates that have been awarded ‘The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,’ twenty-eight have been affiliated to The University of Chicago. The world is really a small place when it comes to economics …
Some of the economists who agree about the state of macro in private conversations will not say so in public. This is consistent with the explanation based on different prices. Yet some of them also discourage me from disagreeing openly, which calls for some other explanation.
They may feel that they will pay a price too if they have to witness the unpleasant reaction that criticism of a revered leader provokes. There is no question that the emotions are intense. After I criticized a paper by Lucas, I had a chance encounter with someone who was so angry that at first he could not speak. Eventually, he told me, “You are killing Bob.”
But my sense is that the problem goes even deeper than avoidance. Several economists I know seem to have assimilated a norm that the post-real macroeconomists actively promote – that it is an extremely serious violation of some honor code for anyone to criticize openly a revered authority figure – and that neither facts that are false, nor predictions that are wrong, nor models that make no sense matter enough to worry about …
Science, and all the other research fields spawned by the enlightenment, survive by “turning the dial to zero” on these innate moral senses. Members cultivate the conviction that nothing is sacred and that authority should always be challenged … By rejecting any reliance on central authority, the members of a research field can coordinate their independent efforts only by maintaining an unwavering commitment to the pursuit of truth, established imperfectly, via the rough consensus that emerges from many independent assessments of publicly disclosed facts and logic; assessments that are made by people who honor clearly stated disagreement, who accept their own fallibility, and relish the chance to subvert any claim of authority, not to mention any claim of infallibility.
This is part of why yours truly appreciates Romer’s article, and even finds it ‘brave.’ Everyone knows that what he says is true, but few have the courage to openly speak and write about it. The ‘honour code’ in academia certainly needs revision.
The excessive formalization and mathematization of economics since WW II has made mainstream — neoclassical — economists more or less obsessed with formal, deductive-axiomatic models. Confronted with the critique that they do not solve real problems, they often react as Saint-Exupéry’s Great Geographer, who, in response to the questions posed by The Little Prince, says that he is too occupied with his scientific work to be able to say anything about reality. Confronted with economic theory’s lack of relevance and inability to tackle real problems, one retreats into the wonderful world of economic models. While the economic problems in the world around us steadily increase, one rather happily plays along with the latest toys in the mathematical toolbox.
Modern mainstream economics sure is very rigorous — but if it’s rigorously wrong, who cares?
Instead of making formal logical argumentation based on deductive-axiomatic models the message, I think we are better served by economists who more than anything else try to contribute to solving real problems. And then the motto of John Maynard Keynes is more valid than ever:
It is better to be vaguely right than precisely wrong
Peter Dorman is one of those rare economists that it is always a pleasure to read. Here his critical eye is focussed on economists’ infatuation with homogeneity and averages:
You may feel a gnawing discomfort with the way economists use statistical techniques. Ostensibly they focus on the differences between people, countries or whatever the units of observation happen to be, but they nevertheless seem to treat the population of cases as interchangeable — as homogeneous on some fundamental level. As if people were replicants.
You are right, and this brief talk is about why and how you’re right, and what this implies for the questions people bring to statistical analysis and the methods they use.
Our point of departure will be a simple multiple regression model of the form
y = β0 + β1 x1 + β2 x2 + … + ε
where y is an outcome variable, x1 is an explanatory variable of interest, the other x’s are control variables, the β’s are coefficients on these variables (or a constant term, in the case of β0), and ε is a vector of residuals. We could apply the same analysis to more complex functional forms, and we would see the same things, so let’s stay simple.
What question does this model answer? It tells us the average effect that variations in x1 have on the outcome y, controlling for the effects of other explanatory variables. Repeat: it’s the average effect of x1 on y.
This model is applied to a sample of observations. What is assumed to be the same for these observations? (1) The outcome variable y is meaningful for all of them. (2) The list of potential explanatory factors, the x’s, is the same for all. (3) The effects these factors have on the outcome, the β’s, are the same for all. (4) The proper functional form that best explains the outcome is the same for all. In these four respects all units of observation are regarded as essentially the same.
Now what is permitted to differ across these observations? Simply the values of the x’s and therefore the values of y and ε. That’s it.
Thus measures of the difference between individual people or other objects of study are purchased at the cost of immense assumptions of sameness. It is these assumptions that both reflect and justify the search for average effects …
In the end, statistical analysis is about imposing a common structure on observations in order to understand differentiation. Any structure requires assuming some kinds of sameness, but some approaches make much more sweeping assumptions than others. An unfortunate symbiosis has arisen in economics between statistical methods that excessively rule out diversity and statistical questions that center on average (non-diverse) effects. This is damaging in many contexts, including hypothesis testing, program evaluation, forecasting—you name it …
The first step toward recovery is admitting you have a problem. Every statistical analyst should come clean about what assumptions of homogeneity are being made, in light of their plausibility and the opportunities that exist for relaxing them.
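Dorman’s point about average effects is easy to make concrete with a small simulation. The sketch below (my illustration, not from Dorman’s talk; all numbers are made up) constructs two equally sized subgroups whose true effects of x1 on y are +2 and −2. A pooled regression dutifully reports the “average effect,” roughly zero — a number that describes neither group.

```python
import random
import statistics

random.seed(0)

def ols_slope(xs, ys):
    """Simple-regression slope estimate: cov(x, y) / var(x)."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Two equally sized subpopulations with opposite true effects of x1 on y.
n = 10_000
groups, xs, ys = [], [], []
for i in range(n):
    g = i % 2                        # 0 = group A, 1 = group B
    beta = 2.0 if g == 0 else -2.0   # heterogeneous true coefficients
    x = random.gauss(0, 1)
    y = beta * x + random.gauss(0, 1)
    groups.append(g); xs.append(x); ys.append(y)

xa = [x for g, x in zip(groups, xs) if g == 0]
ya = [y for g, y in zip(groups, ys) if g == 0]
xb = [x for g, x in zip(groups, xs) if g == 1]
yb = [y for g, y in zip(groups, ys) if g == 1]

print(f"pooled slope:  {ols_slope(xs, ys):+.2f}")   # near 0: the 'average effect'
print(f"group A slope: {ols_slope(xa, ya):+.2f}")   # near +2
print(f"group B slope: {ols_slope(xb, yb):+.2f}")   # near -2
```

The pooled estimate is perfectly “correct” as an average, yet it is wrong for every single observation — exactly the homogeneity assumption (3) that Dorman asks analysts to come clean about.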
Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable, in the sense that they do not change when we “export” them to our “target systems”, we have to show that they do not hold only under ceteris paribus conditions, since in that case they are a fortiori of only limited value for our understanding, explanations or predictions of real economic systems.
Our admiration for technical virtuosity should not blind us to the fact that we have to take a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to the causal processes lying behind events and disclose the causal forces behind what appear to be simple facts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were considered can hence never be guaranteed to be more than potential causes, not real causes.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models, and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one has, however, to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most contemporary endeavours of mainstream economic theoretical modelling – rather useless.
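The exportability problem can likewise be illustrated with a toy simulation (again my own hypothetical sketch, with invented numbers): fit a fixed-parameter model in one “spatio-temporal context,” then apply the estimated parameters to a target system in which the underlying causal relation has drifted. The estimates are fine where they were made and fail badly after the bridging.

```python
import random
import statistics

random.seed(1)

def ols(xs, ys):
    """Intercept and slope of a simple regression, via cov/var."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

def rmse(model, xs, ys):
    """Root-mean-square prediction error of (intercept, slope) on data."""
    a, b = model
    return statistics.fmean([(y - (a + b * x)) ** 2
                             for x, y in zip(xs, ys)]) ** 0.5

# Estimation context: y = 1 + 2x + small noise.
x1 = [random.gauss(0, 1) for _ in range(2_000)]
y1 = [1 + 2 * x + random.gauss(0, 0.5) for x in x1]
model = ols(x1, y1)

# Target system: same observables, but the causal parameter has drifted to -1.
x2 = [random.gauss(0, 1) for _ in range(2_000)]
y2 = [1 - 1 * x + random.gauss(0, 0.5) for x in x2]

print(f"in-sample RMSE:      {rmse(model, x1, y1):.2f}")   # ~0.5
print(f"out-of-context RMSE: {rmse(model, x2, y2):.2f}")   # ~3: the exported
                                                           # parameter fails
```

Nothing in the estimation-context data warns of the breakdown; only the (unwarranted) assumption that the parameter is stable and invariant licenses the export.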
Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.
Sam L. Savage The Flaw of Averages