Almost a century and a half after Léon Walras founded neoclassical general equilibrium theory, economists still have not been able to show that markets move economies to equilibria. What we do know is that — under very restrictive assumptions — unique Pareto-efficient equilibria do exist.
But what good does that do? As long as we cannot show, except under exceedingly unrealistic assumptions, that there are convincing reasons to suppose there are forces that lead economies to equilibria, the value of general equilibrium theory is nil. As long as we cannot demonstrate that forces operating under reasonable, relevant and at least mildly realistic conditions actually move markets to equilibria, there is no sustainable reason for anyone to pay any attention to this theory. A stability that can only be proved by assuming “Santa Claus” conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids.
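The stability problem alluded to here can be stated in one line. In the textbook Walrasian tâtonnement story – the standard formalization of the “forces” in question – the auctioneer adjusts each price in the direction of its excess demand:

```latex
\dot{p}_k = \lambda_k \, z_k(p), \qquad \lambda_k > 0, \quad k = 1,\dots,n,
```

where \(z_k(p)\) is aggregate excess demand for good \(k\). Global convergence of this process to an equilibrium has only been proved under special conditions such as gross substitutability, and Scarf (1960) constructed simple three-good exchange economies in which the process cycles forever without ever reaching equilibrium.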
Continuing to model a world full of agents behaving as economists — “often wrong, but never uncertain” — while still not being able to show that the system under reasonable assumptions converges to equilibrium (or simply assuming the problem away) is a gross misallocation of intellectual resources and time.
In case you think this verdict is only a heterodox idiosyncrasy, here’s what one of the world’s greatest microeconomists — Alan Kirman — writes in his thought-provoking paper The intrinsic limits of modern economic theory:
If one maintains the fundamentally individualistic approach to constructing economic models no amount of attention to the walls will prevent the citadel from becoming empty. Empty in the sense that one cannot expect it to house the elements of a scientific theory, one capable of producing empirically falsifiable propositions …
Starting from ‘badly behaved’ individuals, we arrive at a situation in which not only is aggregate demand a nice function but, by a result of Debreu, equilibrium will be ‘locally unique’. Whilst this means that at least there is some hope for local stability, the real question is, can we hope to proceed and obtain global uniqueness and stability?
The unfortunate answer is a categorical no! [The results of Sonnenschein (1972), Debreu (1974), Mantel (1976) and Mas-Colell (1985)] show clearly why any hope for uniqueness or stability must be unfounded … There is no hope that making the distribution of preferences or income ‘not too dispersed’ or ‘single peaked’ will help us to avoid the fundamental problem …
The problem seems to be embodied in what is an essential feature of a centuries-long tradition in economics, that of treating individuals as acting independently of each other …
To argue in this way suggests … that once the appropriate signals are given, individuals behave in isolation and the result of their behaviour may simply be added together …
The idea that we should start at the level of the isolated individual is one which we may well have to abandon … we should be honest from the outset and assert simply that by assumption we postulate that each sector of the economy behaves as one individual and not claim any spurious microjustification …
Economists therefore should not continue to make strong assertions about this behaviour based on so-called general equilibrium models which are, in reality, no more than special examples with no basis in economic theory as it stands.
Getting around Sonnenschein-Mantel-Debreu by using representative agents may be – from a purely formalistic point of view – very expedient. But – from a scientific point of view – is it relevant and realistic? No way! Although dressed up as a representative agent, the emperor is still naked.
The purported strength of New Classical and “New Keynesian” macroeconomics is that it has firm anchorage in preference-based microeconomics, and especially in the decisions taken by intertemporal utility-maximizing, forward-looking individuals.
To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics has to have microfoundations – without ever presenting either ontological or epistemological justifications for this claim – has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if – after having swallowed the sour grapes of the Sonnenschein-Mantel-Debreu theorem – these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world (how that could even be conceivable is beyond imagination, given that the ongoing debate on modelling in economics, if anything, shows that not even we, the economists, can come to agreement on a common model).
Following the greatest economic depression since the 1930s, the grand old man of modern economic growth theory, Nobel laureate Robert Solow, on July 20, 2010, gave a prepared statement on “Building a Science of Economics for the Real World” for a hearing in the U.S. Congress. According to Solow, modern macroeconomics has not only failed at solving present economic and financial problems, but is “bound” to fail. Building dynamic stochastic general equilibrium (DSGE) models on “assuming the economy populated by a representative agent” – consisting of “one single combination worker-owner-consumer-everything-else who plans ahead carefully and lives forever” – does not pass “the smell test: does this really make sense?” One cannot but concur in Solow’s surmise that a thoughtful person “faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.”
This wasn’t the first time Solow had manifested a deeply felt uneasiness about the “development” of mainstream macroeconomics. Already in his little masterpiece from 2003 – Dumb and dumber in macroeconomics – he says it all:
So how did macroeconomics arrive at its current state?
The original impulse to look for better or more explicit micro foundations was probably reasonable. What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.
How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up? My impression is that this approach (which seems now to be the mainstream, and certainly dominates the journals, if not the workaday world of macroeconomics) has had no empirical success; but that is not the point here. I start from the presumption that we want macroeconomics to account for the occasional aggregative pathologies that beset modern capitalist economies, like recessions, intervals of stagnation, inflation, “stagflation,” not to mention negative pathologies like unusually good times. A model that rules out pathologies by definition is unlikely to help. It is always possible to claim that those “pathologies” are delusions, and the economy is merely adjusting optimally to some exogenous shock. But why should reasonable people accept this? …
What is needed for a better macroeconomics? [S]ome of the gross implausibilities … need to be eliminated. The clearest candidate is the representative agent. Heterogeneity is the essence of a modern economy. In real life we worry about the relations between managers and shareowners, between banks and their borrowers, between workers and employers, between venture capitalists and entrepreneurs, you name it. We worry about those interfaces because they can and do go wrong, with likely macroeconomic consequences. We know for a fact that heterogeneous agents have different and sometimes conflicting goals, different information, different capacities to process it, different expectations, different beliefs about how the economy works. Representative-agent models exclude all this landscape, though it needs to be abstracted and included in macro-models.
I also doubt that universal rational expectations provide a useful framework for macroeconomics …
Now here is a peculiar thing. When I was in advanced middle age, I suddenly woke up to the fact that my colleagues in macroeconomics, the ones I most admired, thought that the fundamental problem of macro theory was to understand how nominal events could have real consequences. This is just a way of stating some puzzle or puzzles about the sources for sticky wages and prices. This struck me as peculiar in two ways.
First of all, when I was even younger, nobody thought this was a puzzle. You only had to look around you to stumble on a hundred different reasons why various prices and factor prices should be much less than perfectly flexible. I once wrote, archly I admit, that the world has its reasons for not being Walrasian. Of course I soon realized that what macroeconomists wanted was a formal account of price stickiness that would fit comfortably into rational, optimizing models. OK, that is a harmless enough activity, especially if it is not taken too seriously. But price and wage stickiness themselves are not a major intellectual puzzle unless you insist on making them one.
“Sorta-kinda New Keynesian” economist Paul Krugman has now learned from Francesco Saraceno — who links to yours truly — that “some people are attacking” him for “defending an economic orthodoxy that has failed.” Krugman writes:
It’s kind of an odd place to find myself, given how critical I’ve been of the way the economics profession has dealt with the crisis. But it’s not entirely unfair: I am quite skeptical of people whose response to the sorry state of affairs is to declare that what we need is a whole new field …
My answer, to put it in technical terms, is “Well, duh.” Maybe grad students at some departments, who are several generations into the law of diminishing disciples, really don’t know that rational behavior is at best a useful fiction, that markets aren’t perfect, etc, etc …
The question is what you do with this insight.
[W]hat I do … plus many others does, is a more modest, more eclectic form of analysis. You use maximization and equilibrium where it seems reasonably consistent with reality, because of its clarifying power, but you introduce ad hoc deviations where experience seems to demand them — downward rigidity of wages, balance-sheet constraints, bubbles (which are hard to predict, but you can say a lot about their consequences).
You may say that what we need is reconstruction from the ground up — an economics with no vestige of equilibrium analysis. Well, show me some results. As it happens, the hybrid, eclectic approach I’ve just described has done pretty well in this crisis, so you had better show me some really superior results before it gets thrown out the window.
Does this mean that nothing should change in the way we teach economics? By no means — it’s quite clear that the teaching of macroeconomics has gone seriously astray. As Saraceno says, the simple models that have proved so useful since 2008 are by and large taught only at the undergrad level — they’re treated as too simple, too ad hoc, whatever, to make it into the grad courses even at places that aren’t very ideological.
Let me just start with an observation on Krugman’s allusion (“simple models”) to IS-LM. This, of course, comes as no surprise, since those of us who have followed Krugman’s writings over the years know that he is very fond of referring to and defending the old and dear IS-LM model.
What is perhaps less well-known is that John Hicks – the man who invented IS-LM in his 1937 Econometrica review of Keynes’s General Theory, Mr. Keynes and the ‘Classics’: A Suggested Interpretation – returned to it in a 1980 article, IS-LM: an explanation, in the Journal of Post Keynesian Economics, and self-critically wrote:
I accordingly conclude that the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate. I have deliberately interpreted the equilibrium concept, to be used in such analysis, in a very stringent manner (some would say a pedantic manner) not because I want to tell the applied economist, who uses such methods, that he is in fact committing himself to anything which must appear to him to be so ridiculous, but because I want to ask him to try to assure himself that the divergences between reality and the theoretical model, which he is using to explain it, are no more than divergences which he is entitled to overlook. I am quite prepared to believe that there are cases where he is entitled to overlook them. But the issue is one which needs to be faced in each case.
When one turns to questions of policy, looking toward the future instead of the past, the use of equilibrium methods is still more suspect. For one cannot prescribe policy without considering at least the possibility that policy may be changed. There can be no change of policy if everything is to go on as expected – if the economy is to remain in what (however approximately) may be regarded as its existing equilibrium. It may be hoped that, after the change in policy, the economy will somehow, at some time in the future, settle into what may be regarded, in the same sense, as a new equilibrium; but there must necessarily be a stage before that equilibrium is reached …
I have paid no attention, in this article, to another weakness of IS-LM analysis, of which I am fully aware; for it is a weakness which it shares with General Theory itself. It is well known that in later developments of Keynesian theory, the long-term rate of interest (which does figure, excessively, in Keynes’ own presentation and is presumably represented by the r of the diagram) has been taken down a peg from the position it appeared to occupy in Keynes. We now know that it is not enough to think of the rate of interest as the single link between the financial and industrial sectors of the economy; for that really implies that a borrower can borrow as much as he likes at the rate of interest charged, no attention being paid to the security offered. As soon as one attends to questions of security, and to the financial intermediation that arises out of them, it becomes apparent that the dichotomy between the two curves of the IS-LM diagram must not be pressed too hard.
Back in 1937 John Hicks said that he was building a model of John Maynard Keynes’s General Theory. He wasn’t. And it’s about time that neoclassical economists – Krugman, Mankiw, Wren-Lewis or what have you – set the record straight and stop promoting something that its creator himself admitted was a total failure. Why not study the real thing itself – the General Theory – in full, and without looking the other way when it comes to fundamental aspects of reality such as non-ergodicity and uncertainty?
Secondly, when it comes to modeling philosophy, Paul Krugman has in an earlier piece defended his position in the following words (my italics):
I don’t mean that setting up and working out microfounded models is a waste of time. On the contrary, trying to embed your ideas in a microfounded model can be a very useful exercise – not because the microfounded model is right, or even better than an ad hoc model, but because it forces you to think harder about your assumptions, and sometimes leads to clearer thinking. In fact, I’ve had that experience several times …
I am, however, not convinced by the argument. If people put the enormous amount of time and energy that they do into constructing macroeconomic models, then these models really have to contribute substantially to our understanding and ability to explain and grasp real macroeconomic processes. If not, they should – after perhaps somehow sharpening our thoughts – be thrown into the waste-paper basket (something Keynes, the father of macroeconomics, used to do), and not, as today, be allowed to overrun our economics journals and confer celestial academic prestige on their authors.
Krugman’s explications on this issue are really interesting, also because they shed light on a kind of inconsistency in his art of argumentation. Over the past couple of years Krugman has in more than one article criticized mainstream economics for using too much (bad) mathematics and axiomatics in its model-building endeavours. But when it comes to defending his own position on various issues, he usually ultimately falls back on the same kind of models himself. In his End This Depression Now – just to take one example – Paul Krugman maintains that although he doesn’t buy “the assumptions about rationality and markets that are embodied in many modern theoretical models, my own included,” he still finds them useful “as a way of thinking through some issues carefully.”
When it comes to methodology and assumptions, Krugman obviously has a lot in common with the kind of model-building he otherwise criticizes.
The same critique – that when defending his own position on various issues he usually ends up falling back on the same kind of models he otherwise criticizes – can be directed against his new post. Krugman has said these things before, but I am still waiting for him to really explain HOW the silly assumptions of hyperrationality and representative agents help him work with the fundamental issues. If one can only use those assumptions – as Krugman says – “tongue in cheek”, why then use them at all? Wouldn’t it be better to use more adequately realistic assumptions and be able to talk clearly, without any tongue in cheek?
Thirdly, I notice again and again that in most macroeconomic policy discussions I find myself in agreement with Krugman. To me that just shows that Krugman is right in spite of, and not thanks to, those neoclassical models he ultimately refers to. When he is discussing austerity measures, Ricardian equivalence or problems with the euro, he is actually not using those models, but rather simpler, more adequate and more relevant thought-constructions in the vein of Keynes.
The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model building is little more than “hand waving” that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around. As Keynes has it:
Economics is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world. It is compelled to be this, because, unlike the typical natural science, the material to which it is applied is, in too many respects, not homogeneous through time.
If macroeconomic models – no matter of what ilk – assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to the real world are obviously non-justifiable. Macroeconomic theorists – regardless of being “New Monetarist”, “New Classical” or “New Keynesian” – ought to do some ontological reflection and heed Keynes’s warnings on using thought-models in economics:
The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error.
Lastly, I think people – like Paul Krugman – calling themselves “New Keynesians” ought to be rather embarrassed by the fact that the kind of dynamic stochastic general equilibrium models they use cannot incorporate such a basic fact of reality as involuntary unemployment! Of course, working with representative-agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it consists only of the adjustments of hours of work that these optimizing agents make to maximize their utility. For a “New Keynesian” it ought to be of interest to know what Keynes had to say on the issue. In the General Theory he writes:
The classical school [maintains that] while the demand for labour at the existing money-wage may be satisfied before everyone willing to work at this wage is employed, this situation is due to an open or tacit agreement amongst workers not to work for less, and that if labour as a whole would agree to a reduction of money-wages more employment would be forthcoming. If this is the case, such unemployment, though apparently involuntary, is not strictly so, and ought to be included under the above category of ‘voluntary’ unemployment due to the effects of collective bargaining, etc …
The classical theory … is best regarded as a theory of distribution in conditions of full employment. So long as the classical postulates hold good, unemployment, which is in the above sense involuntary, cannot occur. Apparent unemployment must, therefore, be the result either of temporary loss of work of the ‘between jobs’ type or of intermittent demand for highly specialised resources or of the effect of a trade union ‘closed shop’ on the employment of free labour. Thus writers in the classical tradition, overlooking the special assumption underlying their theory, have been driven inevitably to the conclusion, perfectly logical on their assumption, that apparent unemployment (apart from the admitted exceptions) must be due at bottom to a refusal by the unemployed factors to accept a reward which corresponds to their marginal productivity …
Obviously, however, if the classical theory is only applicable to the case of full employment, it is fallacious to apply it to the problems of involuntary unemployment – if there be such a thing (and who will deny it?). The classical theorists resemble Euclidean geometers in a non-Euclidean world who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight – as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics. We need to throw over the second postulate of the classical doctrine and to work out the behaviour of a system in which involuntary unemployment in the strict sense is possible.
So, these are some of my arguments for why I think that Paul Krugman and other neoclassical macroeconomists ought to be even more critical of the present state of macroeconomics than they are. If macroeconomic models – no matter of what ilk – build on assumptions of representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to the real world are obviously non-justifiable. Incompatibility between actual behaviour and the behaviour of macroeconomic models built on representative actors and rational expectations is not a symptom of “irrationality”. It rather shows the futility of trying to represent real-world target systems with models flagrantly at odds with reality.
A gadget is just a gadget – and brilliantly silly simple models – IS-LM included – do not help us work with the fundamental issues of modern economies any more than brilliantly silly complicated models – calibrated DSGE and RBC models included. That’s also a reason why I – unlike Paul Krugman – support the young economics students who ask for Rethinking Economics.
Sorta-kinda “New Keynesian” Paul Krugman has a post up arguing that the problem with the academic profession is that some macroeconomists aren’t “bothered to actually figure out” how the New Keynesian model with its Euler conditions – “based on the assumption that people have perfect access to capital markets, so that they can borrow and lend at the same rate” – really works. According to Krugman, this shouldn’t be hard at all – “at least it shouldn’t be for anyone with a graduate training in economics.”
If people (not the representative agent) at least sometimes can’t help being off their labour supply curve – as in the real world – then what good are the hordes of Euler equations that you find ad nauseam in “New Keynesian” macro models?
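For readers who want the object in question on the table: the consumption Euler equation these models are built around – a standard textbook relation, not anything specific to Krugman’s post – equates the marginal utility of consumption today with discounted expected marginal utility tomorrow:

```latex
u'(c_t) = \beta \, \mathbb{E}_t\!\left[(1 + r_{t+1})\, u'(c_{t+1})\right]
```

It presupposes exactly what the quotation from Krugman notes: the household can borrow and lend freely at the single rate \(r_{t+1}\). With a binding credit constraint the condition holds only as an inequality, \(u'(c_t) \geq \beta \, \mathbb{E}_t[(1+r_{t+1})\,u'(c_{t+1})]\) – which is precisely the kind of case the frictionless representative-agent setup rules out.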
It is clear that the New Keynesian model, even extended to allow, say, for presence of investment and capital accumulation, or for the presence of both discrete price and nominal wage setting, is still just a toy model, and that it lacks many of the details which might be needed to understand fluctuations …
One striking (and unpleasant) characteristic of the basic New Keynesian model is that there is no unemployment! Movements take place along a labor supply curve, either at the intensive margin (with workers varying hours) or at the extensive margin (with workers deciding whether or not to participate). One has a sense, however, that this may give a misleading description of fluctuations, in positive terms, and, even more so, in normative terms: The welfare cost of fluctuations is often thought to fall disproportionately on the unemployed.
The other day yours truly was interviewed by a public radio journalist working on a series on Great Economic Thinkers. We were discussing the monumental failures of the predictions-and-forecasts-business. But — the journalist asked — if these cocksure economists with their “rigorous” and “precise” mathematical-statistical-econometric models are so wrong again and again — why do they persist wasting time on it?
In a discussion on uncertainty and the hopelessness of accurately modeling what will happen in the real world – in M. Szenberg’s Eminent Economists: Their Life Philosophies – Nobel laureate Kenneth Arrow comes up with what is probably the right answer:
It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable. An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’
So in what sense is this “dynamic stochastic general equilibrium” model firmly grounded in the principles of economic theory? I do not want to be misunderstood. Friends have reminded me that much of the effort of “modern macro” goes into the incorporation of important deviations from the Panglossian assumptions that underlie the simplistic application of the Ramsey model to positive macroeconomics. Research focuses on the implications of wage and price stickiness, gaps and asymmetries of information, long-term contracts, imperfect competition, search, bargaining and other forms of strategic behavior, and so on. That is indeed so, and it is how progress is made.
But this diversity only intensifies my uncomfortable feeling that something is being put over on us, by ourselves. Why do so many of those research papers begin with a bow to the Ramsey model and cling to the basic outline? Every one of the deviations that I just mentioned was being studied by macroeconomists before the “modern” approach took over. That research was dismissed as “lacking microfoundations.” My point is precisely that attaching a realistic or behavioral deviation to the Ramsey model does not confer microfoundational legitimacy on the combination. Quite the contrary: a story loses legitimacy and credibility when it is spliced to a simple, extreme, and on the face of it, irrelevant special case. This is the core of my objection: adding some realistic frictions does not make it any more plausible that an observed economy is acting out the desires of a single, consistent, forward-looking intelligence …
For completeness, I suppose it could also be true that the bow to the Ramsey model is like wearing the school colors or singing the Notre Dame fight song: a harmless way of providing some apparent intellectual unity, and maybe even a minimal commonality of approach. That seems hardly worthy of grown-ups, especially because there is always a danger that some of the in-group come to believe the slogans, and it distorts their work …
There has always been a purist streak in economics that wants everything to follow neatly from greed, rationality, and equilibrium, with no ifs, ands, or buts. Most of us have felt that tug. Here is a theory that gives you just that, and this time “everything” means everything: macro, not micro. The theory is neat, learnable, not terribly difficult, but just technical enough to feel like “science.”
As is well-known, Keynes used to criticize the more traditional economics for making the fallacy of composition, which basically consists of the false belief that the whole is nothing but the sum of its parts. Keynes argued that in the society and in the economy this was not the case, and that a fortiori an adequate analysis of society and economy couldn’t proceed by just adding up the acts and decisions of individuals. The whole is more than a sum of parts.
This fact shows up already when orthodox – neoclassical – economics tries to establish The Law of Demand – when the price of a commodity falls, the demand for it will increase – at the aggregate level. Although the Law can be established for single individuals, it turned out – in the Sonnenschein-Mantel-Debreu theorem, firmly established by 1976 – that it is not possible to extend the Law of Demand to the market level, unless one makes ridiculously unrealistic assumptions, such as all individuals having identical homothetic preferences.
This could only be conceivable if there was in essence only one actor – the (in)famous representative actor. So, yes, it was possible to generalize The Law of Demand – as long as we assumed that on the aggregate level there was only one commodity and one actor. What generalization! Does this sound reasonable? Of course not. This is pure nonsense!
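The formal content of the Sonnenschein-Mantel-Debreu theorem is worth stating, because it shows how little survives aggregation. Market excess demand is just the sum of individual excess demands,

```latex
z(p) = \sum_{i=1}^{m} \bigl[\, x_i(p,\, p \cdot \omega_i) - \omega_i \,\bigr],
```

and inherits from the individual level only continuity, homogeneity of degree zero and Walras’ law, \(p \cdot z(p) = 0\). The theorem says that any continuous function with these three properties, defined on a set of normalized prices bounded away from zero, is the excess demand function of some exchange economy with at least as many consumers as goods. Monotonicity – the market-level Law of Demand – is not on the list.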
How has neoclassical economics reacted to this devastating finding? Basically by looking the other way, ignoring it, and hoping that no one notices that the emperor is naked.
Having gone through a handful of the most frequently used textbooks of economics at the undergraduate level today, I can only conclude that the models that are presented in these modern neoclassical textbooks try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent.
That is, with something that has absolutely nothing to do with reality. And, worse still, with something that is not even amenable to the kind of general equilibrium analysis it is thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1974) and Gerard Debreu (1974) unequivocally showed that there are no assumptions on individuals that would guarantee either stability or uniqueness of the equilibrium solution.
So what modern economics textbooks present to students are models built on the assumption that an entire economy can be modeled as a representative actor, and that this is a valid procedure. But it isn't – as the Sonnenschein–Mantel–Debreu theorem has irrevocably shown.
Of course one could argue that it is too difficult at the undergraduate level to show why the procedure is valid, and that the demonstration is better deferred to masters and doctoral courses. One could justifiably reason that way – if what you teach your students were true, if The Law of Demand were generalizable to the market level and the representative actor were a valid modelling abstraction. But in this case it is demonstrably known to be false, and therefore this is nothing but a case of scandalous intellectual dishonesty. It's like telling your students that 2 + 2 = 5 and hoping that they will never run into Peano's axioms of arithmetic.
Or — just to take another example — let’s see how the important macroeconomic question of wage rigidity is treated.
Among the handful of really good intermediate – neoclassical – macroeconomics textbooks, Chad Jones's Macroeconomics (2nd ed, W W Norton, 2011) stands out as one of the better alternatives. Unfortunately it also contains some utter nonsense!
In chapter 7 – on “The Labor Market, Wages, and Unemployment” – Jones writes (p. 179):
The point of this experiment is to show that wage rigidities can lead to large movements in employment. Indeed, they are the reason John Maynard Keynes gave, in The General Theory of Employment, Interest, and Money (1936), for the high unemployment of the Great Depression.
But this is pure nonsense. For although Keynes in General Theory devoted substantial attention to the subject of wage rigidities, he certainly did not hold the view that wage rigidities were “the reason … for the high unemployment of the Great Depression.”
Since unions/workers, contrary to classical assumptions, make wage bargains in nominal terms, they will – according to Keynes – accept lower real wages caused by higher prices, but resist lower real wages caused by cuts in nominal wages. Keynes, however, held it incorrect to attribute “cyclical” unemployment to this asymmetric behaviour. During the depression money wages fell significantly and – as Keynes noted – unemployment still grew. Thus, even when nominal wages are lowered, they do not generally lower unemployment.
In any specific labour market, lower wages could, of course, raise the demand for labour. But a general reduction in money wages would leave real wages more or less unchanged. The reasoning of the classical economists was, according to Keynes, a flagrant example of the “fallacy of composition.” Assuming that since unions/workers in a specific labour market could negotiate real wage reductions via lowering nominal wages, unions/workers in general could do the same, the classics confused micro with macro.
Lowering nominal wages could not – according to Keynes – clear the labour market. Lowering wages – and possibly prices – could perhaps lower interest rates and increase investment. But to Keynes it would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen by Keynes as a general substitute for an expansionary monetary or fiscal policy.
Even if lowering wages can have some positive effects, there are negative effects that weigh more heavily: deteriorating management–union relations, expectations of continuing wage cuts delaying investment, debt deflation, et cetera.
So, what Keynes actually did argue in General Theory, was that the classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions, was ill-founded and basically wrong.
Where Keynes found it unproblematic that involuntary unemployment could arise even with flexible wages and prices, modern “Keynesian” macroeconomists have turned his theory into different kinds of fix-price models. But to Keynes, flexible wages would only make things worse, by leading to erratic price fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labour market.
So, for almost forty years neoclassical economics has lived with a theorem that shows the impossibility of extending the microanalysis of consumer behaviour to the macro level (except under patently and admittedly unrealistic and absurd assumptions). Still, after all these years, neoclassical economists pretend in their textbooks that the theorem does not exist. Most textbooks don't even mention the Sonnenschein–Mantel–Debreu theorem. And when it comes to Keynes and wage rigidities, Jones's macroeconomics textbook is not the only one containing the kind of utter nonsense cited above. But here the solution to the problem is easier. Keynes's books are still in print. Read them.
The real scientific challenge – one that also has to be reflected in textbooks – is to accept uncertainty and still try to explain why economic transactions take place, instead of simply conjuring the problem away by assuming rational expectations, representative actors and universal market clearing, and by treating uncertainty as if it could be reduced to stochastic risk. That is scientific fraud. And it has been going on for too long now.
People say time heals all wounds. I wish that was true. But some wounds never heal — you just learn to live with the scars.
In loving memory of my brother, Peter “Uncas” Pålsson.