The best economics article of 2016 was, in my opinion, Paul Romer’s extremely well-written and brave frontal attack on the theories that have put macroeconomics on a path of ‘intellectual regress’ for three decades now:
Macroeconomists got comfortable with the idea that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take, after Kydland and Prescott (1982) launched the real business cycle (RBC) model …
In response to the observation that the shocks are imaginary, a standard defence invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions.” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favourite.
The noncommittal relationship with the truth revealed by these methodological evasions and the “less than totally convinced …” dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest “post-real.”
There are many kinds of useless ‘post-real’ economics held in high regard within the mainstream economics establishment today. Few — if any — are less deserving of that regard than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.
Paul Romer and yours truly are certainly not the only ones having doubts about the scientific value of calibration. In the Journal of Economic Perspectives (1996, vol. 10) Nobel laureates Lars Peter Hansen and James J. Heckman write:
It is only under very special circumstances that a micro parameter such as the inter-temporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.
Mathematical statistician Aris Spanos — in Error and Inference (Mayo & Spanos, 2010, p. 240) — is no less critical:
Given that “calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …
The idea that it should suffice that a theory “is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity.
In physics it may possibly not be straining credulity too much to model processes as ergodic – where time and history do not really matter – but in the social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why would econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.
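The point about ergodicity can be made concrete with a toy simulation (the 50/50 gamble and the multipliers 1.5 and 0.6 are illustrative choices, not estimates of anything): in a multiplicative process the ensemble average grows every period, yet almost every individual time path decays — time averages and ensemble averages tell completely different stories, which is exactly what ergodicity rules out.

```python
import random

random.seed(42)

# Illustrative multiplicative gamble: each period wealth is multiplied by
# 1.5 (heads) or 0.6 (tails). The ensemble expectation per period is
# 0.5*1.5 + 0.5*0.6 = 1.05 > 1, but the time-average growth factor is
# sqrt(1.5*0.6) ~ 0.949 < 1: the typical individual trajectory shrinks.

def trajectory(periods):
    wealth = 1.0
    for _ in range(periods):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

n_agents, periods = 10_000, 100
outcomes = [trajectory(periods) for _ in range(n_agents)]

# Ensemble (cross-sectional) mean is driven by a handful of huge winners,
# while most agents end up below their starting wealth.
ensemble_mean = sum(outcomes) / n_agents
share_ruined = sum(w < 1.0 for w in outcomes) / n_agents

print(f"ensemble mean wealth: {ensemble_mean:.1f}")
print(f"share of agents below starting wealth: {share_ruined:.0%}")
```

For an individual who has to live through time rather than across the ensemble, the ‘expected value’ is simply the wrong object to look at.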
The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Lucas, Sargent, Prescott, Kydland and other calibrationists one comes to think of Robert Clower’s apt remark that
much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.
As Romer says:
Math cannot establish the truth value of a fact. Never has. Never will.
So instead of assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.
To say, as Edward Prescott does, that
one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations
is not enough. Without strong evidence all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. None is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption made in much of modern macroeconomics. Perhaps the reason is that economists often mistake mathematical beauty for truth.
But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis to an irrefutable proposition. Believing in a set of irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but it is not science.
So where does this all lead us? What is the trouble ahead for economics? Putting a sticky-price DSGE lipstick on the RBC pig sure won’t do. Neither will — as Paul Romer notices — just looking the other way and pretending it’s raining:
The trouble is not so much that macroeconomists say things that are inconsistent with the facts. The real trouble is that other economists do not care that the macroeconomists do not care about the facts. An indifferent tolerance of obvious error is even more corrosive to science than committed advocacy of error.
This is a book about human limitations and the difficulty of gaining true insight into the world around us. There is, in truth, no way of separating these two things from one another. To try to discuss economics without understanding the difficulty of applying it to the real world is to consign oneself to dealing with the pure makings of our imaginations. Much of economics at the time of writing is of this sort, although it is unclear whether such modes of thought should be called ‘economics’ and whether future generations will see them as such. There is every chance that the backward-looking eye of posterity will see much of what today’s economics departments produce in the same way as we now see phrenology: a highly technical, but ultimately ridiculous pseudoscience constructed rather unconsciously to serve the political needs of the era.
Highly recommended reading for everyone interested in making economics a relevant and realist science.
Living next to an opera house sure has its benefits if you love opera. Just twenty minutes ago two lovely opera singers performed Con te partirò on the little piazza in front of my house. Awesome!
This version is pretty good too …
To make education more like a private good, [voucher advocates] tried to change the conditions of both supply and demand. On the demand side, the central proposal was that of education ‘vouchers’, put forward most notably by Nobel Prizewinning economist at the University of Chicago, Milton Friedman. The idea was that, rather than funding schools, government should provide funding directly to parents in the form of vouchers that could be used at whichever school the parents preferred, and topped up, if necessary, by additional fee payments.
As is typically the case, voucher advocates ignored the implications of their proposals for the distribution of income. In large measure, vouchers represent a simple cash transfer, going predominantly from the poor to the rich. The biggest beneficiaries would be those, mostly well-off, who were already sending their children to private schools, for whom the voucher would be a simple cash transfer. Those whose children remained at the same public school as before would gain nothing …
The most notable entrant in the US school sector was Edison Schools. Edison Schools was founded in 1992 and was widely viewed as representing the future of school education … For-profit schools were also introduced in Chile and Sweden …
The story was much the same everywhere: an initial burst of enthusiasm and high profits, followed by the exposure of poor practices and outcomes, and finally collapse, with governments being left to pick up the pieces …
Sweden introduced voucher-style reforms in 1992, and opened the market to for-profit schools. Initially favorable assessments were replaced by disillusionment as the performance of the school system as a whole deteriorated … By 2015, the majority of the public favoured banning for-profit schools. The Minister for Education described the system as a ‘political failure.’ Other critics described it in harsher terms (The Swedish for-profit ‘free’ school disaster).
Although a full analysis has not yet been undertaken, it seems likely that the for-profit schools engaged in ‘cream-skimming’, admitting able and well-behaved students, while pushing more problematic students back into the public system. The rules under which the reform was introduced included ‘safeguards’ to prevent cream-skimming, but such safeguards have historically proved ineffectual in the face of the profits to be made by evading them …
Why has market-oriented reform of education been such a failure? …
Education is characterized by market failure, by potentially inequitable initial allocations and, most importantly, by the fact that the relationship between the education ‘industry’ and its ‘consumers’, that is between educational institutions and teachers on the one hand and students on the other, cannot be reduced to a market transaction.
The critical problem with this simple model is that students, by definition, cannot know in advance what they are going to learn, or make an informed judgement about what they are learning. They have to rely, to a substantial extent, on their teachers to select the right topics of study and to teach them appropriately …
The result is that education does not rely on market competition to any significant extent to sort good teachers and institutions from bad ones. Rather, education depends on a combination of sustained institutional standards and individual professional ethics to maintain their performance.
The implications for education policy are clear, at least at the school level. School education should be publicly funded and provided either by public schools or by non-profits with a clear educational mission, as opposed to corporate ‘school management organisations’.
Neo-liberals and libertarians have always provided a lot of ideologically founded ideas and ‘theories’ to underpin their Panglossian view of markets. But when they are tested against reality they usually turn out to be wrong. The promised results are simply not to be found. And that goes for for-profit schools too.
The correlation between high executive pay and good performance is “negligible”, a new academic study has found, providing reformers with fresh evidence that a shake-up of Britain’s corporate remuneration systems is overdue.
Although big company bosses enjoyed pay rises of more than 80 per cent in a decade, performance as measured by economic returns on invested capital was less than 1 per cent over the period, the paper by Lancaster University Management School says.
“Our findings suggest a material disconnect between pay and fundamental value generation for, and returns to, capital providers,” the authors of the report said.
In a study of more than a decade of data on the pay and performance of Britain’s 350 biggest listed companies, Weijia Li and Steven Young found that remuneration had increased 82 per cent in real terms over the 11 years to 2014 … The research found that the median economic return on invested capital, a preferable measure, was less than 1 per cent over the same period.
Mainstream economics textbooks usually refer to the interrelationship between technological development and education as the main causal force behind increased inequality. If the educational system (supply) develops at the same pace as technology (demand), there should be no increase, ceteris paribus, in the ratio between high-income (highly educated) groups and low-income (low education) groups. In the race between technology and education, the proliferation of skilled-biased technological change has, however, allegedly increased the premium for the highly educated group.
Another prominent explanation is that globalization – in accordance with Ricardo’s theory of comparative advantage and the Wicksell-Heckscher-Ohlin-Stolper-Samuelson factor price theory – has benefited capital in the advanced countries and labour in the developing countries. The problem with these theories is that they explicitly assume full employment and international immobility of the factors of production. Globalization means more than anything else that capital and labour have to a large extent become mobile over country borders. These mainstream trade theories are really not applicable in the world of today, and they are certainly not able to explain the international trade pattern that has developed during the last decades. Although it seems as though capital in the developed countries has benefited from globalization, it is difficult to detect a similar positive effect on workers in the developing countries.
There are, however, also some other quite obvious problems with these kinds of inequality explanations. The World Top Incomes Database shows that the increase in incomes has been concentrated especially in the top 1%. If education were the main reason behind the increasing income gap, one would expect a much broader group of people in the upper echelons of the distribution taking part in this increase. It is dubious, to say the least, to try to explain, for example, the high wages in the finance sector with a marginal productivity argument. High-end wages seem to be more a result of pure luck or membership of the same ‘club’ as those who decide on the wages and bonuses, than of ‘marginal productivity.’
Mainstream economics, with its technologically determined marginal productivity theory, seems to be difficult to reconcile with reality. Although card-carrying neoclassical apologists like Greg Mankiw want to recall John Bates Clark’s (1899) argument that marginal productivity results in an ethically just distribution, that is not something – even if it were true – we could confirm empirically, since it is impossible realiter to separate out what is the marginal contribution of any factor of production. The hypothetical ceteris paribus addition of only one factor in a production process is often heard of in textbooks, but never seen in reality.
When reading mainstream economists like Mankiw who argue for the ‘just desert’ of the 0.1 %, one gets a strong feeling that they are ultimately trying to argue that a market economy is some kind of moral free zone where, if left undisturbed, people get what they ‘deserve.’ To most social scientists that probably smacks more of being an evasive action trying to explain away a very disturbing structural ‘regime shift’ that has taken place in our societies. A shift that has very little to do with ‘stochastic returns to education.’ Those were in place also 30 or 40 years ago. At that time they meant that perhaps a top corporate manager earned 10–20 times more than ‘ordinary’ people earned. Today it means that they earn 100–200 times more than ‘ordinary’ people earn. A question of education? Hardly. It is probably more a question of greed and a lost sense of a common project of building a sustainable society.
Since the race between technology and education does not seem to explain the new growing income gap – and even if technological change has become more and more capital augmenting, it is also quite clear that not only the wages of low-skilled workers have fallen, but also the overall wage share – mainstream economists increasingly refer to ‘meritocratic extremism,’ ‘winners-take-all markets’ and ‘superstar theories’ for explanation. But this is also highly questionable.
Fans may want to pay extra to watch top-ranked athletes or movie stars performing on television and film, but corporate managers are hardly the stuff that people’s dreams are made of – and they seldom appear on television and in the movie theaters.
Everyone may prefer to employ the best corporate manager there is, but a corporate manager, unlike a movie star, can only provide his services to a limited number of customers. From the perspective of ‘super-star theories,’ a good corporate manager should only earn marginally more than an average corporate manager. Yet the average earnings of the corporate managers of the 50 biggest Swedish companies today are equivalent to the wages of 46 blue-collar workers.
It is difficult to see the takeoff of the top executives as anything else but a reward for being a member of the same illustrious club. That it should be equivalent to indispensable and fair productive contributions – marginal products – is straining credulity too far. That so many corporate managers and top executives make fantastic earnings today is strong evidence that the theory is patently wrong and basically functions as a legitimizing device for indefensible and growing inequalities.
No one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand, bears but little resemblance to actual reality. Wealth and income distribution, both individual and functional, in a market society is to an overwhelmingly high degree influenced by institutionalized political and economic norms and power relations, things that have relatively little to do with marginal productivity in complete and profit-maximizing competitive market models – not to mention how extremely difficult, if not outright impossible it is to empirically disentangle and measure different individuals’ contributions in the typical team work production that characterize modern societies; or, especially when it comes to ‘capital,’ what it is supposed to mean and how to measure it. Remunerations do not necessarily correspond to any marginal product of different factors of production – or to ‘compensating differentials’ due to non-monetary characteristics of different jobs, natural ability, effort or chance.
Put simply – highly paid workers and corporate managers are not always highly productive workers and corporate managers, and less highly paid workers and corporate managers are not always less productive. History has over and over again disconfirmed the close connection between productivity and remuneration postulated in mainstream income distribution theory.
Neoclassical marginal productivity theory is obviously a collapsed theory from both a historical and a theoretical point of view, as shown already by Sraffa in the 1920s, and in the Cambridge capital controversy in the 1960s and 1970s.
When a theory is impossible to reconcile with facts there is only one thing to do — scrap it!
Adam Smith (1723-90) is a mystery in a puzzle wrapped in an enigma. The mystery is the enormous and unprecedented gap between Smith’s exalted reputation and the reality of his dubious contribution to economic thought …
The problem is not simply that Smith was not the founder of economics.
The problem is that he originated nothing that was true, and that whatever he originated was wrong; that, even in an age that had fewer citations or footnotes than our own, Adam Smith was a shameless plagiarist, acknowledging little or nothing and stealing large chunks …
Even though an inveterate plagiarist, Smith had a Columbus complex, accusing close friends incorrectly of plagiarizing him. And even though a plagiarist, he plagiarized badly, adding new fallacies to the truths he lifted … Smith not only contributed nothing of value to economic thought; his economics was a grave deterioration from his predecessors …
The historical problem is this: how could this phenomenon have taken place with a book so derivative, so deeply flawed, so much less worthy than its predecessors?
The answer is surely not any lucidity or clarity of style or thought. For the much revered Wealth of Nations is a huge, sprawling, inchoate, confused tome, rife with vagueness, ambiguity and deep inner contradictions.
Not exactly the standard textbook presentation of the founding father of economics …
When we cannot accept that the observations, along the time-series available to us, are independent, or cannot by some device be divided into groups that can be treated as independent, we get into much deeper water. For we have then, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply. We are left to use our judgement, making sense of what has happened as best we can, in the manner of the historian. Applied economics does then come back to history, after all.
I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem at hand. Very often they are not. Thus it is not at all sensible to take a small number of observations (sometimes no more than a dozen observations) and to use the rules of probability to deduce from them a ‘significant’ general law. For we are assuming, if we do so, that the variations from one to another of the observations are random, so that if we had a larger sample (as we do not) they would by some averaging tend to disappear. But what nonsense this is when the observations are derived, as not infrequently happens, from different countries, or localities, or industries — entities about which we may well have relevant information, but which we have deliberately decided, by our procedure, to ignore. By all means let us plot the points on a chart, and try to explain them; but it does not help in explaining them to suppress their names. The probability calculus is no excuse for forgetfulness.
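Hicks’ warning about a dozen observations drawn from different countries can be illustrated with a toy example (the two ‘countries’, the sample sizes and the group means below are made up purely for illustration): within each group x and y are completely unrelated, but pooling the twelve observations and suppressing the group names produces a strong-looking slope that reflects nothing but the ignored group structure.

```python
import random

random.seed(1)

# Two hypothetical 'countries': within each, x and y are independent noise,
# but country B sits at higher levels of both variables.

def draw(n, x_mean, y_mean):
    return [(x_mean + random.gauss(0, 1), y_mean + random.gauss(0, 1))
            for _ in range(n)]

country_a = draw(6, x_mean=0.0, y_mean=0.0)
country_b = draw(6, x_mean=5.0, y_mean=5.0)
pooled = country_a + country_b

def ols_slope(data):
    # Ordinary least-squares slope of y on x.
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data)
    sxx = sum((x - mx) ** 2 for x, _ in data)
    return sxy / sxx

print(f"slope within country A: {ols_slope(country_a):+.2f}")
print(f"slope within country B: {ols_slope(country_b):+.2f}")
print(f"slope in pooled sample: {ols_slope(pooled):+.2f}")
```

The pooled slope is close to one even though no ‘law’ relating x to y operates anywhere — the names of the observations, which the procedure deliberately ignores, carry all the information.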
John Hicks’ Causality in economics is an absolute masterpiece. It ought to be on the reading list of every course in economic methodology.
One of the limitations of economics is the restricted possibility of performing experiments, forcing it to rely mainly on observational studies for knowledge of real-world economies.
But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we could, with greater ‘rigour’ and ‘precision,’ describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’
Galileo Galilei’s experiments are often held as exemplary for how to perform experiments to learn something about the real world. Galileo’s experiments were according to Nancy Cartwright (Hunting Causes and Using Them, p. 223)
designed to find out what contribution the motion due to the pull of the earth will make, with the assumption that the contribution is stable across all the different kinds of situations falling bodies will get into … He eliminated (as far as possible) all other causes of motion on the bodies in his experiment so that he could see how they move when only the earth affects them. That is the contribution that the earth’s pull makes to their motion.
Galileo’s heavy balls dropped from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies could be applicable outside a vacuum tube when, e.g., air resistance is negligible.
The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?
One possibility is to take the all-encompassing-theory road and find out all about the possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball but feathers and plastic bags. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’
Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically find out how large the ‘reach’ of the ‘law’ is.
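A back-of-the-envelope simulation may make the negligibility point concrete (the masses, drag coefficient and drop height below are illustrative guesses, not measured values): for a heavy ball the fall time is practically indistinguishable from Galileo’s vacuum law t = √(2h/g), while for a feather-like object air drag dominates and the ‘law’ misses badly.

```python
import math

# Fall from H = 50 m with quadratic air drag: m*dv/dt = m*g - k*v**2,
# integrated with a simple Euler step. Parameter values are illustrative.

G, H, DT = 9.81, 50.0, 0.001

def fall_time(mass, drag_k):
    t, v, fallen = 0.0, 0.0, 0.0
    while fallen < H:
        a = G - (drag_k / mass) * v * v  # net downward acceleration
        v += a * DT
        fallen += v * DT
        t += DT
    return t

vacuum = math.sqrt(2 * H / G)                  # Galileo's law, ~3.19 s
ball = fall_time(mass=5.0, drag_k=0.001)       # heavy ball: drag negligible
feather = fall_time(mass=0.005, drag_k=0.001)  # light object: drag dominates

print(f"vacuum: {vacuum:.2f} s, ball: {ball:.2f} s, feather: {feather:.2f} s")
```

For the heavy ball the drag term barely moves the result, so treating it as ‘negligible’ is warranted; for the feather the same term changes the answer by a factor of two or more — which is precisely the judgement the negligibility road forces you to make case by case.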
In mainstream economics one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).
In the end, it all boils down to one question — are there any heavy balls to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?
As far as I can see there are some heavy balls out there, but not even one single real economic law.
Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.
Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That’s a real-world fact, and, contrary to the beliefs of most mainstream economists, they won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.
By this I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.
To solve, understand, or explain real-world problems you actually have to know something about them – logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable – and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories, you cannot build them on assumptions known to be patently and ridiculously absurd.
No matter how much you would like the world to entirely consist of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.