This one is for you — all you brothers and sisters, fighting oppression, struggling to survive in civil wars, or forced to flee your homes, risking your lives on your long walk to freedom.
May God be with you.
Everything we know is not just wrong – it’s backwards. When banks make loans, they create money. This is because money is really just an IOU. The role of the central bank is to preside over a legal order that effectively grants banks the exclusive right to create IOUs of a certain kind, ones that the government will recognise as legal tender by its willingness to accept them in payment of taxes.
There’s really no limit on how much banks could create, provided they can find someone willing to borrow it. They will never get caught short, for the simple reason that borrowers do not, generally speaking, take the cash and put it under their mattresses; ultimately, any money a bank loans out will just end up back in some bank again. So for the banking system as a whole, every loan just becomes another deposit. What’s more, insofar as banks do need to acquire funds from the central bank, they can borrow as much as they like; all the latter really does is set the rate of interest, the cost of money, not its quantity. Since the beginning of the recession, the US and British central banks have reduced that cost to almost nothing. In fact, with “quantitative easing” they’ve been effectively pumping as much money as they can into the banks, without producing any inflationary effects.
What this means is that the real limit on the amount of money in circulation is not how much the central bank is willing to lend, but how much government, firms, and ordinary citizens, are willing to borrow. Government spending is the main driver in all this … So there’s no question of public spending “crowding out” private investment. It’s exactly the opposite.
Sounds odd, doesn’t it?
This guy must surely be one of those strange and dangerous heterodox cranks, right?
Well, maybe you should reconsider …
The reality of how money is created today differs from the description found in some economics textbooks:
• Rather than banks receiving deposits when households save and then lending them out, bank lending creates deposits.
• In normal times, the central bank does not fix the amount of money in circulation, nor is central bank money ‘multiplied up’ into more loans and deposits …
Most of the money in circulation is created, not by the printing presses of the Bank of England, but by the commercial banks themselves: banks create money whenever they lend to someone in the economy or buy an asset from consumers. And in contrast to descriptions found in some textbooks, the Bank of England does not directly control the quantity of either base or broad money. The Bank of England is nevertheless still able to influence the amount of money in the economy. It does so in normal times by setting monetary policy — through the interest rate that it pays on reserves held by commercial banks with the Bank of England. More recently, though, with Bank Rate constrained by the effective lower bound, the Bank of England’s asset purchase programme has sought to raise the quantity of broad money in circulation. This in turn affects the prices and quantities of a range of assets in the economy, including money.
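The accounting logic can be made concrete with a toy sketch (my own illustration, not taken from the Bank of England paper): when a bank grants a loan, double-entry bookkeeping creates a matching deposit at the same instant, with no prior saver required.

```python
# Toy commercial-bank balance sheet: a new loan is booked by
# simultaneously creating a deposit. Purely illustrative.

class ToyBank:
    def __init__(self):
        self.loans = 0.0      # assets: claims on borrowers
        self.deposits = 0.0   # liabilities: IOUs held by the public

    def make_loan(self, amount):
        # Double-entry bookkeeping: the loan (asset) and the matching
        # deposit (liability) are created at the same moment.
        self.loans += amount
        self.deposits += amount

bank = ToyBank()
bank.make_loan(1000.0)
print(bank.loans, bank.deposits)  # both now 1000.0: the loan created the deposit
```

The point of the sketch is simply that nothing in the bookkeeping requires the bank to collect savings first; the deposit is born with the loan.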
France Gall. A marvellous woman. Magnificent.
'It's like the whole history of the black people, swinging between love and despair.'
One of the most effective ways of clearing up this most serious of all semantic confusions is to point out that private debt differs from national debt in being external. It is owed by one person to others. That is what makes it burdensome. Because it is interpersonal the proper analogy is not to national debt but to international debt…. But this does not hold for national debt which is owed by the nation to citizens of the same nation. There is no external creditor. We owe it to ourselves.
A variant of the false analogy is the declaration that national debt puts an unfair burden on our children, who are thereby made to pay for our extravagances. Very few economists need to be reminded that if our children or grandchildren repay some of the national debt these payments will be made to our children or grandchildren and to nobody else. Taking them altogether they will no more be impoverished by making the repayments than they will be enriched by receiving them.
Abba Lerner, 'The Burden of the National Debt' (1948)
Few issues in politics and economics are nowadays more discussed – and less understood – than public debt. Many raise their voices urging that the debt be reduced, but few explain why or in what way reducing the debt would be conducive to a better economy or a fairer society. And there is no limit to the – especially macroeconomic – calamities and evils a large public debt is supposed to result in: unemployment, inflation, higher interest rates, lower productivity growth, increased burdens for subsequent generations, etc.
People usually care a lot about public sector budget deficits and debts, and are as a rule worried and negative. Drawing analogies from their own household's economy, they see debt as a sign of an imminent risk of default and hence a source of reprobation. But although no one doubts the political and economic significance of public debt, there is no unanimity whatsoever among economists as to whether debt matters, and if so, why and in what way. Still less is there agreement on what the "optimal" size of public debt would be.
Throughout history, public debt has gone up and down, often expanding in periods of war or of large changes in basic infrastructure and technology, and declining in periods when things have settled down.
The pros and cons of public debt have been put forward for as long as the phenomenon itself has existed, but it has notwithstanding not been possible to reach anything close to consensus on the issue — at least not in a long time-horizon perspective. As a rule, there has not even been agreement on whether public debt is a problem, and if so, when it is one or how best to tackle it. Some of the more prominent reasons for this lack of consensus are the complexity of the issue, the mingling of vested interests, ideology, psychological fears, and the uncertainty involved in calculating and estimating inter-generational effects.
In the mercantilist era public debt was as a rule considered positive (cf. Berkeley, Melon, de Pinto), a view that was later repeated in the 19th century by, e.g., the economists Adolf Wagner, Lorenz von Stein and Carl Dietzel. The state's main aim was to control and distribute the resources of the nation, often through regulations and forceful state interventions. Increased public debt, by expanding the circulation of money and credit, would increase the amount of capital and contribute to the wealth of nations. Public debt was basically considered something that was moved from "the right hand to the left hand." The economy simply needed a state that was prepared to borrow substantial amounts of money, issue financial paper, and incur indebtedness in the process.
There was also a clear political dimension to the issue, and some authors were clearly aware that government loan/debt activities could have a politically stabilizing effect. Investors had a vested interest in stable governments (low interest rates and low risk premiums) and so were instinctively loyal to the government.
In classical economics — following in the footsteps of David Hume — Adam Smith, David Ricardo, and Jean-Baptiste Say in particular put forward views on public debt that were more negative. The good budget was a balanced budget. If government borrowed money to finance its activities, it would only "crowd out" private enterprise and investment. The state was generally considered incapable of paying its debts, and the real burden would therefore essentially fall on the taxpayers, who ultimately had to pay for the irresponsibility of government. The moral character of the argumentation was a salient feature — "either the nation must destroy public credit, or the public credit will destroy the nation" (Hume 1752).
Later on, in the 20th century, economists like John Maynard Keynes, Abba Lerner and Alvin Hansen would again hold a more positive view of public debt. Public debt was normally nothing to fear, especially if it was financed within the country itself (though even foreign loans could be beneficial for the economy if invested in the right way). Some members of society would hold bonds and earn interest on them, while others would have to pay the taxes that ultimately paid the interest on the debt. But the debt was not considered a net burden for society as a whole, since it cancelled out between the two groups. If the state could issue bonds at a low interest rate, unemployment could be reduced without necessarily resulting in strong inflationary pressure. And the inter-generational burden was no real burden either, according to this group of economists, since — if used in a suitable way — the debt would, through its effects on investment and employment, actually make future generations net winners. There could, of course, be unwanted negative distributional side effects for future generations, but that was mostly considered a minor problem, since (Lerner 1948) "if our children or grandchildren repay some of the national debt these payments will be made to our children and grandchildren and to nobody else."
Central to the Keynesian-influenced view is the fundamental difference between private and public debt. Conflating the one with the other is an example of the atomistic fallacy, which is basically a variation on Keynes' savings paradox. If an individual tries to save and cut down on debts, that may be fine and rational, but if everyone tries to do it, the result will be lower aggregate demand and increasing unemployment for the economy as a whole.
An individual always has to pay his debts. But a government can always pay back old debts with new ones, through the issue of new bonds. The state is not like an individual. Public debt is not like private debt. Government debt is essentially a debt the nation owes to itself, its citizens. Interest on the debt is paid by the taxpayers on the one hand, but on the other hand, interest on the bonds that finance the debt goes to those who lend out the money.
Abba Lerner's essay Functional Finance and the Federal Debt set out guiding principles for governments to adopt in their efforts to use economic – especially fiscal – policies to maintain full employment and prosperity in economies struggling with chronic problems in sustaining a high enough aggregate demand.
Because of this inherent deficiency, modern states tend to have structural and long-lasting problems in maintaining full employment. According to Lerner's Functional Finance principles, the private sector has a tendency not to generate enough demand on its own, and so the government has to take on the responsibility of making sure that full employment is attained. The main instrument for doing this is open market operations – especially the selling and buying of interest-bearing government bonds.
Although Lerner seems to have had the view that the ideas embedded in Functional Finance were in principle applicable to all kinds of economies, he also recognized the importance of institutional arrangements in shaping its feasibility and practical implementation.
Functional Finance critically depends on nation states being able to tax their citizens and to have a currency — and bonds — of their own. As has become transparently clear during the Great Recession, the EMU has not been able to impose those structures, since, as Hayek noted already back in 1939, "government by agreement is only possible provided that we do not require the government to act in fields other than those in which we can obtain true agreement." The monetary institutional structure of the EMU makes it highly unlikely – not to say impossible — that it will ever become a "system" in which Functional Finance is adopted.
To Functional Finance, the choices governments make in financing public deficits — and the concomitant debts — were important, since bond-based financing was considered more expansionary than tax financing. According to Lerner, the purpose of public debt is to achieve a rate of interest that results in investments making full employment feasible. In the short run this could result in deficits, but he firmly maintained that there was no reason to assume that applying Functional Finance to maintain full employment implied that the government always had to borrow money and increase the public debt. An application of Functional Finance would tend to balance the budget in the long run, since the guarantee of permanent full employment would make private investment much more attractive, and the resulting greater private investment would diminish the need for deficit spending.
To both Keynes and Lerner it was evident that the state had the ability to promote full employment and a stable price level – and that it should use its powers to do so. If that meant that it had to take on debt and (more or less temporarily) underbalance its budget – so be it! Public debt is neither good nor bad. It is a means to achieving two over-arching macroeconomic goals – full employment and price stability. What is sacred is not a balanced budget or running down public debt per se, regardless of the effects on the macroeconomic goals. If "sound finance", austerity and balanced budgets mean increased unemployment and destabilized prices, they have to be abandoned.
Against this reasoning, exponents of the thesis of Ricardian equivalence have maintained that whether the public sector finances its expenditures through taxes or by issuing bonds is inconsequential, since bonds must sooner or later be repaid by raising taxes in the future.
Robert Barro (1974) attempted to give the proposition a firm theoretical foundation, arguing that the substitution of a budget deficit for current taxes has no impact on aggregate demand and so budget deficits and taxation have equivalent effects on the economy.
If the public sector finances extra spending through deficits, taxpayers will, according to the hypothesis, anticipate that they will have to pay higher taxes in the future — and will therefore increase their savings and reduce their current consumption in order to be able to do so, the consequence being that aggregate demand would be no different from what it would be if taxes were raised today.
Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.
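The arithmetic the hypothesis relies on can be laid out in a minimal two-period sketch (illustrative numbers of my own choosing): if households save the entire deficit-financed tax cut, their saving plus interest exactly matches the future tax bill, leaving lifetime consumption unchanged.

```python
# Toy two-period sketch of Ricardian equivalence, on the hypothesis'
# own assumptions. Period 1: the government cuts taxes by 100 and
# issues bonds instead. Period 2: it repays the bonds with interest
# by raising taxes.
r = 0.05                      # interest rate on government bonds
tax_cut = 100.0               # deficit-financed tax cut in period 1

future_tax = tax_cut * (1 + r)        # taxes needed to repay debt plus interest
household_saving = tax_cut            # rational households save the whole cut
saving_with_interest = household_saving * (1 + r)

# The extra saving exactly covers the future tax bill, so lifetime
# consumption, and hence aggregate demand, is unchanged.
assert abs(saving_with_interest - future_tax) < 1e-9
```

Note that the equivalence holds only because the sketch builds in the hypothesis' assumptions (perfect foresight, full saving of the cut, one interest rate for everyone); relaxing any of them breaks the result, which is exactly the line of criticism pursued below.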
The Ricardo-Barro hypothesis, with its view of public debt incurring a burden for future generations, is the dominant view among mainstream economists and politicians today. The rational actors populating the model are assumed to know that today's debts are tomorrow's taxes. One of the main problems with this standard neoclassical theory, however, is that it doesn't fit the facts.
From a more theoretical point of view, one may also strongly criticize the Ricardo-Barro model and its concomitant crowding-out assumption: perfect capital markets do not exist, repayments of public debt can take place far into the future, and it is dubious whether we really care for generations 300 years from now.
At times when economic theories have been in favour of public debt, one gets the feeling that the more or less explicit assumption is that public expenditures are useful and good for the economy, since they work as an important — and often necessary — injection into the economy, creating wealth and employment. At times when economic theories have been against public debt, the basic assumption seems to be that public expenditures are useless, only crowd out private initiatives, and have no positive net effect on the economy.
Wolfgang Streeck argues in Buying Time: The Delayed Crisis of Democratic Capitalism (2014) for an interpretation of the more or less steady increase in public debt since the 1970s as a sign of a transformation of the tax state (Schumpeter) into a debt state. In his perspective public debt is both an indicator of and a causal factor in the relationship between political and economic systems. The ultimate cause behind the increased public debt is the long-run decline in economic growth, which has resulted in a doubling of the average public debt in OECD countries over the last 40 years. This has put strong pressure on modern capitalist states, and in parallel, income inequality has increased in most countries. According to Streeck, this is one manifestation of a neoliberal revolution – with its emphasis on supply-side politics, austerity policies and financial deregulation — in which democratic-redistributive intervention has become ineffectual.
Today there seems to be a rather widespread consensus that public debt is acceptable as long as it doesn't increase too much or too fast: if the public debt-to-GDP ratio rises above some threshold X %, the likelihood of a debt crisis and/or lower growth increases.
But in discussing within which margins public debt is feasible, the focus is solely on the upper limit of indebtedness, and very few ask whether there might also be a problem if public debt becomes too low.
The government's ability to conduct an "optimal" public debt policy may be negatively affected if public debt becomes too small. A well-functioning secondary market in bonds requires a sufficient volume of outstanding government debt. If turnover and liquidity in the secondary market become too small, increased volatility and uncertainty will in the long run lead to higher borrowing costs. Ultimately there is even a risk that market makers would disappear, leaving bond market trading to be operated solely through brokered deals. As a kind of precautionary measure against this eventuality it may be argued – especially in times of financial turmoil and crises — that it is necessary to increase government borrowing and debt to ensure, over the longer run, good borrowing preparedness and a sustained (government) bond market.
The failure of successive administrations in most developed countries to embark on any vigorous policy aimed at bringing down unconscionably high levels of unemployment has been due in no small measure to a ‘viewing with alarm’ of the size of the national debts, often alleged to be already excessive, or at least threatening to become so, and by ideologically urged striving toward ‘balanced’ government budgets without any consideration of whether such debts and deficits are or threaten to become excessive in terms of some determinable impact on the real general welfare. If they are examined in the light of their impact on welfare, however, they can usually be shown to be well below their optimum levels, let alone at levels that could have dire consequences.
To view government debts in terms of the ‘functional finance’ concept introduced by Abba Lerner, is to consider their role in the macroeconomic balance of the economy. In simple, bare bones terms, the function of government debts that is significant for the macroeconomic health of an economy is that they provide the assets into which individuals can put whatever accumulated savings they attempt to set aside in excess of what can be wisely invested in privately owned real assets. A debt that is smaller than this will cause the attempted excess savings, by being reflected in a reduced level of consumption outlays, to be lost in reduced real income and increased unemployment.
In Michel De Vroey's version of the history of macroeconomics, Robert Lucas' declaration that macroeconomics must be pursued only within 'equilibrium discipline', with equilibrium posited as an axiom, is hailed as a 'Copernican revolution.' Equilibrium is not to be considered something that characterises real economies, but rather 'a property of the way we look at things.' De Vroey — approvingly — notices that this — as well as Lucas' banning of disequilibrium as referring to 'unintelligible behaviour' — 'amounts to shrinking the pretence of equilibrium theory.'
Is it really a feasible methodology for economists to make a sharp divide between theory and reality, and then — like De Vroey and Lucas — treat the divide as something recommendable and good? I think not.
Fortunately there are other economists with a less devoted hagiographic attitude towards Lucas and his nonsense on stilts.
Alessandro Vercelli is one:
The equilibria analysed by Lucas are conceived as stationary stochastic processes. The fact that they are stationary imposes a long series of restrictive hypotheses on the range of applicability of the heuristic model, and these considerably reduce the empirical usefulness of Lucas's equilibrium method …
For such a method to make sense … the stationary ‘equilibrium’ stochastic process must also be ‘dynamically stable,’ or ‘ergodic,’ in the terminology of stochastic processes …
What is worse, if one adopts Lucas’s method of pure equilibrium implying the non-intelligibility of disequilibrium positions, there is no way to argue about the robustness of the alternative equilibria under consideration. In other words, Lucas’s heuristic model, not to mention the analytical models built according to his instructions, prove to be useless for the very purpose for which they were primarily constructed — the evaluation of alternative economic policies.
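Vercelli's point about stationarity can be illustrated with a small simulation (my own toy example, not his): a simple random walk is non-stationary because its variance grows with time, so statistics estimated from one stretch of a single realisation need not tell us anything about the process at other times, and the 'equilibrium' assumptions of stationarity and ergodicity quietly rule such processes out.

```python
# Toy demonstration of non-stationarity: for a random walk the
# dispersion of positions across many realisations grows with time,
# so no single time-invariant distribution describes the process.
import random

random.seed(1)

def random_walk(steps):
    """Return the path of a simple +1/-1 random walk started at 0."""
    x, path = 0, []
    for _ in range(steps):
        x += random.choice([-1, 1])
        path.append(x)
    return path

steps, n_paths = 1000, 500
paths = [random_walk(steps) for _ in range(n_paths)]

# Ensemble mean-square position (a variance estimate, since the mean is 0)
# at an early date versus a late date: it grows roughly in proportion to t.
var_early = sum(p[99] ** 2 for p in paths) / n_paths    # after 100 steps
var_late = sum(p[-1] ** 2 for p in paths) / n_paths     # after 1000 steps

print(var_early, var_late)
assert var_late > var_early  # dispersion keeps growing: no stationary distribution
```

Because the distribution keeps spreading out, time averages computed along one realisation cannot stand in for ensemble averages, which is precisely why restricting attention to stationary, ergodic 'equilibrium' processes is such a strong limitation.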
Another one is Roman Frydman, Professor of Economics at New York University and a long-time critic of the rational expectations hypothesis. In his seminal 1982 American Economic Review article Towards an Understanding of Market Processes: Individual Expectations, Learning, and Convergence to Rational Expectations Equilibrium — an absolute must-read for anyone with a serious interest in understanding what the issues are in the present discussion of rational expectations as a modeling assumption — he showed that the kind of models Lucas recommends — models founded on 'equilibrium discipline' and the rational expectations hypothesis — are inadequate as representations of economic agents' decision making.
Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As yours truly has tried to show in On the use and misuse of theories and models in economics, there is really no support for this conviction at all. If this microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging is de facto made, macroeconomic model-building is little more than hand-waving that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.
In one of their latest books on rational expectations, Roman Frydman and his colleague Michael Goldberg write:
The belief in the scientific stature of fully predetermined models, and in the adequacy of the Rational Expectations Hypothesis to portray how rational individuals think about the future, extends well beyond asset markets. Some economists go as far as to argue that the logical consistency that obtains when this hypothesis is imposed in fully predetermined models is a precondition of the ability of economic analysis to portray rationality and truth.
For example, in a well-known article published in The New York Times Magazine in September 2009, Paul Krugman (2009, p. 36) argued that Chicago-school free-market theorists “mistook beauty . . . for truth.” One of the leading Chicago economists, John Cochrane (2009, p. 4), responded that “logical consistency and plausible foundations are indeed ‘beautiful’ but to me they are also basic preconditions for ‘truth.’” Of course, what Cochrane meant by plausible foundations were fully predetermined Rational Expectations models. But, given the fundamental flaws of fully predetermined models, focusing on their logical consistency or inconsistency, let alone that of the Rational Expectations Hypothesis itself, can hardly be considered relevant to a discussion of the basic preconditions for truth in economic analysis, whatever “truth” might mean.
There is an irony in the debate between Krugman and Cochrane. Although the New Keynesian and behavioral models, which Krugman favors, differ in terms of their specific assumptions, they are every bit as mechanical as those of the Chicago orthodoxy. Moreover, these approaches presume that the Rational Expectations Hypothesis provides the standard by which to define rationality and irrationality.
In fact, the Rational Expectations Hypothesis requires no assumptions about the intelligence of market participants whatsoever … Rather than imputing superhuman cognitive and computational abilities to individuals, the hypothesis presumes just the opposite: market participants forgo using whatever cognitive abilities they do have. The Rational Expectations Hypothesis supposes that individuals do not engage actively and creatively in revising the way they think about the future. Instead, they are presumed to adhere steadfastly to a single mechanical forecasting strategy at all times and in all circumstances. Thus, contrary to widespread belief, in the context of real-world markets, the Rational Expectations Hypothesis has no connection to how even minimally reasonable profit-seeking individuals forecast the future in real-world markets. When new relationships begin driving asset prices, they supposedly look the other way, and thus either abjure profit-seeking behavior altogether or forgo profit opportunities that are in plain sight.
And in a recent article the same authors write:
Contemporary economists’ reliance on mechanical rules to understand – and influence – economic outcomes extends to macroeconomic policy as well, and often draws on an authority, John Maynard Keynes, who would have rejected their approach. Keynes understood early on the fallacy of applying such mechanical rules. “We have involved ourselves in a colossal muddle,” he warned, “having blundered in the control of a delicate machine, the working of which we do not understand.”
In The General Theory of Employment, Interest, and Money, Keynes sought to provide the missing rationale for relying on expansionary fiscal policy to steer advanced capitalist economies out of the Great Depression. But, following World War II, his successors developed a much more ambitious agenda. Instead of pursuing measures to counter excessive fluctuations in economic activity, such as the deep contraction of the 1930’s, so-called stabilization policies focused on measures that aimed to maintain full employment. “New Keynesian” models underpinning these policies assumed that an economy’s “true” potential – and thus the so-called output gap that expansionary policy is supposed to fill to attain full employment – can be precisely measured.
But, to put it bluntly, the belief that an economist can fully specify in advance how aggregate outcomes – and thus the potential level of economic activity – unfold over time is bogus …
The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place – instead of simply conjuring the problem away à la Lucas by assuming equilibrium and rational expectations, and treating uncertainty as if it were reducible to stochastic risk. That is scientific cheating. And it has been going on for too long now.
Defenders of microfoundations and its rational-expectations-equipped representative agents' intertemporal optimization often argue as if sticking with simple representative-agent macroeconomic models doesn't impart a bias to the analysis. I unequivocally reject that unsubstantiated view, and have given the reasons why here.
These defenders often also maintain that there are no methodologically coherent alternatives to microfoundations modeling. That allegation is of course difficult to evaluate, since it substantially hinges on how coherence is defined. But one thing I do know is that the kind of microfoundationalist macroeconomics that New Classical economists and "New Keynesian" economists are pursuing is not methodologically coherent according to the standard definition of coherence (see e.g. here). And that ought to be rather embarrassing for those macroeconomists to whom axiomatics and deductivity are the hallmark of science tout court.
The fact that Lucas introduced rational expectations as a consistency axiom is not really an argument for why we should accept it as an acceptable assumption in a theory or model purporting to explain real macroeconomic processes (see e.g. here). And although virtually any macroeconomic empirical claim is contestable, so is any claim in micro (see e.g. here).
Economic theory, like anthropology, 'works' by studying societies which are in some relevant sense simpler or more primitive than our own, in the hope either that relations that are important but hidden in our society will be laid bare in simpler ones, or that concrete evidence can be discovered for possibilities which are open to us which are without precedent in our own history. Unlike anthropologists, however, economists simply invent the primitive societies we study, a practice which frees us from limiting ourselves to societies which can be physically visited, as well as sparing us the discomforts of long stays among savages. This method of society-invention is the source of the utopian character of economics; and of the mix of distrust and envy with which we are viewed by our fellow social scientists. The point of studying wholly fictional, rather than actual societies, is that it is relatively inexpensive to subject them to external forces of various types and observe the way they react. If, subjected to forces similar to those acting on actual societies, the artificial society reacts in a similar way, we gain confidence that there are useable connections between the invented society and the one we really care about.
Although neither yours truly nor anthropologists (I guess) will recognise anything in this description even remotely reminiscent of practices actually used in real sciences, this quote still gives a very good picture of Lucas' warped methodology.
All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.
The implications that follow from the kind of models that people like Robert Lucas — according to Ed Prescott, 'the master of methodology' — construct are always conditional on the simplifying assumptions used — assumptions predominantly of a rather far-reaching and non-empirical character with little resemblance to features of the real world. From a descriptive point of view there is, a fortiori, usually very little resemblance between the models used and the empirical world. 'As if' explanations built on such foundations are not really explanations at all, since they always build conditionally on hypothesized law-like theorems and situation-specific restrictive assumptions. The empirical-descriptive inaccuracy of the models makes it more or less miraculous that they should — in any substantive way — be considered explanatory at all. If the assumptions made are known to be descriptively totally unrealistic (think of, e.g., 'rational expectations'), they are of course likewise totally worthless for making empirical inductions. Assuming — as Lucas does — that people behave 'as if' they were rational FORTRAN-programmed computers doesn't take us far when we know that the 'if' is false.
The obvious shortcoming of a basically epistemic — rather than ontological — approach such as ‘successive approximations’ and ‘as if’ modeling assumptions is that ‘similarity’, ‘analogy’ or ‘resemblance’ tout court do not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the successive ‘as if’ approximations do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), they are nothing more than ‘substitute systems’ that do not bridge to the world but rather miss their target.
Economics building on the kind of modeling strategy that Lucas represents does not produce science.
It’s nothing but pseudo-scientific cheating.
The thrust of this realist rhetoric is the same both at the scientific and at the meta-scientific levels. It is that explanatory virtues need not be evidential virtues. It is that you should feel cheated by “The world is as if T were true”, in the same way as you should feel cheated by “The stars move as if they were fixed on a rotating sphere”. Realists do feel cheated in both cases.
Contrary to what some überimpressed macroeconomists seem to argue, I would say the recent economic crisis — and the fact that Chicago economics has had next to nothing to contribute to understanding it — shows that Lucas and his New Classical economics constitute, in Lakatosian terms, a degenerating research programme in dire need of replacement.
Mainstream economic theory has for long been in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.
In business cycles theory these models are constructed with the purpose of showing that changes in the supply of money “have the capacity to induce depressions or booms” [Lucas 1988:3] not just in these models, but also in real economies. To do so economists are supposed to imagine subjecting their models to some kind of “operational experiment” and “a variety of reactions”. “In general, I believe that one who claims to understand the principles of flight can reasonably be expected to be able to make a flying machine, and that understanding business cycles means the ability to make them too, in roughly the same sense” [Lucas 1981:8]. To Lucas models are the laboratories of economic theories, and after having made a simulacrum-depression Lucas hopes we find it “convincing on its own terms – that what I said would happen in the [model] as a result of my manipulation would in fact happen” [Lucas 1988:4]. The clarity with which the effects are seen is considered “the key advantage of operating in simplified, fictional worlds” [Lucas 1988:5].
On the flipside lies the fact that “we are not really interested in understanding and preventing depressions in hypothetical [models]. We are interested in our own, vastly more complicated society” [Lucas 1988:5]. But how do we bridge the gulf between model and “target system”? According to Lucas we have to be willing to “argue by analogy from what we know about one situation to what we would like to know about another, quite different situation” [Lucas 1988:5]. Progress lies in the pursuit of the ambition to “tell better and better stories” [Lucas 1988:5], simply because that is what economists do.
We are storytellers, operating much of the time in worlds of make believe. We do not find that the realm of imagination and ideas is an alternative to, or retreat from, practical reality. On the contrary, it is the only way we have found to think seriously about reality. In a way, there is nothing more to this method than maintaining the conviction … that imagination and ideas matter … there is no practical alternative” [Lucas 1988:6].
Lucas has applied this mode of theorizing by constructing “make-believe economic systems” to the age-old question of what causes and constitutes business cycles. According to Lucas the standard for what that means is that one “exhibits understanding of business cycles by constructing a model in the most literal sense: a fully articulated artificial economy, which behaves through time so as to imitate closely the time series behavior of actual economies” [Lucas 1981:219].
To Lucas business cycles are an inherently systemic phenomenon, basically characterized by conditional co-variations of different time series. The vision is “the possibility of a unified explanation of business cycles, grounded in the general laws governing market economies, rather than in political or institutional characteristics specific to particular countries or periods” [Lucas 1981:218]. To be able to sustain this view and adopt his “equilibrium approach” he has to define the object of study in a very constrained way. Lucas asserts, e.g., that if one wants to get numerical answers “one needs an explicit, equilibrium account of the business cycles” [Lucas 1981:222]. But his arguments for why it necessarily has to be an equilibrium account are not very convincing. The main restriction is that Lucas only deals with purportedly invariable regularities “common to all decentralized market economies” [Lucas 1981:218]. Adopting this definition he can treat business cycles as all alike “with respect to the qualitative behavior of the co-movements among series” [Lucas 1981:218].
Postulating invariance paves the way for treating various economic entities as stationary stochastic processes (a standard assumption in most modern probabilistic econometric approaches) and for the possible application of “economic equilibrium theory.” The result is that the Lucas business cycle is a rather watered-down version of what is usually connoted when speaking of business cycles.
Based on the postulates of “self-interest” and “market clearing” Lucas has repeatedly stated that a pure equilibrium method is a necessary intelligibility condition and that disequilibria are somehow “arbitrary” and “unintelligible” [Lucas 1981:225]. Although these might (arguably) be requirements put on models, they are irrelevant and totally without justification vis-à-vis the real-world target system. Why should involuntary unemployment, for example, be considered an unintelligible disequilibrium concept? Given the lack of success of these models when empirically applied, what is unintelligible is rather to persist in this reinterpretation of the ups and downs in business cycles and labour markets as equilibria. To Keynes involuntary unemployment is not equatable to actors on the labour market becoming irrational non-optimizers. It is basically a reduction in the range of working options open to workers, regardless of any volitional optimality choices made on their part. Involuntary unemployment is excess supply of labour. That the unemployed in Lucas’s business cycle models can only be conceived of as having chosen leisure over work is not a substantive argument about real-world unemployment. Sometimes workers are not employed. That is a real phenomenon and not a “theoretical construct … the task of modern theoretical economics to ‘explain’” [Lucas 1981:243].
All economic theories somehow have to deal with the daunting questions of uncertainty and risk. They are “absolutely crucial for understanding business cycles” [Lucas 1981:223]. To be able to practice economics at all, “we need some way … of understanding which decision problem agents are solving” [Lucas 1981:223]. Lucas – in search of a “technical model-building principle” [Lucas 1981:1] – adopts the rational expectations view, according to which agents’ subjective probabilities, identified “with observed frequencies of the events to be forecast”, coincide with the “true” probabilities. This hypothesis [Lucas 1981:224]
will most likely be useful in situations in which the probabilities of interest concern a fairly well defined recurrent event, situations of ‘risk’ [where] behavior may be explainable in terms of economic theory … In cases of uncertainty, economic reasoning will be of no value … Insofar as business cycles can be viewed as repeated instances of essentially similar events, it will be reasonable to treat agents as reacting to cyclical changes as ‘risk’, or to assume their expectations are rational, that they have fairly stable arrangements for collecting and processing information, and that they utilize this information in forecasting the future in a stable way, free of systemic and easily correctable biases.
To me this seems much like putting the cart before the horse. Instead of adapting the model to the object – which from both ontological and epistemological considerations seems the natural thing to do – Lucas proceeds the opposite way and chooses to define his object and construct a model solely to suit his own methodological and theoretical preferences. All those – interesting and important – features of business cycles that have anything to do with model-theoretical openness, and that are therefore not possible to squeeze into the closure of the model, are excluded. One might rightly ask what is left of what we in a common-sense meaning refer to as business cycles. Einstein’s dictum – “everything should be made as simple as possible, but not simpler” – comes to mind. Lucas – and neoclassical economics at large – does not heed the implied apt warning.
The development of macro-econometrics has according to Lucas supplied economists with “detailed, quantitatively accurate replicas of the actual economy” thereby enabling us to treat policy recommendations “as though they had been experimentally tested” [Lucas 1981:220]. But if the goal of theory is to be able to make accurate forecasts this “ability of a model to imitate actual behavior” does not give much leverage. What is required is “invariance of the structure of the model under policy variations”. Parametric invariance in an economic model cannot be taken for granted, “but it seems reasonable to hope that neither tastes nor technology vary systematically” [Lucas 1981:220].
The model should enable us to pose counterfactual questions about what would happen if some variable were to change in a specific way. Hence the assumption of structural invariance, which purportedly enables the theoretical economist to do just that. But does it? Lucas appeals to “reasonable hope”, a rather weak justification for a modeler to apply such a far-reaching assumption. To warrant it one would expect an argument that this assumption – whether we conceive of it as part of a strategy of “isolation”, “idealization” or “successive approximation” – really establishes a useful relation that we can export or bridge to the target system, the “actual economy.” That argument is found neither in Lucas nor – to my knowledge – in the succeeding neoclassical refinements of his “necessarily artificial, abstract, patently ‘unreal’” analogue economies [Lucas 1981:271]. At most we get what Lucas himself calls “inappropriately maligned” casual empiricism in the form of “the method of keeping one’s eyes open.” That is far from sufficient to warrant any credibility in a model pretending to explain the complex and difficult recurrent phenomena we call business cycles. Providing an empirical “illustration” or a “story” to back up your model does not suffice. There are simply too many competing illustrations and stories that could be exhibited or told.
As Lucas has to admit – complaining about the less than ideal contact between theoretical economics and econometrics – even though the “stories” are (purportedly) getting better and better, “the necessary interaction between theory and fact tends not to take place” [Lucas 1981:11].
The basic assumption of this “precise and rigorous” model therefore cannot be considered anything other than an unsubstantiated conjecture as long as it is not supported by evidence from outside the theory or model. To my knowledge no decisive empirical evidence has ever been presented. This is all the more tantalizing since Lucas himself stresses that the presumption “seems a sound one to me, but it must be defended on empirical, not logical grounds” [Lucas 1981:12].
And applying a “Lucas critique” to Lucas’s own model, it is obvious that it too fails. Changing “policy rules” cannot just be presumed not to influence investment and consumption behavior and, a fortiori, technology, thereby contradicting the invariance assumption. Technology and tastes cannot live up to the status of an economy’s deep and structurally stable Holy Grail. They too are part and parcel of an ever-changing and open economy. Lucas’s hope of being able to model the economy as “a FORTRAN program” and “gain some confidence that the component parts of the program are in some sense reliable prior to running it” [Lucas 1981:288] therefore seems – from an ontological point of view – totally misdirected. The failure of the attempt to anchor the analysis in the allegedly stable deep parameters “tastes” and “technology” shows that if you neglect ontological considerations pertaining to the target system, reality ultimately kicks back when questions of bridging and exporting model exercises are at last laid on the table. No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in “modern mathematical form” [Lucas 1981:7], such model exercises do not push science forward a single millimeter if they do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about external validity.
Lucas, Robert (1981), Studies in Business-Cycle Theory. Oxford: Basil Blackwell.
– (1986), Adaptive Behavior and Economic Theory. In Hogarth, Robin & Reder, Melvin (eds) Rational Choice (pp. 217-242). Chicago: The University of Chicago Press.
– (1988), What Economists Do.
Syll, Lars (2016), On the use and misuse of theories and models in economics.
Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.
In the standard neoclassical consumption model — used in DSGE macroeconomic modeling — people are basically portrayed as treating time as a dichotomous phenomenon — today and the future — when making decisions and acting. How much should one consume today and how much in the future? Facing an intertemporal budget constraint of the form
ct + cf/(1+r) = ft + yt + yf/(1+r),
where ct is consumption today, cf is consumption in the future, ft is holdings of financial assets today, yt is labour incomes today, yf is labour incomes in the future, and r is the real interest rate, and having a lifetime utility function of the form
U = u(ct) + au(cf),
where a is the time discounting parameter, the representative agent (consumer) maximizes his utility when
u'(ct) = a(1+r)u'(cf).
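The step from the maximization problem to this first-order condition is worth spelling out. A minimal sketch (using the same symbols as above, with W denoting total wealth): substitute the budget constraint into the utility function and differentiate with respect to consumption today.

```latex
% Write wealth as W = f_t + y_t + y_f/(1+r), so the budget constraint
% gives c_f = (1+r)(W - c_t). The agent's problem becomes
\max_{c_t}\; u(c_t) + a\,u\bigl((1+r)(W - c_t)\bigr)
% Differentiating with respect to c_t and setting the derivative to zero:
u'(c_t) - a\,(1+r)\,u'(c_f) = 0
\quad\Longrightarrow\quad
u'(c_t) = a\,(1+r)\,u'(c_f).
```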
This expression – the Euler equation – implies that the representative agent (consumer) is indifferent at the margin between consuming one more unit today and consuming it tomorrow. Typically using a logarithmic functional form – u(c) = log c – which gives u'(c) = 1/c, the Euler equation can be rewritten as
1/ct = a(1+r)(1/cf),
cf/ct = a(1+r).
Importantly, this implies that according to the neoclassical consumption model, changes in the (real) interest rate and consumption move in the same direction. It also follows that consumption is invariant to the timing of taxes, since wealth — ft + yt + yf/(1+r) — has to be interpreted as the present discounted value net of taxes. And so, according to the assumption of Ricardian equivalence, the timing of taxes does not affect consumption, simply because the maximization problem as specified in the model is unchanged.
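The invariance claim can be checked directly inside the model. A minimal Python sketch (the parameter values a, r and the income/tax figures are made up purely for illustration): combining the log-utility Euler equation cf/ct = a(1+r) with the budget constraint ct + cf/(1+r) = W gives ct = W/(1+a) in closed form, and shifting a tax of equal present value between the two periods then leaves the optimal consumption plan unchanged.

```python
# Two-period consumption model with log utility: U = log(c_t) + a*log(c_f).
# The Euler equation c_f/c_t = a*(1+r) plus the budget constraint
# c_t + c_f/(1+r) = W yield the closed form c_t = W/(1+a).
# All numbers below are illustrative assumptions, not calibrated values.

def optimal_consumption(f_t, y_t, y_f, tax_t, tax_f, a=0.96, r=0.03):
    """Optimal (c_t, c_f) given assets, labour incomes and taxes in each period."""
    # Wealth is the present discounted value of resources net of taxes.
    w = f_t + (y_t - tax_t) + (y_f - tax_f) / (1 + r)
    c_t = w / (1 + a)             # closed-form solution from the Euler equation
    c_f = a * (1 + r) * c_t       # Euler equation: c_f/c_t = a(1+r)
    return c_t, c_f

# Tax of 10 levied today ...
plan_tax_now = optimal_consumption(f_t=10, y_t=50, y_f=60, tax_t=10, tax_f=0)
# ... versus debt financing: the same tax, deferred and repaid with interest.
plan_tax_later = optimal_consumption(f_t=10, y_t=50, y_f=60, tax_t=0, tax_f=10 * 1.03)

# In the model the two consumption plans coincide: Ricardian equivalence.
print(plan_tax_now)
print(plan_tax_later)
```

This, of course, only demonstrates that the equivalence holds within the model: it is built in by the net-wealth specification of the budget constraint, which is exactly the point being made above.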
That the theory doesn’t fit the facts we already knew.
And a couple of months ago, on Voxeu, Jonathan A. Parker summarized a series of studies empirically testing the theory, reconfirming how out of line with reality Ricardian equivalence is.
This only, again, underlines that there is, of course, no reason for us to believe in that fairy-tale. Ricardo himself — mirabile dictu — didn’t believe in Ricardian equivalence. In Essay on the Funding System (1820) he wrote:
But the people who paid the taxes never so estimate them, and therefore do not manage their private affairs accordingly. We are too apt to think that the war is burdensome only in proportion to what we are at the moment called to pay for it in taxes, without reflecting on the probable duration of such taxes. It would be difficult to convince a man possessed of £20,000, or any other sum, that a perpetual payment of £50 per annum was equally burdensome with a single tax of £1000.
And as one Nobel laureate had it:
Ricardian equivalence is taught in every graduate school in the country. It is also sheer nonsense.
Joseph E. Stiglitz, twitter