The reality of how money is created

27 August, 2016 at 15:32 | Posted in Economics | 2 Comments

Everything we know is not just wrong – it’s backwards. When banks make loans, they create money. This is because money is really just an IOU. The role of the central bank is to preside over a legal order that effectively grants banks the exclusive right to create IOUs of a certain kind, ones that the government will recognise as legal tender by its willingness to accept them in payment of taxes.

There’s really no limit on how much banks could create, provided they can find someone willing to borrow it. They will never get caught short, for the simple reason that borrowers do not, generally speaking, take the cash and put it under their mattresses; ultimately, any money a bank loans out will just end up back in some bank again. So for the banking system as a whole, every loan just becomes another deposit. What’s more, insofar as banks do need to acquire funds from the central bank, they can borrow as much as they like; all the latter really does is set the rate of interest, the cost of money, not its quantity. Since the beginning of the recession, the US and British central banks have reduced that cost to almost nothing. In fact, with “quantitative easing” they’ve been effectively pumping as much money as they can into the banks, without producing any inflationary effects.

What this means is that the real limit on the amount of money in circulation is not how much the central bank is willing to lend, but how much government, firms, and ordinary citizens, are willing to borrow. Government spending is the main driver in all this … So there’s no question of public spending “crowding out” private investment. It’s exactly the opposite.

David Graeber

Sounds odd, doesn’t it?

This guy must surely be one of those strange and dangerous heterodox cranks?

Well, maybe you should reconsider …

The reality of how money is created today differs from the description found in some economics textbooks:
• Rather than banks receiving deposits when households save and then lending them out, bank lending creates deposits.
• In normal times, the central bank does not fix the amount of money in circulation, nor is central bank money ‘multiplied up’ into more loans and deposits …
Most of the money in circulation is created, not by the printing presses of the Bank of England, but by the commercial banks themselves: banks create money whenever they lend to someone in the economy or buy an asset from consumers. And in contrast to descriptions found in some textbooks, the Bank of England does not directly control the quantity of either base or broad money. The Bank of England is nevertheless still able to influence the amount of money in the economy. It does so in normal times by setting monetary policy — through the interest rate that it pays on reserves held by commercial banks with the Bank of England. More recently, though, with Bank Rate constrained by the effective lower bound, the Bank of England’s asset purchase programme has sought to raise the quantity of broad money in circulation. This in turn affects the prices and quantities of a range of assets in the economy, including money.

Michael McLeay, Amar Radia and Ryland Thomas
Bank of England’s Monetary Analysis Directorate
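
The ‘loans create deposits’ mechanism described above can be sketched as a toy balance-sheet exercise. Everything here — the bank, the function names, the amounts — is an illustrative assumption, not a model of any actual bank’s operations:

```python
# Toy sketch of "bank lending creates deposits": when a bank lends,
# it credits the borrower's deposit account, so both sides of its
# balance sheet grow by the same amount. No pre-existing deposit is
# needed. All figures are illustrative.

def make_bank():
    return {"loans": 0.0, "reserves": 0.0, "deposits": 0.0}

def extend_loan(bank, amount):
    bank["loans"] += amount     # asset side: a new claim on the borrower
    bank["deposits"] += amount  # liability side: newly created money

bank = make_bank()
extend_loan(bank, 1000.0)

# Nothing was debited from reserves or from anyone else's account:
# the deposit came into existence alongside the loan.
assert bank["loans"] == bank["deposits"] == 1000.0
```

Note that reserves are untouched: the deposit is created alongside the loan, which is exactly the accounting point the Bank of England authors make against the textbook loanable-funds story.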

The real debt problem

26 August, 2016 at 19:31 | Posted in Economics | 5 Comments

One of the most effective ways of clearing up this most serious of all semantic confusions is to point out that private debt differs from national debt in being external. It is owed by one person to others. That is what makes it burdensome. Because it is interpersonal the proper analogy is not to national debt but to international debt…. But this does not hold for national debt which is owed by the nation to citizens of the same nation. There is no external creditor. We owe it to ourselves.

A variant of the false analogy is the declaration that national debt puts an unfair burden on our children, who are thereby made to pay for our extravagances. Very few economists need to be reminded that if our children or grandchildren repay some of the national debt these payments will be made to our children or grandchildren and to nobody else. Taking them altogether they will no more be impoverished by making the repayments than they will be enriched by receiving them.

Abba Lerner The Burden of the National Debt (1948)

Few issues in politics and economics are nowadays more discussed – and less understood – than public debt. Many raise their voices to urge a reduction of the debt, but few explain why or in what way reducing the debt would be conducive to a better economy or a fairer society. And there are no limits to all the – especially macroeconomic – calamities and evils a large public debt is supposed to result in: unemployment, inflation, higher interest rates, lower productivity growth, increased burdens for subsequent generations, etc.

People usually care a lot about public sector budget deficits and debts, and are as a rule worried and negative. Drawing analogies from their own household’s economy, they see debt as a sign of an imminent risk of default and hence a source of reprobation. But although no one can doubt the political and economic significance of public debt, there is no unanimity whatsoever among economists as to whether debt matters, and if so, why and in what way. Even less is known about what the “optimal” size of public debt would be.

Through history public debts have gone up and down, often expanding in periods of war or large changes in basic infrastructure and technologies, and then going down in periods when things have settled down.

The pros and cons of public debt have been put forward for as long as the phenomenon itself has existed, but it has, notwithstanding that, not been possible to reach anything close to consensus on the issue — at least not in a long time-horizon perspective. As a rule it has not even been possible to agree on whether public debt is a problem, and if so, when it is one and how best to tackle it. Some of the more prominent reasons for this non-consensus are the complexity of the issue, the mingling of vested interests, ideology, psychological fears, and the uncertainty of calculating and estimating inter-generational effects.

In the mercantilist era public debt was as a rule considered positive (cf. Berkeley, Melon, de Pinto), a view that was later repeated in the 19th century by, e.g., the economists Adolf Wagner, Lorenz von Stein and Carl Dietzel. The state’s main aim was to control and distribute the resources of the nation, often through regulations and forceful state interventions. Increased public debt would, through the circulation of money and credit, increase the amount of capital and contribute to the wealth of nations. Public debt was basically considered something that was moved from “the right hand to the left hand.” The economy simply needed a state that was prepared to borrow substantial amounts of money, issue financial paper, and incur indebtedness in the process.

There was also a clear political dimension to the issue, and some authors were clearly aware that government loan and debt activities could have a politically stabilizing effect. Investors had a vested interest in stable governments (low interest rates and low risk premiums) and so were instinctively loyal to the government.

In classical economics — following in the footsteps of David Hume — especially Adam Smith, David Ricardo, and Jean-Baptiste Say put forward views on public debt that were more negative. The good budget was a balanced budget. If government borrowed money to finance its activities, it would only give rise to a “crowding out” of private enterprise and investment. The state was generally considered incapable of paying its debts, and the real burden would therefore essentially fall on the taxpayers, who ultimately had to pay for the irresponsibility of government. The moral character of the argumentation was a salient feature — “either the nation must destroy public credit, or the public credit will destroy the nation” (Hume 1752).

Later on, in the 20th century, economists like John Maynard Keynes, Abba Lerner and Alvin Hansen would again hold a more positive view of public debt. Public debt was normally nothing to fear, especially if it was financed within the country itself (but even foreign loans could be beneficial for the economy if invested in the right way). Some members of society would hold bonds and earn interest on them, while others would have to pay the taxes that ultimately paid the interest on the debt. But the debt was not considered a net burden for society as a whole, since the debt cancelled itself out between the two groups. If the state could issue bonds at a low interest rate, unemployment could be reduced without necessarily resulting in strong inflationary pressure. And the inter-generational burden was no real burden according to this group of economists, since — if the debt was used in a suitable way — future generations would, through its effects on investments and employment, actually be net winners. There could, of course, be unwanted negative distributional side effects for future generations, but that was mostly considered a minor problem since (Lerner 1948) “if our children or grandchildren repay some of the national debt these payments will be made to our children and grandchildren and to nobody else.”

Central to this Keynesian-influenced view is the fundamental difference between private and public debt. Conflating the one with the other is an example of the atomistic fallacy, which is basically a variation on Keynes’ savings paradox. If an individual tries to save and cut down on debt, that may be fine and rational, but if everyone tries to do it, the result is lower aggregate demand and increasing unemployment for the economy as a whole.
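
The savings paradox can be made concrete with a bare-bones Keynesian-cross calculation. The parameter values below (autonomous consumption, investment, the marginal propensity to consume) are arbitrary assumptions chosen only to make the arithmetic visible:

```python
# Minimal Keynesian-cross illustration of the savings paradox:
# when everyone tries to save more (a lower marginal propensity to
# consume, mpc), equilibrium income falls, while realized aggregate
# saving stays pinned at the fixed level of investment.

def equilibrium_income(autonomous_c, investment, mpc):
    # Goods-market equilibrium: Y = C + I with C = autonomous_c + mpc * Y,
    # so Y = (autonomous_c + investment) / (1 - mpc).
    return (autonomous_c + investment) / (1.0 - mpc)

autonomous_c, investment = 50.0, 100.0

for mpc in (0.8, 0.7):  # lower mpc = households trying to save more
    y = equilibrium_income(autonomous_c, investment, mpc)
    realized_saving = y - (autonomous_c + mpc * y)
    print(f"mpc={mpc}: income={y:.0f}, realized saving={realized_saving:.0f}")
```

Income falls (here from 750 to 500) when the attempted saving rate rises, but realized aggregate saving is unchanged at the level of investment: thrift at the individual level does not translate into thrift for the economy as a whole.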

An individual always has to pay his debts. But a government can always pay back old debts with new ones, through the issue of new bonds. The state is not like an individual. Public debt is not like private debt. Government debt is essentially a debt to itself, its citizens. The interest on the debt is paid by the taxpayers on the one hand, and on the other hand goes to those who have lent out the money by buying the bonds that finance the debt.
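
The two claims in that paragraph — that old bonds can be retired by issuing new ones, and that interest flows net out within the nation — can be checked with stylized accounting. The principal and the interest rate below are illustrative assumptions:

```python
# Stylized accounting for an internally held public debt: interest is
# a transfer between two groups of citizens, and maturing bonds are
# rolled over with new issues. Figures are illustrative assumptions.

debt, rate = 1000.0, 0.03    # assumed principal and coupon rate
taxpayers_position = 0.0     # cumulative taxes levied to service the debt
bondholders_position = 0.0   # cumulative interest received on the bonds

for year in range(5):
    interest = debt * rate
    taxpayers_position -= interest    # paid by one group of citizens ...
    bondholders_position += interest  # ... received by another group
    maturing = debt
    debt = maturing                   # rolled over: new bonds repay old ones

# For the nation as a whole the interest payments cancel out, and the
# principal is never "paid off" out of anyone's pocket.
assert taxpayers_position + bondholders_position == 0.0
assert debt == 1000.0
```

This is of course only the internal-debt case Lerner describes; externally held debt, as the quote at the top of the post notes, is a different matter.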

Abba Lerner’s essay Functional Finance and the Federal Debt set out guiding principles for governments to adopt in their efforts to use economic – especially fiscal – policies to maintain full employment and prosperity in economies struggling with chronic problems of maintaining sufficient aggregate demand.

According to Lerner’s Functional Finance principles, the private sector has a tendency not to generate enough demand on its own, and because of this inherent deficiency modern states tend to have structural and long-lasting problems maintaining full employment. The government therefore has to take on the responsibility for making sure that full employment is attained. The main instrument for doing this is open market operations – especially the selling and buying of interest-bearing government bonds.

Although Lerner seems to have held the view that the ideas embedded in Functional Finance were in principle applicable to all kinds of economies, he also recognized the importance of institutional arrangements in shaping its feasibility and practical implementation.

Functional Finance critically depends on nation states being able to tax their citizens and to have a currency — and bonds — of their own. As has become transparently clear during the Great Recession, the EMU has not been able to impose those structures, since, as Hayek noted already back in 1939, “government by agreement is only possible provided that we do not require the government to act in fields other than those in which we can obtain true agreement.” The monetary institutional structure of the EMU makes it highly unlikely – not to say impossible — that it will ever become a “system” in which Functional Finance is adopted.

To Functional Finance, the choices governments make in financing public deficits — and the concomitant debts — were important, since bond-based financing was considered more expansionary than financing through taxes. According to Lerner, the purpose of public debt is to achieve a rate of interest that results in investments making full employment feasible. In the short run this could result in deficits, but he firmly maintained that there was no reason to assume that applying Functional Finance to maintain full employment implied that the government always had to borrow money and increase the public debt. An application of Functional Finance would have a tendency to balance the budget in the long run, since the guarantee of permanent full employment would make private investment much more attractive, and a fortiori the greater private investment would diminish the need for deficit spending.

To both Keynes and Lerner it was evident that the state had the ability to promote full employment and a stable price level – and that it should use its powers to do so. If that meant that it had to take on debt and (more or less temporarily) underbalance its budget – so be it! Public debt is neither good nor bad. It is a means of achieving two over-arching macroeconomic goals – full employment and price stability. Neither a balanced budget nor a reduction of public debt is sacred per se, irrespective of the effects on these macroeconomic goals. If “sound finance”, austerity and balanced budgets mean increased unemployment and destabilized prices, they have to be abandoned.

Against this reasoning, exponents of the thesis of Ricardian equivalence have maintained that whether the public sector finances its expenditures through taxes or by issuing bonds is inconsequential, since bonds must sooner or later be repaid by raising taxes in the future.

Robert Barro (1974) attempted to give the proposition a firm theoretical foundation, arguing that the substitution of a budget deficit for current taxes has no impact on aggregate demand and so budget deficits and taxation have equivalent effects on the economy.

If the public sector runs extra spending through deficits, taxpayers will, according to the hypothesis, anticipate that they will have to pay higher taxes in the future — and will therefore increase their savings and reduce their current consumption to be able to do so. The consequence is that aggregate demand would be no different from what it would be if taxes were raised today.

Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.
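
The equivalence claim boils down to two-period arithmetic. The spending level and interest rate below are assumed purely for illustration:

```python
# Two-period arithmetic sketch of the Ricardian-equivalence claim.
# The interest rate and government outlay are illustrative assumptions.

import math

r, spending = 0.05, 100.0   # assumed interest rate and government outlay

# Scheme A: tax-financed — households pay the full bill today.
pv_taxes_now = spending

# Scheme B: bond-financed — households pay spending * (1 + r) next
# period, which a rational, forward-looking household discounts back
# at the same rate r.
pv_taxes_later = spending * (1 + r) / (1 + r)

# The present value of the tax bill is identical under both schemes,
# so lifetime consumption possibilities — and hence aggregate demand —
# are claimed to be unchanged.
assert math.isclose(pv_taxes_now, pv_taxes_later)
```

The criticism in the surrounding text is precisely that the real-world preconditions for this arithmetic — perfect capital markets, relevant planning horizons, rational expectations — do not hold, so the neat cancellation above never happens outside the model.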

The Ricardo-Barro hypothesis, with its view of public debt incurring a burden for future generations, is the dominant view among mainstream economists and politicians today. The rational actors populating the model are assumed to know that today’s debts are tomorrow’s taxes. But one of the main problems with this standard neoclassical theory is that it doesn’t fit the facts.

From a more theoretical point of view, one may also strongly criticize the Ricardo-Barro model and its concomitant crowding-out assumption: perfect capital markets do not exist, repayments of public debt can take place far into the future, and it is dubious whether we really care for generations 300 years from now.

At times when economic theories have been in favour of public debt, the more or less explicit assumption has been that public expenditures are useful and good for the economy, since they work as an important — and often necessary — injection into the economy, creating wealth and employment. At times when economic theories have been against public debt, the basic assumption seems to be that public expenditures are useless, only crowd out private initiatives, and have no positive net effect on the economy.

Wolfgang Streeck argues in Buying Time: The Delayed Crisis of Democratic Capitalism (2014) for an interpretation of the more or less steady increase in public debt since the 1970s as a sign of a transformation of the tax state (Schumpeter) into a debt state. In his perspective public debt is both an indicator and a causal factor in the relationship between political and economic systems. The ultimate cause behind the increased public debt is the long-run decline in economic growth, resulting in a doubling of the average public debt in OECD countries over the last 40 years. This has put strong pressures on modern capitalist states, and parallel to this, income inequality has increased in most countries. This is, according to Streeck, one manifestation of a neoliberal revolution – with its emphasis on supply-side politics, austerity policies and financial deregulation — in which democratic-redistributive intervention has become ineffectual.

Today there seems to be a rather widespread consensus that public debt is acceptable as long as it doesn’t increase too much and too fast: if the public debt-to-GDP ratio rises above some threshold X %, the likelihood of a debt crisis and/or lower growth is held to increase.

But in discussing the margins within which public debt is feasible, the focus is solely on the upper limit of indebtedness, and very few ask whether there might also be a problem if public debt becomes too low.

The government’s ability to conduct an “optimal” public debt policy may be hampered if public debt becomes too small. A well-functioning secondary market in government bonds requires sufficient turnover and liquidity; if these become too small, increased volatility and uncertainty will in the long run raise borrowing costs. Ultimately there is even a risk that market makers disappear, leaving bond market trading to be conducted solely through brokered deals. As a precautionary measure against this eventuality it may be argued – especially in times of financial turmoil and crisis — that it is necessary to increase government borrowing and debt to ensure – in the longer run – good borrowing preparedness and a sustained (government) bond market.

The failure of successive administrations in most developed countries to embark on any vigorous policy aimed at bringing down unconscionably high levels of unemployment has been due in no small measure to a ‘viewing with alarm’ of the size of the national debts, often alleged to be already excessive, or at least threatening to become so, and by ideologically urged striving toward ‘balanced’ government budgets without any consideration of whether such debts and deficits are or threaten to become excessive in terms of some determinable impact on the real general welfare. If they are examined in the light of their impact on welfare, however, they can usually be shown to be well below their optimum levels, let alone at levels that could have dire consequences.

To view government debts in terms of the ‘functional finance’ concept introduced by Abba Lerner, is to consider their role in the macroeconomic balance of the economy. In simple, bare bones terms, the function of government debts that is significant for the macroeconomic health of an economy is that they provide the assets into which individuals can put whatever accumulated savings they attempt to set aside in excess of what can be wisely invested in privately owned real assets. A debt that is smaller than this will cause the attempted excess savings, by being reflected in a reduced level of consumption outlays, to be lost in reduced real income and increased unemployment.

William Vickrey

Lucas’ Copernican revolution — nonsense on stilts

24 August, 2016 at 19:18 | Posted in Economics | 3 Comments

In Michel De Vroey’s version of the history of macroeconomics, Robert Lucas’ declaration of the need for macroeconomics to be pursued only within ‘equilibrium discipline’, with equilibrium declared to exist as a postulate, is hailed as a ‘Copernican revolution.’ Equilibrium is not to be considered something that characterises real economies, but rather ‘a property of the way we look at things.’ De Vroey — approvingly — notices that this — as well as Lucas’ banning of disequilibrium as referring to ‘unintelligible behaviour’ — ‘amounts to shrinking the pretence of equilibrium theory.’

Mirabile dictu!

Is it really a feasible methodology for economists to make a sharp divide between theory and reality, and then — like De Vroey and Lucas — treat the divide as something recommendable and good? I think not.

Fortunately there are other economists with a less devoted hagiographic attitude towards Lucas and his nonsense on stilts.

Alessandro Vercelli is one:

The equilibria analysed by Lucas are conceived as stationary stochastic processes. The fact that they are stationary imposes a long series of restrictive hypotheses on the range of applicability of the heuristic model, and these considerably reduce the empirical usefulness of Lucas’s equilibrium method …

For such a method to make sense … the stationary ‘equilibrium’ stochastic process must also be ‘dynamically stable,’ or ‘ergodic,’ in the terminology of stochastic processes …

What is worse, if one adopts Lucas’s method of pure equilibrium implying the non-intelligibility of disequilibrium positions, there is no way to argue about the robustness of the alternative equilibria under consideration. In other words, Lucas’s heuristic model, not to mention the analytical models built according to his instructions, prove to be useless for the very purpose for which they were primarily constructed — the evaluation of alternative economic policies.
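
The stationarity/ergodicity requirement Vercelli invokes can be illustrated with a small standard-library simulation (parameters are illustrative): for a stationary AR(1) process the time average converges to the ensemble mean, while for a unit-root random walk it does not settle down at all:

```python
# Ergodic vs non-ergodic: time average of x[t] = phi * x[t-1] + e[t]
# with e ~ N(0, 1). For |phi| < 1 the process is stationary and
# ergodic, so its time average hugs the ensemble mean of zero; for
# phi = 1 (a random walk) the time average is itself a random
# variable that never converges. Parameters are illustrative.

import random

def time_average(phi, n=200_000, seed=42):
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        total += x
    return total / n

# Stationary AR(1): close to the ensemble mean of zero, so
# 'equilibrium discipline' has something stable to latch onto.
print("AR(1) time average:", time_average(phi=0.5))

# Random walk: the time average wanders and depends on the sample path.
print("random-walk time average:", time_average(phi=1.0))
```

Vercelli’s point is that Lucas’s method quietly presupposes the first kind of world; in the second, the ‘equilibrium’ being postulated has no stable empirical counterpart for the model to approximate.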

Another one is Roman Frydman, Professor of Economics at New York University and a long-time critic of the rational expectations hypothesis. In his seminal 1982 American Economic Review article Towards an Understanding of Market Processes: Individual Expectations, Learning, and Convergence to Rational Expectations Equilibrium — an absolute must-read for anyone with a serious interest in understanding what the issues are in the present discussion on rational expectations as a modeling assumption — he showed that the kind of models that Lucas recommends — models founded on ‘equilibrium discipline’ and the rational expectations hypothesis — are inadequate as representations of economic agents’ decision making.

Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As yours truly has tried to show in On the use and misuse of theories and models in economics, there is really no support for this conviction at all. For if this microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model building is little more than hand waving that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

In one of their latest books on rational expectations, Roman Frydman and his colleague Michael Goldberg write:

The belief in the scientific stature of fully predetermined models, and in the adequacy of the Rational Expectations Hypothesis to portray how rational individuals think about the future, extends well beyond asset markets. Some economists go as far as to argue that the logical consistency that obtains when this hypothesis is imposed in fully predetermined models is a precondition of the ability of economic analysis to portray rationality and truth.

For example, in a well-known article published in The New York Times Magazine in September 2009, Paul Krugman (2009, p. 36) argued that Chicago-school free-market theorists “mistook beauty . . . for truth.” One of the leading Chicago economists, John Cochrane (2009, p. 4), responded that “logical consistency and plausible foundations are indeed ‘beautiful’ but to me they are also basic preconditions for ‘truth.’” Of course, what Cochrane meant by plausible foundations were fully predetermined Rational Expectations models. But, given the fundamental flaws of fully predetermined models, focusing on their logical consistency or inconsistency, let alone that of the Rational Expectations Hypothesis itself, can hardly be considered relevant to a discussion of the basic preconditions for truth in economic analysis, whatever “truth” might mean.

There is an irony in the debate between Krugman and Cochrane. Although the New Keynesian and behavioral models, which Krugman favors, differ in terms of their specific assumptions, they are every bit as mechanical as those of the Chicago orthodoxy. Moreover, these approaches presume that the Rational Expectations Hypothesis provides the standard by which to define rationality and irrationality.

In fact, the Rational Expectations Hypothesis requires no assumptions about the intelligence of market participants whatsoever … Rather than imputing superhuman cognitive and computational abilities to individuals, the hypothesis presumes just the opposite: market participants forgo using whatever cognitive abilities they do have. The Rational Expectations Hypothesis supposes that individuals do not engage actively and creatively in revising the way they think about the future. Instead, they are presumed to adhere steadfastly to a single mechanical forecasting strategy at all times and in all circumstances. Thus, contrary to widespread belief, in the context of real-world markets, the Rational Expectations Hypothesis has no connection to how even minimally reasonable profit-seeking individuals forecast the future in real-world markets. When new relationships begin driving asset prices, they supposedly look the other way, and thus either abjure profit-seeking behavior altogether or forgo profit opportunities that are in plain sight.

Beyond Mechanical Markets

And in a recent article the same authors write:

Contemporary economists’ reliance on mechanical rules to understand – and influence – economic outcomes extends to macroeconomic policy as well, and often draws on an authority, John Maynard Keynes, who would have rejected their approach. Keynes understood early on the fallacy of applying such mechanical rules. “We have involved ourselves in a colossal muddle,” he warned, “having blundered in the control of a delicate machine, the working of which we do not understand.”

In The General Theory of Employment, Interest, and Money, Keynes sought to provide the missing rationale for relying on expansionary fiscal policy to steer advanced capitalist economies out of the Great Depression. But, following World War II, his successors developed a much more ambitious agenda. Instead of pursuing measures to counter excessive fluctuations in economic activity, such as the deep contraction of the 1930’s, so-called stabilization policies focused on measures that aimed to maintain full employment. “New Keynesian” models underpinning these policies assumed that an economy’s “true” potential – and thus the so-called output gap that expansionary policy is supposed to fill to attain full employment – can be precisely measured.

But, to put it bluntly, the belief that an economist can fully specify in advance how aggregate outcomes – and thus the potential level of economic activity – unfold over time is bogus …

Roman Frydman & Michael Goldberg

The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place – instead of simply conjuring the problem away à la Lucas by assuming equilibrium and rational expectations, and treating uncertainty as if it were possible to reduce it to stochastic risk. That is scientific cheating. And it has been going on for too long now.

Microfoundations — contestable incoherence

24 August, 2016 at 15:46 | Posted in Economics | 2 Comments

Defenders of microfoundations and its rational expectations equipped representative agent’s intertemporal optimization often argue as if sticking with simple representative agent macroeconomic models doesn’t impart a bias to the analysis. I unequivocally reject that unsubstantiated view, and have given the reasons why here.

These defenders often also maintain that there are no methodologically coherent alternatives to microfoundations modeling. That allegation is of course difficult to evaluate, substantially hinging on how coherence is defined. But one thing I do know is that the kind of microfoundationalist macroeconomics that New Classical economists and “New Keynesian” economists are pursuing is not methodologically coherent according to the standard coherence definition (see e.g. here). And that ought to be rather embarrassing for those ilks of macroeconomists to whom axiomatics and deductivity are the hallmark of science tout court.

The fact that Lucas introduced rational expectations as a consistency axiom is not really an argument for why we should accept it as an acceptable assumption in a theory or model purporting to explain real macroeconomic processes (see e.g. here). And although virtually any macroeconomic empirical claim is contestable, so is any claim in micro (see e.g. here).

Robert Lucas’ warped methodology

23 August, 2016 at 19:43 | Posted in Economics | Leave a comment

Economic theory, like anthropology, ‘works’ by studying societies which are in some relevant sense simpler or more primitive than our own, in the hope either that relations that are important but hidden in our society will be laid bare in simpler ones, or that concrete evidence can be discovered for possibilities which are open to us which are without precedent in our own history. Unlike anthropologists, however, economists simply invent the primitive societies we study, a practice which frees us from limiting ourselves to societies which can be physically visited as sparing us the discomforts of long stays among savages. This method of society-invention is the source of the utopian character of economics; and of the mix of distrust and envy with which we are viewed by our fellow social scientists. The point of studying wholly fictional, rather than actual societies, is that it is relatively inexpensive to subject them to external forces of various types and observe the way they react. If, subjected to forces similar to those acting on actual societies, the artificial society reacts in a similar way, we gain confidence that there are useable connections between the invented society and the one we really care about.

Robert Lucas

Although neither yours truly, nor anthropologists (I guess), will recognise anything in this description even remotely reminiscent of practices actually used in real sciences, this quote still gives a very good picture of Lucas’ warped methodology.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

The implications that follow from the kind of models that people like Robert Lucas — according to Ed Prescott, ‘the master of methodology’ — construct are always conditional on the simplifying assumptions used — assumptions predominantly of a rather far-reaching and non-empirical character with little resemblance to features of the real world. From a descriptive point of view there is a fortiori usually very little resemblance between the models used and the empirical world. ‘As if’ explanations building on such foundations are not really explanations at all, since they always build conditionally on hypothesized law-like theorems and situation-specific restrictive assumptions. The empirical-descriptive inaccuracy of the models makes it more or less miraculous if they should — in any substantive way — be considered explanatory at all. If the assumptions made are known to be descriptively totally unrealistic (think of, e.g., ‘rational expectations’), they are of course likewise totally worthless for making empirical inductions. Assuming — as Lucas does — that people behave ‘as if’ they were rational FORTRAN-programmed computers doesn’t take us far when we know that the ‘if’ is false.

The obvious shortcoming of a basically epistemic — rather than ontological — approach such as ‘successive approximations’ and ‘as if’ modeling assumptions, is that ‘similarity’, ‘analogy’ or ‘resemblance’ tout court do not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the successive ‘as if’ approximations do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), they are nothing more than ‘substitute systems’ that do not bridge to the world but rather miss their target.

Economics building on the kind of modeling strategy that Lucas represents does not produce science.

It’s nothing but pseudo-scientific cheating.

The thrust of this realist rhetoric is the same both at the scientific and at the meta-scientific levels. It is that explanatory virtues need not be evidential virtues. It is that you should feel cheated by “The world is as if T were true”, in the same way as you should feel cheated by “The stars move as if they were fixed on a rotating sphere”. Realists do feel cheated in both cases.

Alan Musgrave

Contrary to what some überimpressed macroeconomists seem to argue, I would say that the recent economic crisis, and the fact that Chicago economics has had next to nothing to contribute to understanding it, show that Lucas and his New Classical economics — in Lakatosian terms — constitute a degenerative research program in dire need of replacement.

Mainstream economic theory has long been in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

In business cycles theory these models are constructed with the purpose of showing that changes in the supply of money “have the capacity to induce depressions or booms” [Lucas 1988:3] not just in these models, but also in real economies. To do so economists are supposed to imagine subjecting their models to some kind of “operational experiment” and “a variety of reactions”. “In general, I believe that one who claims to understand the principles of flight can reasonably be expected to be able to make a flying machine, and that understanding business cycles means the ability to make them too, in roughly the same sense” [Lucas 1981:8]. To Lucas models are the laboratories of economic theories, and after having made a simulacrum-depression Lucas hopes we find it “convincing on its own terms – that what I said would happen in the [model] as a result of my manipulation would in fact happen” [Lucas 1988:4]. The clarity with which the effects are seen is considered “the key advantage of operating in simplified, fictional worlds” [Lucas 1988:5].

On the flipside lies the fact that “we are not really interested in understanding and preventing depressions in hypothetical [models]. We are interested in our own, vastly more complicated society” [Lucas 1988:5]. But how do we bridge the gulf between model and “target system”? According to Lucas we have to be willing to “argue by analogy from what we know about one situation to what we would like to know about another, quite different situation” [Lucas 1988:5]. Progress lies in the pursuit of the ambition to “tell better and better stories” [Lucas 1988:5], simply because that is what economists do.

We are storytellers, operating much of the time in worlds of make believe. We do not find that the realm of imagination and ideas is an alternative to, or retreat from, practical reality. On the contrary, it is the only way we have found to think seriously about reality. In a way, there is nothing more to this method than maintaining the conviction … that imagination and ideas matter … there is no practical alternative [Lucas 1988:6].

Lucas has applied this mode of theorizing by constructing “make-believe economic systems” to the age-old question of what causes and constitutes business cycles. According to Lucas the standard for what that means is that one “exhibits understanding of business cycles by constructing a model in the most literal sense: a fully articulated artificial economy, which behaves through time so as to imitate closely the time series behavior of actual economies” [Lucas 1981:219].

To Lucas the business cycle is an inherently systemic phenomenon basically characterized by conditional co-variations of different time series. The vision is “the possibility of a unified explanation of business cycles, grounded in the general laws governing market economies, rather than in political or institutional characteristics specific to particular countries or periods” [Lucas 1981:218]. To be able to sustain this view and adopt his “equilibrium approach” he has to define the object of study in a very constrained way. Lucas asserts, e.g., that if one wants to get numerical answers “one needs an explicit, equilibrium account of the business cycles” [Lucas 1981:222]. But his arguments for why it necessarily has to be an equilibrium account are not very convincing. The main restriction is that Lucas only deals with purportedly invariable regularities “common to all decentralized market economies” [Lucas 1981:218]. Adopting this definition he can treat business cycles as all alike “with respect to the qualitative behavior of the co-movements among series” [1981:218].

Postulating invariance paves the way for treating various economic entities as stationary stochastic processes (a standard assumption in most modern probabilistic econometric approaches) and for the possible application of “economic equilibrium theory.” The result is that the Lucas business cycle is a rather watered-down version of what is usually connoted when speaking of business cycles.

Based on the postulates of “self-interest” and “market clearing” Lucas has repeatedly stated that a pure equilibrium method is a necessary intelligibility condition and that disequilibria are somehow “arbitrary” and “unintelligible” [Lucas 1981:225]. Although these might (arguably) be requirements put on models, the requirements are irrelevant and totally without justification vis-à-vis the real-world target system. Why should involuntary unemployment, for example, be considered an unintelligible disequilibrium concept? Given the lack of success of these models when empirically applied, what is unintelligible is rather to persist in this reinterpretation of the ups and downs in business cycles and labour markets as equilibria. To Keynes involuntary unemployment is not equatable to actors on the labour market becoming irrational non-optimizers. It is basically a reduction in the range of working-options open to workers, regardless of any volitional optimality choices made on their part. Involuntary unemployment is excess supply of labour. That the unemployed in Lucas’s business cycle models can only be conceived of as having chosen leisure over work is not a substantive argument about real-world unemployment. Sometimes workers are not employed. That is a real phenomenon and not a “theoretical construct … the task of modern theoretical economics to ‘explain’” [Lucas 1981:243].

All economic theories have to somehow deal with the daunting question of uncertainty and risk. It is “absolutely crucial for understanding business cycles” [Lucas 1981:223]. To be able to practice economics at all, “we need some way … of understanding which decision problem agents are solving” [Lucas 1981:223]. Lucas — in search of a “technical model-building principle” [Lucas 1981:1] — adopts the rational expectations view, according to which agents’ subjective probabilities are identified “with observed frequencies of the events to be forecast” and thus coincide with the “true” probabilities. This hypothesis [Lucas 1981:224]

will most likely be useful in situations in which the probabilities of interest concern a fairly well defined recurrent event, situations of ‘risk’ [where] behavior may be explainable in terms of economic theory … In cases of uncertainty, economic reasoning will be of no value … Insofar as business cycles can be viewed as repeated instances of essentially similar events, it will be reasonable to treat agents as reacting to cyclical changes as ‘risk’, or to assume their expectations are rational, that they have fairly stable arrangements for collecting and processing information, and that they utilize this information in forecasting the future in a stable way, free of systemic and easily correctable biases.
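The risk/uncertainty distinction Lucas draws here can be made concrete with a small simulation — my own illustration, nothing in Lucas. Identifying subjective probabilities with observed frequencies works for a stationary, recurrent process (“risk”), but is systematically misleading the moment the process itself changes (here a crude stand-in for “uncertainty”):

```python
import random

# A sketch of my own (not anything in Lucas): an agent who identifies
# subjective probabilities with observed frequencies -- rational-expectations
# style -- does well under "risk" (a stationary process) but is systematically
# misled when the process is nonstationary.
random.seed(1)

def frequency_forecast(history):
    # Estimated probability of a boom next period = observed frequency so far.
    return sum(history) / len(history)

# Risk: booms occur with a fixed true probability of 0.7.
stationary = [1 if random.random() < 0.7 else 0 for _ in range(10000)]
print(round(frequency_forecast(stationary), 2))  # close to 0.70

# Uncertainty (proxied by nonstationarity): the true probability
# drifts from 0.9 down to 0.1 over the sample.
drifting = [1 if random.random() < 0.9 - 0.8 * t / 9999 else 0 for t in range(10000)]
print(round(frequency_forecast(drifting), 2))  # close to 0.50, while the current truth is 0.10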

To me this seems much like putting the cart before the horse. Instead of adapting the model to the object — which from both ontological and epistemological considerations seems the natural thing to do — Lucas proceeds in the opposite way and chooses to define his object and construct a model solely to suit his own methodological and theoretical preferences. All those — interesting and important — features of business cycles that have anything to do with model-theoretical openness, and that a fortiori are not possible to squeeze into the closure of the model, are excluded. One might rightly ask what is left of what we in a common-sense meaning refer to as business cycles. Einstein’s dictum — “everything should be made as simple as possible but not simpler” — comes to mind. Lucas — and neoclassical economics at large — does not heed the implied apt warning.

The development of macro-econometrics has according to Lucas supplied economists with “detailed, quantitatively accurate replicas of the actual economy” thereby enabling us to treat policy recommendations “as though they had been experimentally tested” [Lucas 1981:220]. But if the goal of theory is to be able to make accurate forecasts this “ability of a model to imitate actual behavior” does not give much leverage. What is required is “invariance of the structure of the model under policy variations”. Parametric invariance in an economic model cannot be taken for granted, “but it seems reasonable to hope that neither tastes nor technology vary systematically” [Lucas 1981:220].

The model should enable us to pose counterfactual questions about what would happen if some variable were to change in a specific way. Hence the assumption of structural invariance, which purportedly enables the theoretical economist to do just that. But does it? Lucas appeals to “reasonable hope”, a rather weak justification for a modeler to apply such a far-reaching assumption. To warrant it one would expect an argumentation that this assumption — whether we conceive of it as part of a strategy of “isolation”, “idealization” or “successive approximation” — really establishes a useful relation that we can export or bridge to the target system, the “actual economy.” That argumentation is found neither in Lucas, nor — to my knowledge — in the succeeding neoclassical refinements of his “necessarily artificial, abstract, patently ‘unreal’” analogue economies [Lucas 1981:271]. At most we get what Lucas himself calls “inappropriately maligned” casual empiricism in the form of “the method of keeping one’s eyes open.” That is far from sufficient to warrant any credibility in a model pretending to explain the complex and difficult recurrent phenomena we call business cycles. To provide an empirical “illustration” or a “story” to back up your model does not suffice. There are simply too many competing illustrations and stories that could be exhibited or told.

As Lucas has to admit – complaining about the less than ideal contact between theoretical economics and econometrics – even though the “stories” are (purportedly) getting better and better, “the necessary interaction between theory and fact tends not to take place” [Lucas 1981:11].

The basic assumption of this “precise and rigorous” model therefore cannot be considered anything other than an unsubstantiated conjecture as long as it is not supported by evidence from outside the theory or model. To my knowledge no empirical evidence that is in any way decisive has been presented. This is all the more tantalizing since Lucas himself stresses that the presumption “seems a sound one to me, but it must be defended on empirical, not logical grounds” [Lucas 1981:12].

And applying a “Lucas critique” to Lucas’s own model, it is obvious that it too fails. Changing “policy rules” cannot just be presumed not to influence investment and consumption behavior and a fortiori technology, thereby contradicting the invariance assumption. Technology and tastes cannot live up to the status of an economy’s deep and structurally stable Holy Grail. They too are part and parcel of an ever-changing and open economy. Lucas’s hope of being able to model the economy as “a FORTRAN program” and “gain some confidence that the component parts of the program are in some sense reliable prior to running it” [Lucas 1981:288] therefore seems — from an ontological point of view — totally misdirected. The failure of the attempt to anchor the analysis in the allegedly stable deep parameters “tastes” and “technology” shows that if you neglect ontological considerations pertaining to the target system, reality ultimately kicks back when questions of bridging and exportation of model exercises are at last laid on the table. No matter how precise and rigorous the analysis is, and no matter how hard one tries to cast the argument in “modern mathematical form” [Lucas 1981:7], it does not push science forward a single millimeter if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about external validity.

 

References

Lucas, Robert (1981), Studies in Business-Cycle Theory. Oxford: Basil Blackwell.

– (1986), Adaptive Behavior and Economic Theory. In Hogarth, Robin & Reder, Melvin (eds) Rational Choice (pp. 217-242). Chicago: The University of Chicago Press.

– (1988), What Economists Do.

Syll, Lars (2016), On the use and misuse of theories and models in economics.

What was Robert Barro smoking when he came up with ‘Ricardian equivalence’?

23 August, 2016 at 15:12 | Posted in Economics | 1 Comment

 

Ricardian equivalence basically means that financing government expenditures through taxes or debt is equivalent: since debt financing must be repaid with interest, agents — equipped with rational expectations — would simply increase their savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.


Why?

In the standard neoclassical consumption model — used in DSGE macroeconomic modeling — people are basically portrayed as treating time as a dichotomous phenomenon — today and the future — when contemplating making decisions and acting. How much should one consume today and how much in the future? Facing an intertemporal budget constraint of the form

ct + cf/(1+r) = ft + yt + yf/(1+r),

where ct is consumption today, cf is consumption in the future, ft is holdings of financial assets today, yt is labour incomes today, yf is labour incomes in the future, and r is the real interest rate, and having a lifetime utility function of the form

U = u(ct) + au(cf),

where a is the time discounting parameter, the representative agent (consumer) maximizes his utility when

u'(ct) = a(1+r)u'(cf).

This expression – the Euler equation – implies that the representative agent (consumer) is indifferent between consuming one more unit today and consuming it tomorrow. Typically using a logarithmic functional form – u(c) = log c – which gives u'(c) = 1/c, the Euler equation can be rewritten as

1/ct = a(1+r)(1/cf),

or

cf/ct = a(1+r).

This importantly implies that according to the neoclassical consumption model changes in the (real) interest rate and consumption move in the same direction. And — it also follows that consumption is invariant to the timing of taxes, since wealth — ft + yt + yf/(1+r) — has to be interpreted as present discounted value net of taxes. And so, according to the assumption of Ricardian equivalence, the timing of taxes does not affect consumption, simply because the maximization problem as specified in the model is unchanged.
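As a check on the algebra, the timing-invariance claim can be verified numerically in a minimal two-period sketch. The closed form follows from the Euler equation cf/ct = a(1+r) together with the budget constraint; the parameter values are my own illustrative choices:

```python
# Two-period consumption with log utility, U = u(ct) + a*u(cf), u(c) = log c,
# and budget constraint ct + cf/(1+r) = W, where W is present-value wealth
# net of taxes. The Euler equation cf/ct = a(1+r) plus the budget constraint
# give the closed-form solution below.
def consumption(f_t, y_t, y_f, tax_t, tax_f, r, a):
    W = f_t + (y_t - tax_t) + (y_f - tax_f) / (1 + r)  # wealth net of taxes
    c_today = W / (1 + a)
    c_future = a * (1 + r) * W / (1 + a)
    return c_today, c_future

r, a = 0.05, 0.95
# Tax 100 today, or instead tax 105 tomorrow -- the same present value at r = 5%:
plan_tax_now = consumption(f_t=50, y_t=100, y_f=100, tax_t=100, tax_f=0, r=r, a=a)
plan_tax_later = consumption(f_t=50, y_t=100, y_f=100, tax_t=0, tax_f=105, r=r, a=a)
print(plan_tax_now)    # consumption path when taxed today
print(plan_tax_later)  # identical path (up to rounding): only the present value of taxes matters
```

Shifting the tax from today to an equivalent amount tomorrow leaves W, and hence the whole consumption path, unchanged — which is exactly the claim the empirical studies cited below find to be out of line with how actual households behave.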

That the theory doesn’t fit the facts we already knew.

And a couple of months ago, on Voxeu, Jonathan A. Parker summarized a series of studies empirically testing the theory, reconfirming how out of line with reality Ricardian equivalence is.

This only, again, underlines that there is, of course, no reason for us to believe in that fairy-tale. Ricardo himself — mirabile dictu — didn’t believe in Ricardian equivalence. In Essay on the Funding System (1820) he wrote:

But the people who paid the taxes never so estimate them, and therefore do not manage their private affairs accordingly. We are too apt to think that the war is burdensome only in proportion to what we are at the moment called to pay for it in taxes, without reflecting on the probable duration of such taxes. It would be difficult to convince a man possessed of £20,000, or any other sum, that a perpetual payment of £50 per annum was equally burdensome with a single tax of £1000.

And as one Nobel laureate had it:

Ricardian equivalence is taught in every graduate school in the country. It is also sheer nonsense.

Joseph E. Stiglitz, twitter

Bank stress testing — not particularly helpful

23 August, 2016 at 12:31 | Posted in Economics | Leave a comment

 

Keynes-Hicks macrotheory — a ‘New Keynesian’ unicorn fantasy

22 August, 2016 at 17:08 | Posted in Economics | 9 Comments

Paul Krugman has in numerous posts on his blog tried to defend “the whole enterprise of Keynes/Hicks macroeconomic theory” and especially his own somewhat idiosyncratic version of IS-LM.

The main problem is simpliciter that there is no such thing as a Keynes-Hicks macroeconomic theory!

So, let us get some things straight.

There is nothing in the post-General Theory writings of Keynes that suggests he considered Hicks’s IS-LM anywhere near a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory, in the 1937 QJE article, there is nothing to even suggest that Keynes would have regarded a Keynes-Hicks IS-LM theory as anything but pure nonsense. So of course there can’t be any “vindication for the whole enterprise of Keynes/Hicks macroeconomic theory” – simply because “Keynes/Hicks” never existed.

And it gets even worse!

John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’ General Theory – ‘Mr. Keynes and the ‘Classics’. A Suggested Interpretation’ – returned to it in an article in 1980 – ‘IS-LM: an explanation’ – in Journal of Post Keynesian Economics. Self-critically he wrote:

I accordingly conclude that the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate. I have deliberately interpreted the equilibrium concept, to be used in such analysis, in a very stringent manner (some would say a pedantic manner) not because I want to tell the applied economist, who uses such methods, that he is in fact committing himself to anything which must appear to him to be so ridiculous, but because I want to ask him to try to assure himself that the divergences between reality and the theoretical model, which he is using to explain it, are no more than divergences which he is entitled to overlook. I am quite prepared to believe that there are cases where he is entitled to overlook them. But the issue is one which needs to be faced in each case.

When one turns to questions of policy, looking toward the future instead of the past, the use of equilibrium methods is still more suspect. For one cannot prescribe policy without considering at least the possibility that policy may be changed. There can be no change of policy if everything is to go on as expected — if the economy is to remain in what (however approximately) may be regarded as its existing equilibrium. It may be hoped that, after the change in policy, the economy will somehow, at some time in the future, settle into what may be regarded, in the same sense, as a new equilibrium; but there must necessarily be a stage before that equilibrium is reached …

I have paid no attention, in this article, to another weakness of IS-LM analysis, of which I am fully aware; for it is a weakness which it shares with General Theory itself. It is well known that in later developments of Keynesian theory, the long-term rate of interest (which does figure, excessively, in Keynes’ own presentation and is presumably represented by the r of the diagram) has been taken down a peg from the position it appeared to occupy in Keynes. We now know that it is not enough to think of the rate of interest as the single link between the financial and industrial sectors of the economy; for that really implies that a borrower can borrow as much as he likes at the rate of interest charged, no attention being paid to the security offered. As soon as one attends to questions of security, and to the financial intermediation that arises out of them, it becomes apparent that the dichotomy between the two curves of the IS-LM diagram must not be pressed too hard.

The editor of JPKE, Paul Davidson, gives the background to Hicks’s article:

I originally published an article about Keynes’s finance motive — which in 1937 Keynes added to his other liquidity preference motives (transactions, precautionary, speculative motives). I showed that adding this finance motive required Hicks’s IS and LM curves to be interdependent — and thus when the IS curve shifted so would the LM curve.
Hicks and I then discussed this when we met several times.
When I first started to think about the ergodic vs. nonergodic dichotomy, I sent Hicks some preliminary drafts of articles I would be writing about nonergodic processes. Then John and I met several times to discuss this matter further and I finally convinced him to write the article — which I published in the Journal of Post Keynesian Economics — in which he renounces the IS-LM apparatus. Hicks then wrote me a letter in which he said he thought the word nonergodic was wonderful and that he wanted to label his approach to macroeconomics as nonergodic!

So – back in 1937 John Hicks said that he was building a model of John Maynard Keynes’ General Theory. In 1980 he openly admits he wasn’t.

What Hicks acknowledges in 1980 is basically that his original review totally ignored the very core of Keynes’s theory – uncertainty. In doing so he actually put the train of macroeconomics on the wrong tracks for decades. It’s about time that neoclassical economists – such as Krugman, Mankiw, or what have you – set the record straight and stop promoting something that its creator himself admits was a total failure. Why not study the real thing itself – the General Theory – in full, and without looking the other way when it comes to non-ergodicity and uncertainty?

Paul Krugman persists in talking about a Keynes-Hicks-IS-LM-model that really never existed. It’s deeply disappointing. You would expect more from a Nobel prize winner.

In his 1937 paper Hicks actually elaborates four different models (where Hicks uses I to denote Total Income and Ix to denote Investment):

1) “Classical”: M = kI   Ix = C(i)   Ix = S(i,I)

2) Keynes’ “special” theory: M = L(i)   Ix = C(i)    I = S(I)

3) Keynes’ “general” theory: M = L(I, i)   Ix = C(i)   I = S(I)

4) The “generalized general” theory: M = L(I, i)   Ix =C(I, i)  Ix = S(I, i)

It is obvious from the way Krugman draws his IS-LM curves that he is thinking in terms of model number 4 – which is not even considered by Hicks to be a Keynes model (models 2 and 3 are)! It’s basically a loanable funds model that belongs in the “classical” camp and that you find reproduced in most mainstream textbooks. Hicksian IS-LM? Maybe. Keynes? No way!
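For concreteness, model 4 can be solved once explicit functional forms are chosen. The linear forms and parameter values below are entirely my own illustrative assumptions — nothing in Hicks — but they show the mechanics by which income I and the interest rate i are jointly determined by the two markets:

```python
# Hicks's model 4 ("generalized general" theory) with assumed linear forms:
#   money market (LM):  M = L(I, i) = k*I - h*i
#   goods market (IS):  C(I, i) = S(I, i), with investment demand
#                       C = c0 + c1*I - b*i and saving S = s0 + s1*I + e*i.
# All parameter values are illustrative choices of mine, not Hicks's.
def hicks_equilibrium(M, k=0.5, h=100.0, c0=80.0, c1=0.1, b=150.0,
                      s0=-20.0, s1=0.3, e=50.0):
    # Two linear equations in (I, i), solved by Cramer's rule:
    #   k*I - h*i = M                       (LM)
    #   (c1 - s1)*I - (b + e)*i = s0 - c0   (IS)
    a11, a12, rhs1 = k, -h, M
    a21, a22, rhs2 = c1 - s1, -(b + e), s0 - c0
    det = a11 * a22 - a12 * a21
    I = (rhs1 * a22 - a12 * rhs2) / det  # total income
    i = (a11 * rhs2 - rhs1 * a21) / det  # interest rate
    return I, i

I_low, i_low = hicks_equilibrium(M=200)
I_high, i_high = hicks_equilibrium(M=220)
# Raising M shifts the LM curve out: income rises and the interest rate falls.
print(I_high > I_low, i_high < i_low)  # -> True True
```

This is precisely the comparative-statics exercise the IS-LM diagrams trade on — and, as argued above, it lives entirely inside the closed model.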

Steve Keen, Noah Smith and heterodox ‘anti-math’ economics

21 August, 2016 at 15:56 | Posted in Economics | 5 Comments

Responding to the critique of his Bloomberg View post on heterodox economics and its alleged anti-math position, Noah Smith approvingly cites Steve Keen telling us there is

a wing of heterodox economics that is anti-mathematical. Known as “Critical Realism” and centred on the work of Tony Lawson at Cambridge UK, it attributes the failings of economics to the use of mathematics itself…

Although yours truly appreciates much of Steve Keen’s debunking of mainstream economics, on this issue he is, however, just plain wrong! For a more truthful characterization of Tony Lawson’s position, here’s what Axel Leijonhufvud has to say:

For a good many years, Tony Lawson has been urging economists to pay attention to their ontological presuppositions. Economists have not paid much attention, perhaps because few of us know what “ontology” means. This branch of philosophy stresses the need to “grasp the nature of the reality” that is the object of study – and to adapt one’s methods of inquiry to it.
Economics, it might be argued, has gotten this backwards. We have imposed our pre-conceived methods on economic reality in such manner as to distort our understanding of it. We start from optimal choice and fashion an image of reality to fit it. We transmit this distorted picture of what the world is like to our students by insisting that they learn to perceive the subject matter through the lenses of our method.

The central message of Lawson’s critique of modern economics is that an economy is an “open system” but economists insist on dealing with it as if it were “closed.” Controlled experiments in the natural sciences create closure and in so doing make possible the unambiguous association of “cause” and “effects”. Macroeconomists, in particular, never have the privilege of dealing with systems that are closed in this controlled experiment sense.

Our mathematical representations of both individual and system behaviour require the assumption of closure for the models to have determinate solutions. Lawson, consequently, is critical of mathematical economics and, more generally, of the role of deductivism in our field. Even those of us untutored in ontology may reflect that it is not necessarily a reasonable ambition to try to deduce the properties of very large complex systems from a small set of axioms. Our axioms are, after all, a good deal shakier than Euclid’s.

The impetus to “closure” in modern macroeconomics stems from the commitment to optimising behaviour as the “microfoundations” of the enterprise. Models of “optimal choice” render agents as automatons lacking “free will” and thus deprived of choice in any genuine sense. Macrosystems composed of such automatons exclude the possibility of solutions that could be “disequilibria” in any meaningful sense. Whatever happens, they are always in equilibrium.

Axel Leijonhufvud

Modern economics has become increasingly irrelevant to the understanding of the real world. In his seminal book Economics and Reality (1997) Tony Lawson traced this irrelevance to the failure of economists to match their deductive-axiomatic methods with their subject.

It is — sad to say — as relevant today as it was when it was published almost two decades ago.

It is still a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Studying mathematics and logics is interesting and fun. It sharpens the mind. In pure mathematics and logics we do not have to worry about external validity. But economics is not pure mathematics or logics. It’s about society. The real world. Forgetting that, economics is really in dire straits.

To many mainstream economists, Tony Lawson is synonymous with anti-mathematics. But reading what Tony Lawson — or yours truly — has written on the subject shows how unfounded and ridiculous the idea is that, because heterodox people often criticize the application of mathematics in mainstream economics, we are critical of math per se.

Indeed.

No, there is nothing wrong with mathematics per se.

No, there is nothing wrong with applying mathematics to economics.

Mathematics is one valuable tool among other valuable tools for understanding and explaining things in economics.

What is totally wrong, however, are the utterly simplistic beliefs that

• “math is the only valid tool”

• “math is always and everywhere self-evidently applicable”

• “math is all that really counts”

• “if it’s not in math, it’s not really economics”

• “almost everything can be adequately understood and analyzed with math”

What is wrong with these beliefs is that they do not — as forcefully argued by Tony Lawson — reflect any ontological reflection on what can rightfully be expected from using mathematical methods in different contexts. Or as Knut Wicksell put it a century ago:

One must, of course, beware of expecting from this method more than it can give. Out of the crucible of calculation comes not an atom more truth than was put in. The assumptions being hypothetical, the results obviously cannot claim more than a very limited validity. The mathematical expression ought to facilitate the argument, clarify the results, and so guard against possible faults of reasoning — that is all.

It is, by the way, evident that the economic aspects must be the determining ones everywhere: economic truth must never be sacrificed to the desire for mathematical elegance.
 

De Vroey’s Chicago style History of Macroeconomics

21 August, 2016 at 14:32 | Posted in Economics | Leave a comment

A couple of years ago Michel De Vroey felt the urge to write a defense of Robert Lucas’ denial of involuntary unemployment:

What explains the difficulty of constructing a theory of involuntary unemployment? Is it, as argued by Lucas, that the “thing” to be explained doesn’t exist, or is it due to some deeply embedded premise of economic theory? My own view tilts towards the latter. Economic theory is concerned with fictitious parables. The premises upon which it is based have the advantage of allowing tractable, rigorous theorising, but the price of this is that important facts of life are excluded from the theoretical universe. Non-chosen outcomes is one of them. The underlying reason lies in the trade technology and information assumptions upon which both the Walrasian and the Marshallian (and the neo-Walrasian and neo-Marshallian) approaches are based. This is a central conclusion of my inquiry: the stumbling block to the introduction of involuntary unemployment lies in the assumptions about trade technology that are usually adopted in economic theory.

Foregoing the involuntary unemployment claim may look like a high price to pay, particularly if it is admitted that good reasons exist for believing in its real world relevance. But would its abandonment really be so dramatic? …

First of all, the elimination of this concept would only affect the theoretical sphere. Drawing conclusions from this sphere about the real world would be a mistake. No jumps should be made from the world of theory to the real world, or vice-versa … The fact that solid arguments can be put forward as to its real world existence is not a sufficient condition to give involuntary unemployment theoretical legitimacy.

Michel De Vroey


I have to admit to being totally unimpressed by this rather defeatist methodological stance. Is it really a feasible methodology for economists to make a sharp divide between theory and reality, and then treat the divide as something commendable and good? I think not.

Models and theories — if they are to be of any real interest — have to look to the world. Being able to construct “fictitious parables” or build models of a “credible world” is not enough. No matter how many convoluted refinements of concepts are made in the theory or model, if they do not result in “things” similar to reality in the appropriate respects — structure, isomorphism, and the like — the surrogate system becomes a substitute system. And why should we care about that? Science has to have higher aspirations.

Mainstream economic theory today is in the story-telling business whereby economic theorists create mathematical make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the theory or model and lose sight of reality. Insisting — like De Vroey — that “no jumps should be made from the world of theory to the real world, or vice-versa” is an untenable methodological position.

In his new book — A History of Macroeconomics from Keynes to Lucas and Beyond (CUP 2016) — De Vroey basically tells the same misleading and wrong-headed story as in the article quoted above. He explicitly acknowledges that his assessment of The General Theory ‘follows from this analysis.’ Citing Mankiw and Patinkin, he criticises Keynes for not having built his analysis on an ‘explicit and complete model’ and for therefore being unable to translate his views ‘into a rigorous demonstration.’

Where did Keynes’ unemployment analysis go wrong? To De Vroey the answer seems to be that ‘the notion of involuntary unemployment was not questioned’ and that ‘the fact that unemployment was massive was taken as an indication that it could not be voluntary.’

Does it sound familiar? Well, it should! Because this is the standard Chicago New Classical view that has never accepted Keynes’s distinction between voluntary and involuntary unemployment. According to New Classical übereconomist Robert Lucas, an unemployed worker can always instantaneously find some job. No matter how miserable the work options are, “one can always choose to accept them”:

KLAMER: My taxi driver here is driving a taxi, even though he is an accountant, because he can’t find a job …

LUCAS: I would describe him as a taxi driver [laughing], if what he is doing is driving a taxi.

KLAMER: But a frustrated taxi driver.

LUCAS: Well, we draw these things out of urns, and sometimes we get good draws, sometimes we get bad draws.

Arjo Klamer

In New Classical Economics unemployment is seen as a kind of leisure that workers optimally select. In the basic DSGE models used by these economists, the labour market always clears: responding to a changing interest rate, expected lifetime income, or real wages, the representative agent maximizes her utility function by varying her labour supply, money holdings and consumption over time. Most importantly, if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses.
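The optimisation just described can be sketched in a stylised way. In a standard RBC-type setup — my notation, not De Vroey’s — the representative agent solves

```latex
\max_{\{c_t,\, n_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t
  \bigl[\, \ln c_t + \psi \ln (1 - n_t) \,\bigr]
```

subject to a period budget constraint, where $c_t$ is consumption, $n_t$ hours worked, and $\psi$ the weight on leisure. The intratemporal first-order condition equates the marginal rate of substitution between leisure and consumption to the real wage:

```latex
\frac{\psi\, c_t}{1 - n_t} = w_t
```

So, holding consumption fixed, a real wage above its “equilibrium value” raises labour supply $n_t$, and a real wage below it lowers $n_t$. Non-employment in this framework is simply the leisure the agent optimally chooses — which is exactly why involuntary unemployment cannot appear in it.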

It is extremely important to pose the question why mainstream economists choose to work with these kinds of models. It is not a harmless choice based solely on ‘internal’ scientific considerations. It is in fact also, and not to a trivial extent, a conscious choice motivated by ideology.

By employing these models one is actually to a significant degree absolving the structure of market economies from any responsibility in creating unemployment. Focussing on the choices of individuals, the unemployment ‘problem’ is reduced to being an individual ‘problem’, and not something that essentially has to do with the workings of market economies. A conscious methodological choice in this way comes to work as an apologetic device for not addressing or challenging given structures.

Not being able to explain unemployment, these models cannot help us change the structures and institutions that produce what is arguably the greatest problem of our society.

 

Added GMT 1800: Although De Vroey devotes a whole chapter to Hicks and the IS-LM model, he does not mention that Hicks returned to his 1937 IS-LM article in 1980 — in ‘IS-LM: an explanation,’ published in the Journal of Post Keynesian Economics — and self-critically wrote:

I accordingly conclude that the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate. I have deliberately interpreted the equilibrium concept, to be used in such analysis, in a very stringent manner (some would say a pedantic manner) not because I want to tell the applied economist, who uses such methods, that he is in fact committing himself to anything which must appear to him to be so ridiculous, but because I want to ask him to try to assure himself that the divergences between reality and the theoretical model, which he is using to explain it, are no more than divergences which he is entitled to overlook. I am quite prepared to believe that there are cases where he is entitled to overlook them. But the issue is one which needs to be faced in each case.

When one turns to questions of policy, looking toward the future instead of the past, the use of equilibrium methods is still more suspect. For one cannot prescribe policy without considering at least the possibility that policy may be changed. There can be no change of policy if everything is to go on as expected — if the economy is to remain in what (however approximately) may be regarded as its existing equilibrium. It may be hoped that, after the change in policy, the economy will somehow, at some time in the future, settle into what may be regarded, in the same sense, as a new equilibrium; but there must necessarily be a stage before that equilibrium is reached …

I have paid no attention, in this article, to another weakness of IS-LM analysis, of which I am fully aware; for it is a weakness which it shares with General Theory itself. It is well known that in later developments of Keynesian theory, the long-term rate of interest (which does figure, excessively, in Keynes’ own presentation and is presumably represented by the r of the diagram) has been taken down a peg from the position it appeared to occupy in Keynes. We now know that it is not enough to think of the rate of interest as the single link between the financial and industrial sectors of the economy; for that really implies that a borrower can borrow as much as he likes at the rate of interest charged, no attention being paid to the security offered. As soon as one attends to questions of security, and to the financial intermediation that arises out of them, it becomes apparent that the dichotomy between the two curves of the IS-LM diagram must not be pressed too hard.
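For reference, the “two curves” Hicks mentions are, in textbook notation (not De Vroey’s or Hicks’s own):

```latex
\text{IS:}\quad Y = C(Y - T) + I(r) + G
\qquad\qquad
\text{LM:}\quad \frac{M}{P} = L(Y, r)
```

Goods-market equilibrium (IS) and money-market equilibrium (LM) jointly determine income $Y$ and the interest rate $r$. Hicks’s point in the passage above is that once security and financial intermediation matter, a single rate $r$ can no longer carry the whole link between the financial and real sectors, so the neat dichotomy between the two curves breaks down.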

So — back in 1937 John Hicks said that he was building a model of John Maynard Keynes’ General Theory. In 1980 he openly admitted that he wasn’t. Not mentioning that, not even in a footnote, in a book on the history of macroeconomics would, I guess, be considered by many an example of Chicago-inspired intellectual dishonesty.

 
