British austerity delusion

30 April, 2015 at 13:15 | Posted in Economics | 3 Comments

Why does big business love austerity and hate Keynesian economics? After all, you might expect corporate leaders to want policies that produce strong sales and hence strong profits.

I’ve already suggested one answer: scare talk about debt and deficits is often used as a cover for a very different agenda, namely an attempt to reduce the overall size of government and especially spending on social insurance. This has been transparently obvious in the United States, where many supposed deficit-reduction plans just happen to include sharp cuts in tax rates on corporations and the wealthy even as they take away healthcare and nutritional aid for the poor. But it’s also a fairly obvious motivation in the UK, if not so crudely expressed. The “primary purpose” of austerity, the Telegraph admitted in 2013, “is to shrink the size of government spending” – or, as Cameron put it in a speech later that year, to make the state “leaner … not just now, but permanently” …

Business leaders love the idea that the health of the economy depends on confidence, which in turn – or so they argue – requires making them happy. In the US there were, until the recent takeoff in job growth, many speeches and opinion pieces arguing that President Obama’s anti-business rhetoric – which only existed in the right’s imagination, but never mind – was holding back recovery. The message was clear: don’t criticise big business, or the economy will suffer.

Paul Krugman

I definitely recommend that everyone read this well-argued Krugman article.

To many conservative and neoliberal politicians and economists there seems to be a spectre haunting the United States and Europe today — Keynesian ideas on governments pursuing policies raising effective demand and supporting employment. And some of the favourite arguments used among these Keynesophobics to fight it are the confidence argument and the doctrine of ‘sound finance.’

Is this witless crusade against economic reason new? Not at all. And had Krugman not had such a debonair attitude to the history of economic thought, he would surely have encountered Michal Kalecki’s classic 1943 article — which gives basically the same answer to these questions as Krugman does seventy-two years later …

It should be first stated that, although most economists are now agreed that full employment may be achieved by government spending, this was by no means the case even in the recent past. Among the opposers of this doctrine there were (and still are) prominent so-called ‘economic experts’ closely connected with banking and industry. This suggests that there is a political background in the opposition to the full employment doctrine, even though the arguments advanced are economic. That is not to say that people who advance them do not believe in their economics, poor though this is. But obstinate ignorance is usually a manifestation of underlying political motives …

Clearly, higher output and employment benefit not only workers but entrepreneurs as well, because the latter’s profits rise. And the policy of full employment outlined above does not encroach upon profits because it does not involve any additional taxation. The entrepreneurs in the slump are longing for a boom; why do they not gladly accept the synthetic boom which the government is able to offer them? It is this difficult and fascinating question with which we intend to deal in this article …

We shall deal first with the reluctance of the ‘captains of industry’ to accept government intervention in the matter of employment. Every widening of state activity is looked upon by business with suspicion, but the creation of employment by government spending has a special aspect which makes the opposition particularly intense. Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis. But once the government learns the trick of increasing employment by its own purchases, this powerful controlling device loses its effectiveness. Hence budget deficits necessary to carry out government intervention must be regarded as perilous. The social function of the doctrine of ‘sound finance’ is to make the level of employment dependent on the state of confidence.

Michal Kalecki Political aspects of full employment


School mathematics

30 April, 2015 at 12:06 | Posted in Economics | 1 Comment

Mathematics teaching in the Swedish school system

Year 1970
A farmer sells a sack of potatoes for 20 kr. The production cost is 4/5 of the price. What is the profit?

Year 1980
A farmer sells a sack of potatoes for 20 kr. The production cost is 16 kr. Please calculate the profit.

Year 2015
A farmer sells a sack of potatoes for 20 kr. The production cost is 4/5 thereof, which is 16 kr. The profit amounts to 1/5, equal to 4 kr. Underline the word “potatoes” and discuss it with your classmate.

Validating assumptions

30 April, 2015 at 07:56 | Posted in Economics | 1 Comment

Piketty uses the terms “capital” and “wealth” interchangeably to denote the total monetary value of shares, housing and other assets. “Income” is measured in money terms. We shall reserve the term “capital” for the totality of productive assets evaluated at constant prices. The term “output” is used to denote the totality of net output (value-added) measured at constant prices. Piketty uses the symbol β to denote the ratio of “wealth” to “income” and he denotes the share of wealth-owners in total income by α. In his theoretical analysis this share is equated to the share of profits in total output. Piketty documents how α and β have both risen by a considerable amount in recent decades. He argues that this is not mere correlation, but reflects a causal link. It is the rise in β which is responsible for the rise in α. To reach this conclusion, he first assumes that β is equal to the capital-output ratio K/Y, as conventionally understood. From his empirical finding that β has risen, he concludes that K/Y has also risen by a similar amount. According to the neoclassical theory of factor shares, an increase in K/Y will only lead to an increase in α when the elasticity of substitution between capital and labour σ is greater than unity. Piketty asserts that this is the case. Indeed, based on movements in α and β, he estimates that σ is between 1.3 and 1.6 (page 221).

Thus, Piketty’s argument rests on two crucial assumptions: β = K/Y and σ > 1. Once these assumptions are granted, the neoclassical theory of factor shares ensures that an increase in β will lead to an increase in α. In fact, neither of these assumptions is supported by the empirical evidence which is surveyed briefly in the appendix. This evidence implies that the large observed rise in β in recent decades is not the result of a big rise in K/Y but is primarily a valuation effect …

Piketty argues that the higher income share of wealth-owners is due to an increase in the capital-output ratio resulting from a high rate of capital accumulation. The evidence suggests just the contrary. The capital-output ratio, as conventionally measured, has either fallen or been constant in recent decades. The apparent increase in the capital-output ratio identified by Piketty is a valuation effect reflecting a disproportionate increase in the market value of certain real assets. A more plausible explanation for the increased income share of wealth-owners is an unduly low rate of investment in real capital.

Robert Rowthorn

It seems to me that Rowthorn is closing in on the nodal point in Piketty’s picture of the long-term trends in income distribution in advanced economies.

Say we have a diehard neoclassical model (assuming a production function that is homogeneous of degree one and allows unlimited substitutability), such as the standard Cobb-Douglas production function y = Ak^α (with A a given productivity parameter and k the ratio of capital stock to labour, K/L), with a constant investment share λ out of output y and a constant depreciation rate δ on “capital per worker” k. The rate of accumulation of k is then Δk = λy − δk = λAk^α − δk. In steady state (*) we have λAk*^α = δk*, giving λ/δ = k*/y* and k* = (λA/δ)^(1/(1−α)). Putting this value of k* into the production function gives the steady-state output per worker y* = Ak*^α = A^(1/(1−α))(λ/δ)^(α/(1−α)).

Assume instead an exogenous Harrod-neutral technological progress that increases y at a growth rate g (with a zero labour growth rate, and with y and k now redefined as y/A and k/A respectively, so that the production function becomes y = k^α). We then get dk/dt = λy − (g + δ)k, which in the Cobb-Douglas case gives dk/dt = λk^α − (g + δ)k, with steady-state value k* = (λ/(g + δ))^(1/(1−α)) and capital-output ratio k*/y* = k*/k*^α = λ/(g + δ). If we use Piketty’s preferred model, with output and capital given net of depreciation, the final expression changes to k*/y* = λ/(g + λδ).

Now, what Piketty predicts is that g will fall and that this will increase the capital-output ratio. Say we have δ = 0.03, λ = 0.1 and g = 0.03 initially. This gives a capital-output ratio of around 3. If g falls to 0.01, it rises to around 7.7. We reach analogous results if we use a basic CES production function with an elasticity of substitution σ > 1. With σ = 1.5, the capital share rises from 0.2 to 0.36 if the wealth-income ratio goes from 2.5 to 5, which according to Piketty is what has actually happened in rich countries during the last forty years.
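
For anyone who wants to check the arithmetic in that last step, here is a minimal sketch in Python (my own, using the parameter values quoted above) of the net-of-depreciation capital-output ratio k*/y* = λ/(g + λδ):

```python
# Steady-state capital-output ratio in the net-of-depreciation (Piketty) version:
# k*/y* = lambda / (g + lambda * delta)

lam, delta = 0.10, 0.03   # investment share and depreciation rate from the text

def capital_output_ratio(g, lam=lam, delta=delta):
    """Steady-state k*/y* with output and capital measured net of depreciation."""
    return lam / (g + lam * delta)

for g in (0.03, 0.01):
    print(f"g = {g:.2f}  ->  k*/y* = {capital_output_ratio(g):.1f}")

# g = 0.03  ->  k*/y* = 3.0
# g = 0.01  ->  k*/y* = 7.7
```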

Being able to show that you can get these results using one or another of the available standard neoclassical growth models is of course — from a realist point of view — of limited value. As usual — the really interesting thing is how in accord with reality are the assumptions you make and the numerical values you put into the model specification.

Professor Piketty chose a theoretical framework that simultaneously allowed him to produce catchy numerical predictions, in tune with his empirical findings, while soaring like an eagle above the ‘messy’ debates of political economists shunned by their own profession’s mainstream and condemned diligently to inquire, in pristine isolation, into capitalism’s radical indeterminacy. The fact that, to do this, he had to adopt axioms that are both grossly unrealistic and logically incoherent must have seemed to him a small price to pay.

Yanis Varoufakis

Significance testing — an embarrassing ritual

29 April, 2015 at 10:17 | Posted in Statistics & Econometrics | 1 Comment

Knowing the contents of a toolbox, of course, requires statistical thinking, that is, the art of choosing a proper tool for a given problem. Instead, one single procedure that I call the “null ritual” tends to be featured in texts and practiced by researchers. Its essence can be summarized in a few lines:

The null ritual:
1. Set up a statistical null hypothesis of “no mean difference” or “zero correlation.” Don’t specify the predictions of your research hypothesis or of any alternative substantive hypotheses.
2. Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as p < 0.05, p < 0.01, or p < 0.001 (whichever comes next to the obtained p-value).
3. Always perform this procedure …

The routine reliance on the null ritual discourages not only statistical thinking but also theoretical thinking. One does not need to specify one’s hypothesis, nor any challenging alternative hypothesis … The sole requirement is to reject a null that is identified with “chance.” Statistical theories such as Neyman–Pearson theory and Wald’s theory, in contrast, begin with two or more statistical hypotheses.

In the absence of theory, the temptation is to look first at the data and then see what is significant. The physicist Richard Feynman … has taken notice of this misuse of hypothesis testing. I summarize his argument:

Feynman’s conjecture:
To report a significant result and reject the null in favor of an alternative hypothesis is meaningless unless the alternative hypothesis has been stated before the data was obtained.

Feynman’s conjecture is again and again violated by routine significance testing, where one looks at the data to see what is significant. Statistical packages allow every difference, interaction, or correlation against chance to be tested. They automatically deliver ratings of “significance” in terms of stars, double stars, and triple stars, encouraging the bad after-the-fact habit. The general problem Feynman addressed is known as overfitting … Fitting per se has the same problems as storytelling after the fact, which leads to a “hindsight bias.” The true test of a model is to fix its parameters on one sample, and to test it in a new sample. Then it turns out that predictions based on simple heuristics can be more accurate than routine multiple regressions … Less can be more. The routine use of linear multiple regression exemplifies another mindless use of statistics …

We know but often forget that the problem of inductive inference has no single solution. There is no uniformly most powerful test, that is, no method that is best for every problem. Statistical theory has provided us with a toolbox with effective instruments, which require judgment about when it is right to use them … Judgment is part of the art of statistics.

To stop the ritual, we also need more guts and nerves. We need some pounds of courage to cease playing along in this embarrassing game. This may cause friction with editors and colleagues, but it will in the end help them to enter the dawn of statistical thinking.
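
The point about fixing a model’s parameters on one sample and testing it on a new one is easy to illustrate. The sketch below is my own toy example with simulated data, not taken from the quoted text: a needlessly flexible regression gives an impressive in-sample fit that largely evaporates when the fitted model is confronted with a fresh sample.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n=30):
    """Data-generating process: y depends linearly on x, plus noise."""
    x = rng.uniform(0, 1, n)
    y = 2.0 + 5.0 * x + rng.normal(0, 1, n)
    return x, y

def r_squared(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

x_train, y_train = simulate()
x_test, y_test = simulate()                   # a genuinely new sample

coefs = np.polyfit(x_train, y_train, deg=8)   # needlessly flexible model

print("in-sample R^2:    ", round(r_squared(y_train, np.polyval(coefs, x_train)), 2))
print("out-of-sample R^2:", round(r_squared(y_test, np.polyval(coefs, x_test)), 2))
# The in-sample fit flatters the model; the out-of-sample fit is markedly worse.
# Re-estimating and testing on fresh data is exactly the check the null ritual skips.
```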

‘Sometimes I Feel Like a Motherless Child’

26 April, 2015 at 12:17 | Posted in Varia | Comments Off on ‘Sometimes I Feel Like a Motherless Child’

 

The confidence fairy bleeding

25 April, 2015 at 12:31 | Posted in Economics | 2 Comments

The confidence factor affects government decision-making, but it does not affect the results of decisions. Except in extreme cases, confidence cannot cause a bad policy to have good results, and a lack of it cannot cause a good policy to have bad results, any more than jumping out of a window in the mistaken belief that humans can fly can offset the effect of gravity.

The sequence of events in the Great Recession that began in 2008 bears this out. At first, governments threw everything at it. This prevented the Great Recession from becoming Great Depression II. But, before the economy reached bottom, the stimulus was turned off, and austerity – accelerated liquidation of budget deficits, mainly by cuts in spending – became the order of the day.

Once winded political elites had recovered their breath, they began telling a story designed to preclude any further fiscal stimulus. The slump had been created by fiscal extravagance, they insisted, and therefore could be cured only by fiscal austerity. And not any old austerity: it was spending on the poor, not the rich, that had to be cut, because such spending was the real cause of the trouble.

Any Keynesian knows that cutting the deficit in a slump is bad policy. A slump, after all, is defined by a deficiency in total spending. To try to cure it by spending less is like trying to cure a sick person by bleeding.

So it was natural to ask economist/advocates of bleeding like Harvard’s Alberto Alesina and Kenneth Rogoff how they expected their cure to work. Their answer was that the belief that it would work – the confidence fairy – would ensure its success.

More precisely, Alesina argued that while bleeding on its own would worsen the patient’s condition, its beneficial impact on expectations would more than offset its debilitating effects. Buoyed by assurance of recovery, the half-dead patient would leap out of bed, start running, jumping, and eating normally, and would soon be restored to full vigor. The bleeding school produced some flaky evidence to show that this had happened in a few instances …

With the help of professors like Alesina, conservative conviction could be turned into scientific prediction. And when Alesina’s cure failed to produce rapid recovery, there was an obvious excuse: it had not been applied with enough vigor to be “credible.”

The cure, such as it was, finally came about, years behind schedule, not through fiscal bleeding, but by massive monetary stimulus. When the groggy patient eventually staggered to its feet, the champions of fiscal bleeding triumphantly proclaimed that austerity had worked.

The moral of the tale is simple: Austerity in a slump does not work, for the reason that the medieval cure of bleeding a patient never worked: it enfeebles instead of strengthening. Inserting the confidence fairy between the cause and effect of a policy does not change the logic of the policy; it simply obscures the logic for a time. Recovery may come about despite fiscal austerity, but never because of it.

Robert Skidelsky

 

Where to write blog posts and listen to music (private)

25 April, 2015 at 11:27 | Posted in Economics | Comments Off on Where to write blog posts and listen to music (private)

Karlskrona’s well-preserved town plan and wide baroque streets have resulted in UNESCO designating it a World Heritage Site. Its archipelago is the southernmost of Sweden’s archipelagos.

A lovely place for writing blog posts — and listening to music like this:
 

‘De evige tre’

24 April, 2015 at 18:34 | Posted in Economics | Comments Off on ‘De evige tre’

 

Bayesianism — a dangerous scientific cul-de-sac

24 April, 2015 at 18:24 | Posted in Economics | 7 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in questions on what makes up a good scientific explanation, Richard Miller’s Fact and Method is a must read. His incisive critique of Bayesianism is still unsurpassed.

Lindeberg-Levy CLT (wonkish)

23 April, 2015 at 11:24 | Posted in Economics | Comments Off on Lindeberg-Levy CLT (wonkish)

 

Short refresher on proof techniques (wonkish)

22 April, 2015 at 18:20 | Posted in Economics | Comments Off on Short refresher on proof techniques (wonkish)

 

Guess who’s paying for the Greek euro disaster

20 April, 2015 at 11:09 | Posted in Economics | Comments Off on Guess who’s paying for the Greek euro disaster


The euro has taken away the possibility for national governments to manage their economies in a meaningful way — and in Greece the people have had to pay the true costs of the concomitant misguided austerity policies.


Reality killed the Washington Consensus

19 April, 2015 at 17:01 | Posted in Economics | 1 Comment

Over the past three years, research coming from the [IMF] has increasingly challenged the orthodoxy that still shapes European policy making:
 

First, there was the widely discussed mea culpa in the October 2012 World Economic Outlook, when the IMF staff basically disavowed their own previous estimates of the size of multipliers, and in doing so they certified that austerity could not, and would not work …

Then, the Fund tackled the issue of income inequality, and broke another taboo, i.e. the dichotomy between fairness and efficiency. Turns out that unequal societies tend to perform less well, and IMF staff research reached the same conclusion …

Then, of course, the “public Investment is a free lunch” chapter three of the World Economic Outlook, in the fall 2014.

In between, they demolished another building block of the Washington Consensus: free capital movements may sometimes be destabilizing …
 
These results are not surprising per se. All of these issues are highly controversial, so it is obvious that research does not find unequivocal support for a particular view. All the more so if that view, like the Washington Consensus, is pretty much an ideological construction. Yet, the fact that research coming from the center of the empire acknowledges that the world is complex, and interactions among agents go well beyond the working of efficient markets, is in my opinion quite something.

Francesco Saraceno

Stationary non-ergodicity (wonkish)

18 April, 2015 at 10:24 | Posted in Economics | 18 Comments

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H₁, H₂, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — (H₁ + … + Hₙ)/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Both of these time averages occur with probability 1/2, so their expectational average is 1/2 × 1/2 + 1/2 × 1/4 = 3/8, which obviously is not equal to either 1/2 or 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
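
A small simulation (my own sketch) makes the distinction concrete: every single history settles down at either 1/2 or 1/4, while the ensemble average over many histories of the whole two-coin system comes out at 3/8.

```python
import numpy as np

rng = np.random.default_rng(42)

def one_history(n_tosses=100_000):
    """Pick coin A (p = 1/2) or coin B (p = 1/4) once, then toss it repeatedly."""
    p_heads = rng.choice([0.5, 0.25])
    return (rng.random(n_tosses) < p_heads).mean()   # long-run time average

time_averages = np.array([one_history() for _ in range(1_000)])

print("histories with time average near 1/2:", np.mean(np.abs(time_averages - 0.5) < 0.05))
print("histories with time average near 1/4:", np.mean(np.abs(time_averages - 0.25) < 0.05))
print("ensemble (expectational) average    :", round(time_averages.mean(), 3))
# Each individual time average converges to 0.5 or 0.25, never to the
# expectational average 3/8 = 0.375: the process is stationary but not ergodic.
```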

Models, math and macro

17 April, 2015 at 15:58 | Posted in Economics | 6 Comments

“To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.”

The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.

Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are however dismissed because—in a nutshell—there’s nothing better out there.
This argument is deficient in two respects. First, there is a self-evident flaw in a belief that, despite overwhelming and damning evidence that a particular tool is faulty—and dangerously so—that tool should not be abandoned because there is no obvious replacement.

The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:

“When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.”

Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.

When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—rather undermined it, he responded—predictably—by asking what should replace it.

The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.

What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.

Jo Michell

[h/t Jan Milch]

Model validation and significance testing

17 April, 2015 at 10:28 | Posted in Economics | Comments Off on Model validation and significance testing

In its standard form, a significance test is not the kind of “severe test” that we are looking for when trying to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis as long as it cannot be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

Most importantly — we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. As the eminent mathematical statistician David Freedman writes:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.
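
Freedman’s point is easy to demonstrate with a small simulation (my own, with invented data, not Freedman’s). The true slope below is exactly zero, but the errors are strongly autocorrelated, so the independence assumption behind the usual standard errors fails, and the nominal 5% test rejects the true null far more often than 5% of the time:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_errors(n, rho=0.9, sigma=1.0):
    """AR(1) errors -- violates the 'independent errors' assumption."""
    e = np.empty(n)
    e[0] = rng.normal(0, sigma / np.sqrt(1 - rho**2))
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0, sigma)
    return e

def ols_t_stat(x, y):
    """Conventional OLS slope t-statistic (computed as if errors were i.i.d.)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

n, n_sims, rejections = 100, 2000, 0
x = np.arange(n, dtype=float)
for _ in range(n_sims):
    y = ar1_errors(n)                 # true slope on x is exactly zero
    if abs(ols_t_stat(x, y)) > 1.96:
        rejections += 1

print("nominal size: 0.05, actual rejection rate:", rejections / n_sims)
# With strongly dependent errors, 'significant' results at the 5% level show up
# far more often than 5% of the time: the p-values reflect a misspecified model,
# not evidence about the world.
```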

Invariance assumptions and economic theory (wonkish)

17 April, 2015 at 09:04 | Posted in Economics | Comments Off on Invariance assumptions and economic theory (wonkish)

Invariance assumptions need to be made in order to draw causal conclusions from non-experimental data: parameters are invariant to interventions, and so are errors or their distributions. Exogeneity is another concern. In a real example, as opposed to a hypothetical, real questions would have to be asked about these assumptions. Why are the equations “structural,” in the sense that the required invariance assumptions hold true? Applied papers seldom address such assumptions, or the narrower statistical assumptions: for instance, why are errors IID?

The tension here is worth considering. We want to use regression to draw causal inferences from non-experimental data. To do that, we need to know that certain parameters and certain distributions would remain invariant if we were to intervene. Invariance can seldom be demonstrated experimentally. If it could, we probably wouldn’t be discussing invariance assumptions. What then is the source of the knowledge?

“Economic theory” seems like a natural answer, but an incomplete one. Theory has to be anchored in reality. Sooner or later, invariance needs empirical demonstration, which is easier said than done.
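
As a toy illustration of what is at stake (my own constructed example, with invented numbers): below, a regression fitted to observational data in which the regressor is not exogenous describes those data perfectly well, yet its coefficient is not invariant under an intervention that sets the regressor directly.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Structural relation: y = 1.0 * x + u, but x itself responds to u,
# so the error term is not exogenous with respect to x.
u = rng.normal(0, 1, n)
x = rng.normal(0, 1, n) + 0.8 * u
y = 1.0 * x + u

slope_observational = np.polyfit(x, y, 1)[0]
print("slope fitted to observational data:", round(slope_observational, 2))   # roughly 1.5

# Intervene: set x by fiat (as in an experiment), keeping the structural relation fixed.
x_new = rng.normal(0, 1, n)
y_new = 1.0 * x_new + rng.normal(0, 1, n)
slope_interventional = np.polyfit(x_new, y_new, 1)[0]
print("response of y to intervening on x :", round(slope_interventional, 2))  # roughly 1.0

# The regression parameter is not invariant to the intervention, so fitting the
# observational data well does not by itself license causal conclusions.
```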

The Coase Theorem

16 April, 2015 at 12:29 | Posted in Economics | 3 Comments


Examining the Coase Theorem relies on a critical analysis of economic theory. The fundamental shortcomings of the most developed theory of the market, general equilibrium theory, as well as the restrictions imposed by the use of partial equilibrium and cases of a bilateral monopoly, undermine the assertions of the Coase Theorem. In the case of a bilateral monopoly, this construct involves serious distributional problems, and the invariance component of the theorem is seriously called into question. In addition, it is possible that the negotiations process may stop when mutually beneficial transactions take place outside of the contract curve. In those cases, social efficiency in the restricted Pareto-optimum sense will not be the outcome.

Faith in the idea that markets allocate resources efficiently is severely shaken by the set of difficulties in general equilibrium theory discussed in this article. The shortcomings of general equilibrium theory in stability theory should alert anyone tempted by the Law and Economics (L&E) movement and its applicability to fields of legal practice. The bottom line is that we do not have a theory showing how, if at all, markets reach equilibrium allocations. Because efficiency, in terms of Pareto-optimality, is an attribute only of equilibrium allocations, very serious negative implications exist for anyone claiming that markets allocate resources efficiently.

We have concentrated our critique of L&E based on the fact that economic theory is in a very sad state. Proponents of L&E seem to ignore this, appearing instead to believe that there exists somewhere a robust theoretical construct that satisfactorily explains how markets allocate resources efficiently – this article has shown such faith to be groundless. This should be enough to dismiss L&E as another example of the triumph of ideology over science. In addition, the extreme version of L&E transforms justice into a commodity and represents a disturbing backward movement in social thought. The critiques raised in this article should also suffice to call into question the idea that the main objective of legal systems is efficiency, and that efficiency is attained through the market system. There are no grounds to believe in the efficiency of the market system.

One final thought on the role of mathematics is important. In its development, economics as a discipline has been obsessed with the use of mathematical models to build a theory of competitive markets. The only function for the very awkward assumptions … was to allow the theoretician to have access to certain mathematical theorems. Functioning in this manner, economic theory has sacrificed the construction of relevant economic concepts for the sake of using mathematical tools. This is not how scientific discourse should advance, and the followers of L&E are probably not aware of this. In fact, they may have fallen victim to the illusion of scientific rigor conferred by the use, and abuse, of mathematics.

Alejandro Nadal

[For yours truly’s own take on the Coase Theorem and Law & Economics — in Swedish only, sorry — see here or my “Dr Pangloss, Coase och välfärdsteorins senare öden,” Zenit, 4/1996]

Confidence — the fairy that turned out to be a witch

16 April, 2015 at 10:28 | Posted in Economics | Comments Off on Confidence — the fairy that turned out to be a witch

Remember the old times? Here is a quote from ECB President Jean-Claude Trichet … on September 3rd, 2010:

“We encourage all countries to be absolutely determined to go back to a sustainable mode for their fiscal policies,” Trichet said, speaking after the ECB rate decision on Thursday. “Our message is the same for all, and we trust that it is absolutely decisive not only for each country individually, but for prosperity of all.”

“Not because it is an elementary recommendation to care for your sons and daughter and not overburden them, but because it is good for confidence, consumption and investment today”.

Well, think again. Here is the abstract of ECB Working Paper no 1770, March 2015:

“We explore how fiscal consolidations affect private sector confidence, a possible channel for the fiscal transmission that has received particular attention recently as a result of governments embarking on austerity trajectories in the aftermath of the crisis … The effects are stronger for revenue-based measures and when institutional arrangements, such as fiscal rules, are weak … Consumer confidence falls around announcements of consolidation measures, an effect driven by revenue-based measures. Moreover, the effects are most relevant for European countries with weak institutional arrangements, as measured by the tightness of fiscal rules or budgetary transparency.”

The confidence fairy seems to have turned into a confidence witch. One more victim of the crisis. But this one will not be missed.

Francesco Saraceno

Economists — arrogant and self-congratulatory autists

15 April, 2015 at 15:49 | Posted in Economics | 2 Comments

Ten years ago, a survey published in the Journal of Economic Perspectives found that 77 percent of the doctoral candidates in the leading American economics programs agreed or strongly agreed with the statement “economics is the most scientific of the social sciences.”

In the intervening decade, a massive economic crisis rocked the global economy, and most economists never saw it coming. Nevertheless, little has changed: A new paper from the same publication reveals how economists continue to believe that their science is superior to all other social sciences, such as political science, sociology, anthropology, etc. While there may be budding intentions to appeal to other disciplines in order to enrich their theories (especially psychology and neuroscience), the reality is that economists almost exclusively study—and cite—each other …

The world is still living with the effects of the most recent economic crisis, and the inability of economists to offer solutions with a significant degree of agreement shows how urgently their discipline needs to be disrupted by an injection of new ideas, methods, and assumptions about human behavior. Unfortunately, there are powerful obstacles to this disruption: elite control and lack of gender diversity …

Ten years ago, I suggested that economists would “be well advised to trade in their intellectual haughtiness for a more humble disposition.” That’s advice that has yet to be heeded.

Moisés Naím/The Atlantic

My new book is out

14 April, 2015 at 19:37 | Posted in Economics | 2 Comments

“A wonderful set of clearly written and highly informative essays by a scholar who is knowledgeable, critical and sharp enough to see how things really are in the discipline, and honest and brave enough to say how things are. A must read especially for those truly concerned and/or puzzled about the state of modern economics.”

Tony Lawson

Table of Contents
Introduction
What is (wrong with) economic theory?
Capturing causality in economics and the limits of statistical inference
Microfoundations – spectacularly useless and positively harmful
Economics textbooks – anomalies and transmogrification of truth
Rational expectations – a fallacious foundation for macroeconomics
Neoliberalism and neoclassical economics
The limits of marginal productivity theory
References

About the author
Lars Pålsson Syll received a PhD in economic history in 1991 and a PhD in economics in 1997, both at Lund University, Sweden. Since 2004 he has been professor of social science at Malmö University, Sweden. His primary research areas have been in the philosophy and methodology of economics, theories of distributive justice, and critical realist social science. As philosopher of science and methodologist he is a critical realist and an outspoken opponent of all kinds of social constructivism and postmodern relativism. As social scientist and economist he is strongly influenced by John Maynard Keynes and Hyman Minsky. He is the author of Social Choice, Value and Exploitation: an Economic-Philosophical Critique (in Swedish, 1991), Utility Theory and Structural Analysis (1997), Economic Theory and Method: A Critical Realist Perspective (in Swedish, 2001), The Dismal Science (in Swedish, 2001), The History of Economic Theories (in Swedish, 4th ed., 2007), John Maynard Keynes (in Swedish, 2007), An Outline of the History of Economics (in Swedish, 2011), as well as numerous articles in scientific journals.

World Economics Association Books

Is there anything worth keeping in mainstream microeconomics?

14 April, 2015 at 12:48 | Posted in Economics | 1 Comment

The main reason why the teaching of microeconomics (or of “micro foundations” of macroeconomics) has been called “autistic” is because it is increasingly impossible to discuss real-world economic questions with microeconomists – and with almost all neoclassical theorists. They are trapped in their system, and don’t in fact care about the outside world any more. If you consult any microeconomic textbook, it is full of maths (e.g. Kreps or Mas-Colell, Whinston and Green) or of “tales” (e.g. Varian or Schotter), without real data (occasionally you find “examples”, or “applications”, with numerical examples – but they are purely fictitious, invented by the authors).

At first, French students got quite a lot of support from teachers and professors: hundreds of teachers signed petitions backing their movement – especially pleading for “pluralism” in teaching the different ways of approaching economics. But when the students proposed a precise program of studies … almost all teachers refused, considering that it was “too much” because “students must learn all these things, even with some mathematical details”. When you ask them “why?”, the answer usually goes something like this: “Well, even if we, personally, never use the kind of ‘theory’ or ‘tools’ taught in microeconomics courses … surely there are people who do ‘use’ and ‘apply’ them, even if it is in an ‘unrealistic’, or ‘excessive’ way”.

But when you ask those scholars who do “use these tools”, especially those who do a lot of econometrics with “representative agent” models, they answer (if you insist quite a bit): “OK, I agree with you that it is nonsense to represent the whole economy by the (intertemporal) choice of one agent – consumer and producer – or by a unique household that owns a unique firm; but if you don’t do that, you don’t do anything!”

Bernard Guerrien

Yes indeed — “you don’t do anything!”

Twenty years ago Phil Mirowski was invited to give a speech on themes from his book More Heat than Light at my economics department in Lund, Sweden. All the neoclassical professors were there. Their theories were totally mangled and no one — absolutely no one — had anything to say even remotely reminiscent of a defense. Nonplussed, one of them, in total desperation, finally asked: “But what shall we do then?”

Yes indeed — what shall they do? The emperor turned out to be naked.

[h/t Edward Fullbrook]

Does big government help or hurt?

14 April, 2015 at 09:44 | Posted in Economics | Comments Off on Does big government help or hurt?

 

Mastering ‘metrics

13 April, 2015 at 14:41 | Posted in Economics | 4 Comments

In their new book, Mastering ‘Metrics: The Path from Cause to Effect, Joshua D. Angrist and Jörn-Steffen Pischke write:

Our first line of attack on the causality problem is a randomized experiment, often called a randomized trial. In a randomized trial, researchers change the causal variables of interest … for a group selected using something like a coin toss. By changing circumstances randomly, we make it highly likely that the variable of interest is unrelated to the many other factors determining the outcomes we want to study. Random assignment isn’t the same as holding everything else fixed, but it has the same effect. Random manipulation makes other things equal hold on average across the groups that did and did not experience manipulation. As we explain … ‘on average’ is usually good enough.

Angrist and Pischke may “dream of the trials we’d like to do” and consider “the notion of an ideal experiment” something that “disciplines our approach to econometric research,” but to maintain that ‘on average’ is “usually good enough” is an allegation that in my view is rather unwarranted, and for many reasons.

First of all it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used to basically allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant “structural” causal effect and ε an error term.

The problem here is that although we may get an estimate of the “true” average causal effect, this may “mask” important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are “treated” (X = 1) may have causal effects equal to −100 and those “not treated” (X = 0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
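
A small simulation (my own construction, not Angrist and Pischke’s) makes the masking concrete: half the population gains 100 from treatment and the other half loses 100, so the estimated average treatment effect is close to zero even though no individual effect is anywhere near zero.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Two equally large types: treatment raises Y by 100 for one, lowers it by 100 for the other.
effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)
treated = rng.random(n) < 0.5                    # randomized assignment
y = rng.normal(50, 10, n) + treated * effect

ate = y[treated].mean() - y[~treated].mean()
print("estimated average treatment effect:", round(ate, 1))          # close to 0

for label, mask in [("+100 type", effect > 0), ("-100 type", effect < 0)]:
    subgroup = y[treated & mask].mean() - y[~treated & mask].mean()
    print(f"effect within the {label}: {subgroup:.1f}")              # close to +100 / -100
# The 'on average' answer hides the fact that the treatment strongly helps
# half the population and strongly harms the other half.
```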

Limiting model assumptions in economic science always have to be closely examined. If we want to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we “export” them to our “target systems”, we have to show that they hold for more than just ceteris paribus conditions. If they only hold ceteris paribus, they are a fortiori of limited value for our understanding, explanation and prediction of real economic systems.

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of the achievements of econometrics – as most of contemporary endeavours of mainstream economic theoretical modeling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage The Flaw of Averages

When Joshua Angrist and Jörn-Steffen Pischke in an earlier article of theirs [“The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics,” Journal of Economic Perspectives, 2010] say that

anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future

I really think they underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to “export” regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

But when the randomization is purposeful, a whole new set of issues arises — experimental contamination — which is much more serious with human subjects in a social system than with chemicals mixed in beakers … Anyone who designs an experiment in economics would do well to anticipate the inevitable barrage of questions regarding the valid transference of things learned in the lab (one value of z) into the real world (a different value of z) …

Absent observation of the interactive compounding effects z, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting βₜ for β₀ + β′zₜ, the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them …

If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings.

Ed Leamer

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have lately also come to advocate randomization as the principal method for ensuring valid causal inferences.

I would however rather argue that randomization, just as econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just as econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
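
The “controls only on average” point is easy to see in a finite trial. The sketch below (my own, with simulated data) randomizes a sample of fifty units many times over and records the gap between treatment and control groups on a background characteristic the experimenter never observes: averaged over all hypothetical randomizations the gap is zero, but in any single trial it can easily be sizeable.

```python
import numpy as np

rng = np.random.default_rng(11)

def imbalance(n=50):
    """Treatment-control gap in an unobserved confounder after one randomization."""
    confounder = rng.normal(0, 1, n)            # never measured by the experimenter
    treated = rng.permutation(n) < n // 2       # random assignment, half and half
    return confounder[treated].mean() - confounder[~treated].mean()

gaps = np.array([imbalance() for _ in range(5_000)])

print("average gap across all randomizations:", round(gaps.mean(), 3))        # about 0
print("share of single trials with |gap| > 0.3 sd:", round(np.mean(np.abs(gaps) > 0.3), 2))
# Randomization balances confounders in expectation, i.e. over infinitely many
# repetitions, but the one finite experiment actually run offers no such guarantee.
```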

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Angrist’s and Pischke’s “ideally controlled experiments” tell us with certainty what causes what effects — but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods — and ‘on-average-knowledge’ — is despairingly small.

The cleavage that counts

13 April, 2015 at 12:55 | Posted in Economics | 3 Comments

On the one side were those who believed that the existing economic system is in the long run self-adjusting, though with creaks and groans and jerks, and interrupted by time-lags, outside interference and mistakes … These economists did not, of course, believe that the system is automatic or immediately self-adjusting, but they did maintain that it has an inherent tendency towards self-adjustment, if it is not interfered with, and if the action of change and chance is not too rapid.

Those on the other side of the gulf, however, rejected the idea that the existing economic system is, in any significant sense, self-adjusting. They believed that the failure of effective demand to reach the full potentialities of supply, in spite of human psychological demand being immensely far from satisfied for the vast majority of individuals, is due to much more fundamental causes …

The gulf between these two schools of thought is deeper, I believe, than most of those on either side of it realize. On which side does the essential truth lie?

The strength of the self-adjusting school depends on its having behind it almost the whole body of organized economic thinking and doctrine of the last hundred years. This is a formidable power … It has vast prestige and a more far-reaching influence than is obvious. For it lies behind the education and the habitual modes of thought, not only of economists but of bankers and business men and civil servants and politicians of all parties …

Now I range myself with the heretics. I believe their flair and their instinct move them towards the right conclusion. But I was brought up in the citadel and I recognize its power and might … For me, therefore, it is impossible to rest satisfied until I can put my finger on the flaw in the part of the orthodox reasoning that leads to the conclusions that for various reasons seem to me to be inacceptable. I believe that I am on my way to do so. There is, I am convinced, a fatal flaw in that part of the orthodox reasoning that deals with the theory of what determines the level of effective demand and the volume of aggregate employment …

John Maynard Keynes (1934)

Balance sheet recessions — a massive case of fallacy of composition problems

11 April, 2015 at 17:43 | Posted in Economics | 3 Comments

 

The way I understand Richard Koo, he maintains that interest rates and monetary policy don’t really matter when we’re in a balance sheet recession where, following a nationwide collapse in asset prices, more or less every company and household finds itself carrying excess debt and has to pay it down. The number of willing private borrowers is strongly reduced – even when interest rates are at zero – and as a result of this “debt minimization” monetary policy by itself loses all power. To get things going, the government has to run a fiscal deficit: by increasing its borrowing it produces an increase in the money supply and thereby makes monetary policy work.

Paul Krugman had a post up earlier this year, basically maintaining that this argument can’t be right, since if there are some people – debtors – in the balance sheet recession that pay down their debt, there also have to be other people – creditors – that a fortiori strengthen their balance sheets, and who are susceptible to being influenced by what happens to interest rates and inflation.

To be honest, I have some problems seeing the great gulf between them – at least on the level of general principles – that one is led to believe ought to be there, considering all the heated discussion there has been on this issue between them for a couple of years now.

For although it’s true, as Koo says, that for those firms that try to minimize debt, no injections whatsoever that the central bank makes will generate inflationary impulses, for others – and probably not even in the worst balance sheet recessions imaginable are all firms debt-constrained – there might be room for some (limited) inflationary generation by monetary means. So ultimately it looks more like a difference in degree than in kind. To Koo, monetary policy by itself has no power, and instead we have to put our trust in fiscal policy. Krugman, on the other hand, says that some private actors might not be balance sheet-constrained and are therefore susceptible to (inflationary) monetary policy, and that, besides, fiscal policy can work anyway. And more importantly – both definitely agree that increased liquidity will not always and everywhere get the economy out of a slump, and that neither fiscal nor monetary policy in itself is capable of solving the problems created in a balance sheet recession.

Market fundamentalist ideologies

11 April, 2015 at 17:19 | Posted in Economics | Comments Off on Market fundamentalist ideologies

 

On the irrelevance of general equilibrium theory

11 April, 2015 at 11:18 | Posted in Economics | 1 Comment

The general equilibrium approach starts with individual decisions. It assumes that trades are voluntary and that there exist mutually advantageous opportunities of exchange. Up to here, everyone can agree. The problem lies in the next step. At this point, let us follow David Kreps’s (1990) reasoning in his A Course in Microeconomic Theory. Kreps asks the reader to “imagine consumers wandering around a large market square” with different kinds of food in their bags. When two of them meet, “they examine what each has to offer, to see if they can arrange a mutually agreeable trade. To be precise, we might imagine that at every chance meeting of this sort, the two flip a coin and depending on the outcome, one is allowed to propose an exchange, which the other may either accept or reject. The rule is that you can’t eat until you leave the market square, so consumers wait until they are satisfied with what they possess” (196).

Kreps “imagines” other models of this kind. In each of them by the word “market” he means a “market square,” and he introduces rules (“flip a coin,” “nobody can leave before the end of the process”). He is aware that “exploration of more realistic models of markets is in relative infancy.” And when he speaks of “more realistic” models, he means more realistic with respect to perfect competition.

But the problem with perfect competition is not its “lack” of realism; it is its “irrelevancy” as it surreptitiously assumes an entity that gives prices (present and future) to price taking agents, that collects information about supplies and demands, adds these up, moves prices up and down until it finds their equilibrium value. Textbooks do not tell this story; they assume that a deus ex machina called the “market” does the job.

Sorry, but we do not want to teach these absurdities. In the real world, people trade with each other, not with “the market.” And some of them, at least, are price makers. To make things worse, textbooks generally allude to some mysterious “invisible hand” that allocates goods optimally. They wrongly attribute this idea to Adam Smith and make use of his authority so that students accept this magical way of thinking as a kind of proof.

Perfect competition in the general equilibrium mode is perhaps an interesting model for describing a central planner who is trying to find an efficient allocation of resources using prices as signals that guide price taker households and firms. But students should be told that the course they follow—on “general competitive analysis”—is irrelevant for understanding market economies.

Emmanuelle Benicourt & Bernard Guerrien

I can’t but agree with these two eminent French mathematical economists. You could, of course, as Brad DeLong has asserted, consider modern neoclassical economics to be in fine shape “as long as it is understood as the ideological and substantive legitimating doctrine of the political theory of possessive individualism” and as long as you manage to turn a blind eye to all the caveats to its general equilibrium models — markets must be in equilibrium and competitive, the goods traded must be excludable and non-rival, etc., etc. The list of caveats soon becomes impressively large — and not very much value is left of “modern neoclassical economics” if you ask me …

Still — almost a century and a half after Léon Walras founded neoclassical general equilibrium theory — “modern neoclassical economics” hasn’t been able to show that markets move economies to equilibria.

We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient. One however has to ask oneself — what good does that do?

As long as we cannot show, except under exceedingly special assumptions, that there are convincing reasons to suppose there are forces which lead economies to equilibria, the value of general equilibrium theory is negligible. And as long as we cannot demonstrate that such equilibrating forces operate under reasonable, relevant and at least mildly realistic conditions, there cannot really be any sustainable reason for anyone to pay attention to this theory.
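To make concrete what such a demonstration would have to deliver, recall the standard Walrasian tâtonnement story (a minimal sketch; the notation is mine): a fictitious auctioneer adjusts the price of each good in proportion to its aggregate excess demand,

\[ \frac{dp_i}{dt} = \lambda_i \, z_i(p), \qquad \lambda_i > 0, \quad i = 1, \dots, n, \]

where \(z_i(p)\) is the excess demand for good \(i\) at the price vector \(p\). Existence of a \(p^{*}\) with \(z(p^{*}) = 0\) can be established under the usual assumptions, but global convergence of this adjustment process to \(p^{*}\) is only known to hold under very special extra restrictions on excess demand, gross substitutability being the textbook example. And, of course, no actual market economy has an auctioneer of this kind in the first place.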

A stability that can only be proved by assuming “Santa Claus” conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up.

Continuing to model a world full of agents behaving as economists — “often wrong, but never uncertain” — while still not being able to show that the system converges to equilibrium under reasonable assumptions (or simply assuming the problem away) is a gross misallocation of intellectual resources and time.

The Bernanke-Summers imbroglio

10 April, 2015 at 18:24 | Posted in Economics | 6 Comments

As no one interested in macroeconomics has failed to notice, Ben Bernanke is having a debate with Larry Summers on what’s behind the slow recovery of growth rates since the financial crisis of 2007.

To Bernanke it’s basically a question of a savings glut.

To Summers it’s basically a question of a secular decline in the level of investment.

To me the debate is actually a non-starter, since they both rely on a loanable funds theory and a Wicksellian notion of a “natural” rate of interest — ideas that have been known to be dead wrong for at least 80 years …

Let’s start with the Wicksellian connection and consider what Keynes wrote in General Theory:

In my Treatise on Money I defined what purported to be a unique rate of interest, which I called the natural rate of interest, namely, the rate of interest which, in the terminology of my Treatise, preserved equality between the rate of saving (as there defined) and the rate of investment. I believed this to be a development and clarification of Wicksell’s ‘natural rate of interest’, which was, according to him, the rate which would preserve the stability of some, not quite clearly specified, price-level.

I had, however, overlooked the fact that in any given society there is, on this definition, a different natural rate of interest for each hypothetical level of employment. And, similarly, for every rate of interest there is a level of employment for which that rate is the ‘natural’ rate, in the sense that the system will be in equilibrium with that rate of interest and that level of employment. Thus it was a mistake to speak of the natural rate of interest or to suggest that the above definition would yield a unique value for the rate of interest irrespective of the level of employment. I had not then understood that, in certain conditions, the system could be in equilibrium with less than full employment.

I am now no longer of the opinion that the [Wicksellian] concept of a ‘natural’ rate of interest, which previously seemed to me a most promising idea, has anything very useful or significant to contribute to our analysis. It is merely the rate of interest which will preserve the status quo; and, in general, we have no predominant interest in the status quo as such.

And when it comes to the loanable funds theory, this is really in many regards nothing but an approach where the ruling rate of interest in society is — pure and simple — conceived as nothing else than the price of loans or credit, determined by supply and demand — as Bertil Ohlin put it — “in the same way as the price of eggs and strawberries on a village market.”

In the traditional loanable funds theory — as presented in mainstream macroeconomics textbooks — the amount of loans and credit available for financing investment is constrained by how much saving is available. Saving is the supply of loanable funds, investment is the demand for loanable funds, and the latter is assumed to be negatively related to the interest rate. Lowering households’ consumption means increasing savings, which, via a lower interest rate, is supposed to lead to higher investment.
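Schematically, and only as a minimal sketch of the textbook story (the notation is mine, not the textbooks’), the mechanism is supposed to be

\[ S(r) = I(r), \qquad S'(r) > 0, \quad I'(r) < 0, \]

with the rate of interest \(r\) treated as the price that clears the market for loanable funds: if households consume less, the saving schedule shifts outwards, \(r\) falls, and investment rises until the additional funds are absorbed.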

From a more Post-Keynesian-Minskyite point of view the problems with the standard presentation and formalization of the loanable funds theory are quite obvious.

As already noticed by James Meade decades ago, the causal story told to explicate the accounting identities used gives the picture of “a dog called saving wagged its tail labelled investment.” In Keynes’s view — and later over and over again confirmed by empirical research — it’s not so much the interest rate at which firms can borrow that causally determines the amount of investment undertaken, but rather their internal funds, profit expectations and capacity utilization.

As is typical of most mainstream macroeconomic formalizations and models, there is precious little mention of real-world phenomena, such as real money, credit rationing and the existence of multiple interest rates, in the loanable funds theory. Loanable funds theory essentially reduces the modern monetary economy to something akin to a barter system — something it definitely is not. As emphasized especially by Minsky, to understand and explain how much investment/loaning/crediting is going on in an economy, it’s much more important to focus on the workings of financial markets than to stare at accounting identities like S = Y – C – G. The problems we meet in modern markets today have more to do with inadequate financial institutions than with the size of loanable-funds savings.

The loanable funds theory also means that the interest rate is endogenized by assuming that Central Banks can (try to) adjust it in response to a possible output gap. This, of course, is essentially nothing but an assumption of Walras’ law being valid and applicable, and that a fortiori the attainment of equilibrium is secured by the Central Banks’ interest rate adjustments. From a realist Keynes-Minsky point of view this can’t be considered anything else than a belief resting on nothing but sheer hope. [Not to mention that more and more Central Banks actually choose not to follow Taylor-like policy rules.] The age-old belief that Central Banks control the money supply has more and more come to be questioned and replaced by an “endogenous” money view, and I think the same will happen to the view that Central Banks determine “the” rate of interest.
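For reference, the kind of “Taylor-like policy rule” alluded to here is, in Taylor’s original (1993) formulation, something like

\[ i_t = \pi_t + r^{*} + 0.5\,(\pi_t - \pi^{*}) + 0.5\,(y_t - \bar{y}_t), \]

where \(i_t\) is the nominal policy rate, \(\pi_t\) actual inflation, \(\pi^{*}\) the inflation target, \(r^{*}\) the assumed equilibrium real rate (Taylor set both \(r^{*}\) and \(\pi^{*}\) at 2 per cent), and \(y_t - \bar{y}_t\) the output gap.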

A further problem in the traditional loanable funds theory is that it assumes that saving and investment can be treated as independent entities. To Keynes this was seriously wrong:

The classical theory of the rate of interest [the loanable funds theory] seems to suppose that, if the demand curve for capital shifts or if the curve relating the rate of interest to the amounts saved out of a given income shifts or if both these curves shift, the new rate of interest will be given by the point of intersection of the new positions of the two curves. But this is a nonsense theory. For the assumption that income is constant is inconsistent with the assumption that these two curves can shift independently of one another. If either of them shift, then, in general, income will change; with the result that the whole schematism based on the assumption of a given income breaks down … In truth, the classical theory has not been alive to the relevance of changes in the level of income or to the possibility of the level of income being actually a function of the rate of investment.

There are always (at least) two parties to an economic transaction. Savers and investors have different liquidity preferences and face different choices — and their interactions usually take place only as intermediated by financial institutions. This, importantly, also means that there is no “direct and immediate” automatic interest-rate mechanism at work in modern monetary economies. What this ultimately boils down to is — again — that what happens at the microeconomic level — both in and out of equilibrium — is not always compatible with the macroeconomic outcome. The fallacy of composition (the “atomistic fallacy” of Keynes) has many faces — loanable funds is one of them.
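A back-of-the-envelope way of seeing Keynes’s point (a deliberately stylized sketch, assuming a closed economy and a constant marginal propensity to consume \(c\)): with \(Y = cY + I\), an increase in investment raises income by

\[ \Delta Y = \frac{\Delta I}{1 - c}, \qquad \text{so that} \qquad \Delta S = (1 - c)\,\Delta Y = \Delta I. \]

Saving adjusts to investment through changes in income, not through movements in the rate of interest, which is exactly why the two loanable funds “curves” cannot be shifted independently of one another.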

Contrary to the loanable funds theory, finance in the world of Keynes and Minsky precedes investment and saving. Highlighting the loanable funds fallacy, Keynes wrote in “The Process of Capital Formation” (1939):

Increased investment will always be accompanied by increased saving, but it can never be preceded by it. Dishoarding and credit expansion provides not an alternative to increased saving, but a necessary preparation for it. It is the parent, not the twin, of increased saving.

So, by way of conclusion, what I think both Bernanke and Summers “forget” when they hold to the loanable funds theory and the Wicksellian concept of a “natural” rate of interest is the Keynes-Minsky wisdom of truly acknowledging that finance — in all its different shapes — has its own dimension, and, if taken seriously, its effect on an analysis must modify the whole theoretical system and not just be added as an unsystematic appendage. Finance is fundamental to our understanding of modern economies, and acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

I may be too bold, but I’m willing to take the risk, and so recommend that both Bernanke and Summers make the following addition to their reading lists …

It should be emphasized that the equality between savings and investment … will be valid under all circumstances. In particular, it will be independent of the level of the rate of interest which was customarily considered in economic theory to be the factor equilibrating the demand for and supply of new capital. In the present conception investment, once carried out, automatically provides the savings necessary to finance it. Indeed, in our simplified model, profits in a given period are the direct outcome of capitalists’ consumption and investment in that period. If investment increases by a certain amount, savings out of profits are pro tanto higher …

One important consequence of the above is that the rate of interest cannot be determined by the demand for and supply of new capital because investment ‘finances itself.’
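In the simplified model Kalecki has in mind (a closed economy with no government and with workers spending all of their wages), the bookkeeping behind this is straightforward:

\[ P = C_k + I \qquad \Rightarrow \qquad S = P - C_k = I, \]

where \(P\) is gross profits, \(C_k\) capitalists’ consumption and \(I\) gross investment. Profits are determined by what capitalists spend, so an increase in investment shows up, pro tanto, as increased saving out of profits. That is the precise sense in which investment “finances itself.”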

Nicholas Kaldor on putting the cart before the horse fallacy

10 April, 2015 at 15:15 | Posted in Economics | 2 Comments

Foreseeing the future is difficult. But sometimes it seems as though some people get it terribly right …


Some day the nations of Europe may be ready to merge their national identities and create a new European Union – the United States of Europe. If and when they do, a European Government will take over all the functions which the Federal government now provides in the U.S., or in Canada or Australia. This will involve the creation of a “full economic and monetary union”. But it is a dangerous error to believe that monetary and economic union can precede a political union or that it will act (in the words of the Werner report) “as a leaven for the evolvement of a political union which in the long run it will in any case be unable to do without”. For if the creation of a monetary union and Community control over national budgets generates pressures which lead to a breakdown of the whole system it will prevent the development of a political union, not promote it.

Nicholas Kaldor (1971)
