British austerity delusion

30 April, 2015 at 13:15 | Posted in Economics | 3 Comments

Why does big business love austerity and hate Keynesian economics? After all, you might expect corporate leaders to want policies that produce strong sales and hence strong profits.

I’ve already suggested one answer: scare talk about debt and deficits is often used as a cover for a very different agenda, namely an attempt to reduce the overall size of government and especially spending on social insurance. This has been transparently obvious in the United States, where many supposed deficit-reduction plans just happen to include sharp cuts in tax rates on corporations and the wealthy even as they take away healthcare and nutritional aid for the poor. But it’s also a fairly obvious motivation in the UK, if not so crudely expressed. The “primary purpose” of austerity, the Telegraph admitted in 2013, “is to shrink the size of government spending” – or, as Cameron put it in a speech later that year, to make the state “leaner … not just now, but permanently” …

Business leaders love the idea that the health of the economy depends on confidence, which in turn – or so they argue – requires making them happy. In the US there were, until the recent takeoff in job growth, many speeches and opinion pieces arguing that President Obama’s anti-business rhetoric – which only existed in the right’s imagination, but never mind – was holding back recovery. The message was clear: don’t criticise big business, or the economy will suffer.

Paul Krugman

I definitely recommend that everyone read this well-argued Krugman article.

To many conservative and neoliberal politicians and economists there seems to be a spectre haunting the United States and Europe today — Keynesian ideas on governments pursuing policies raising effective demand and supporting employment. And some of the favourite arguments used among these Keynesophobics to fight it are the confidence argument and the doctrine of ‘sound finance.’

Is this witless crusade against economic reason new? Not at all. And had Krugman not had such a debonair attitude to the history of economic thought, he would surely have encountered Michal Kalecki’s classic 1943 article, which gives essentially the same answer to these questions as Krugman does seventy-two years later …

It should be first stated that, although most economists are now agreed that full employment may be achieved by government spending, this was by no means the case even in the recent past. Among the opposers of this doctrine there were (and still are) prominent so-called ‘economic experts’ closely connected with banking and industry. This suggests that there is a political background in the opposition to the full employment doctrine, even though the arguments advanced are economic. That is not to say that people who advance them do not believe in their economics, poor though this is. But obstinate ignorance is usually a manifestation of underlying political motives …

Clearly, higher output and employment benefit not only workers but entrepreneurs as well, because the latter’s profits rise. And the policy of full employment outlined above does not encroach upon profits because it does not involve any additional taxation. The entrepreneurs in the slump are longing for a boom; why do they not gladly accept the synthetic boom which the government is able to offer them? It is this difficult and fascinating question with which we intend to deal in this article …

We shall deal first with the reluctance of the ‘captains of industry’ to accept government intervention in the matter of employment. Every widening of state activity is looked upon by business with suspicion, but the creation of employment by government spending has a special aspect which makes the opposition particularly intense. Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis. But once the government learns the trick of increasing employment by its own purchases, this powerful controlling device loses its effectiveness. Hence budget deficits necessary to carry out government intervention must be regarded as perilous. The social function of the doctrine of ‘sound finance’ is to make the level of employment dependent on the state of confidence.

Michal Kalecki, ‘Political aspects of full employment’


School mathematics

30 April, 2015 at 12:06 | Posted in Economics | 1 Comment

Mathematics teaching in Swedish schools

Year 1970
A farmer sells a sack of potatoes for 20 kr. The production cost is 4/5 of the price. How large is the profit?

Year 1980
A farmer sells a sack of potatoes for 20 kr. The production cost is 16 kr. Please calculate the profit.

Year 2015
A farmer sells a sack of potatoes for 20 kr. The production cost is 4/5 thereof, which is 16 kr. The profit amounts to 1/5, equal to 4 kr. Underline the word “potatoes” and discuss it with your classmate.

Validating assumptions

30 April, 2015 at 07:56 | Posted in Economics | 1 Comment

Piketty uses the terms “capital” and “wealth” interchangeably to denote the total monetary value of shares, housing and other assets. “Income” is measured in money terms. We shall reserve the term “capital” for the totality of productive assets evaluated at constant prices. The term “output” is used to denote the totality of net output (value-added) measured at constant prices. Piketty uses the symbol β to denote the ratio of “wealth” to “income” and he denotes the share of wealth-owners in total income by α. In his theoretical analysis this share is equated to the share of profits in total output. Piketty documents how α and β have both risen by a considerable amount in recent decades. He argues that this is not mere correlation, but reflects a causal link. It is the rise in β which is responsible for the rise in α. To reach this conclusion, he first assumes that β is equal to the capital-output ratio K/Y, as conventionally understood. From his empirical finding that β has risen, he concludes that K/Y has also risen by a similar amount. According to the neoclassical theory of factor shares, an increase in K/Y will only lead to an increase in α when the elasticity of substitution between capital and labour σ is greater than unity. Piketty asserts that this is the case. Indeed, based on movements in α and β, he estimates that σ is between 1.3 and 1.6 (page 221).

Thus, Piketty’s argument rests on two crucial assumptions: β = K/Y and σ > 1. Once these assumptions are granted, the neoclassical theory of factor shares ensures that an increase in β will lead to an increase in α. In fact, neither of these assumptions is supported by the empirical evidence which is surveyed briefly in the appendix. This evidence implies that the large observed rise in β in recent decades is not the result of a big rise in K/Y but is primarily a valuation effect …

Piketty argues that the higher income share of wealth-owners is due to an increase in the capital-output ratio resulting from a high rate of capital accumulation. The evidence suggests just the contrary. The capital-output ratio, as conventionally measured, has either fallen or been constant in recent decades. The apparent increase in the capital-output ratio identified by Piketty is a valuation effect reflecting a disproportionate increase in the market value of certain real assets. A more plausible explanation for the increased income share of wealth-owners is an unduly low rate of investment in real capital.

Robert Rowthorn

It seems to me that Rowthorn is closing in on the nodal point in Piketty’s picture of the long-term trends in income distribution in advanced economies.

Say we have a diehard neoclassical model (assuming the production function is homogeneous of degree one and unlimited substitutability) such as the standard Cobb-Douglas production function y = Ak^α (with A a given productivity parameter and k the ratio of capital stock to labour, K/L), with a constant investment share λ out of output y and a constant depreciation rate δ on the “capital per worker” k. The rate of accumulation of k, Δk = λy - δk, then equals Δk = λAk^α - δk. In steady state (denoted by *) we have λAk*^α = δk*, giving λ/δ = k*/y* and k* = (λA/δ)^{1/(1-α)}. Putting this value of k* into the production function gives the steady-state output per worker y* = Ak*^α = A^{1/(1-α)}(λ/δ)^{α/(1-α)}.

Assuming exogenous Harrod-neutral technological progress that increases y at a growth rate g (with a zero labour growth rate, and with y and k now redefined as y/A and k/A respectively, so that the production function reads y = k^α), we get dk/dt = λy - (g + δ)k, which in the Cobb-Douglas case gives dk/dt = λk^α - (g + δ)k, with steady-state value k* = (λ/(g + δ))^{1/(1-α)} and capital-output ratio k*/y* = k*/k*^α = λ/(g + δ). If we use Piketty’s preferred model, with output and capital given net of depreciation, the final expression becomes k*/y* = λ/(g + λδ).

Now what Piketty predicts is that g will fall and that this will increase the capital-output ratio. Say we have δ = 0.03, λ = 0.1 and g = 0.03 initially. This gives a capital-output ratio of around 3. If g falls to 0.01, the ratio rises to around 7.7. We reach analogous results with a basic CES production function with an elasticity of substitution σ > 1. With σ = 1.5, the capital share rises from 0.2 to 0.36 if the wealth-income ratio goes from 2.5 to 5, which according to Piketty is what has actually happened in rich countries during the last forty years.
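As a quick numerical check of the arithmetic above, here is a minimal sketch in Python (my own illustration, not part of any of the sources quoted in this post) that evaluates the net-of-depreciation steady-state expression k*/y* = λ/(g + λδ) at the parameter values used in the example:

```python
# A sketch: evaluate the steady-state capital-output ratio
# k*/y* = lambda / (g + lambda * delta)
# (output and capital net of depreciation, as in Piketty's setup).

def capital_output_ratio(lam: float, g: float, delta: float) -> float:
    """Steady-state k*/y* for a given saving rate, growth rate, and depreciation rate."""
    return lam / (g + lam * delta)

lam, delta = 0.10, 0.03
for g in (0.03, 0.01):
    print(f"g = {g:.2f}  ->  k*/y* = {capital_output_ratio(lam, g, delta):.2f}")

# Output:
# g = 0.03  ->  k*/y* = 3.03   (around 3, as stated)
# g = 0.01  ->  k*/y* = 7.69   (around 7.7, as stated)
```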

Being able to show that you can get these results using one or another of the available standard neoclassical growth models is of course — from a realist point of view — of limited value. As usual, the really interesting question is how well the assumptions you make and the numerical values you put into the model specification accord with reality.

Professor Piketty chose a theoretical framework that simultaneously allowed him to produce catchy numerical predictions, in tune with his empirical findings, while soaring like an eagle above the ‘messy’ debates of political economists shunned by their own profession’s mainstream and condemned diligently to inquire, in pristine isolation, into capitalism’s radical indeterminacy. The fact that, to do this, he had to adopt axioms that are both grossly unrealistic and logically incoherent must have seemed to him a small price to pay.

Yanis Varoufakis

Significance testing — an embarrassing ritual

29 April, 2015 at 10:17 | Posted in Statistics & Econometrics | 1 Comment

Knowing the contents of a toolbox, of course, requires statistical thinking, that is, the art of choosing a proper tool for a given problem. Instead, one single procedure that I call the “null ritual” tends to be featured in texts and practiced by researchers. Its essence can be summarized in a few lines:

The null ritual:
1. Set up a statistical null hypothesis of “no mean difference” or “zero correlation.” Don’t specify the predictions of your research hypothesis or of any alternative substantive hypotheses.
2. Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as p < 0.05, p < 0.01, or p < 0.001 (whichever comes next to the obtained p-value).
3. Always perform this procedure …

The routine reliance on the null ritual discourages not only statistical thinking but also theoretical thinking. One does not need to specify one’s hypothesis, nor any challenging alternative hypothesis … The sole requirement is to reject a null that is identified with “chance.” Statistical theories such as Neyman–Pearson theory and Wald’s theory, in contrast, begin with two or more statistical hypotheses.

In the absence of theory, the temptation is to look first at the data and then see what is significant. The physicist Richard Feynman … has taken notice of this misuse of hypothesis testing. I summarize his argument:

Feynman’s conjecture:
To report a significant result and reject the null in favor of an alternative hypothesis is meaningless unless the alternative hypothesis has been stated before the data was obtained.

Feynman’s conjecture is again and again violated by routine significance testing, where one looks at the data to see what is significant. Statistical packages allow every difference, interaction, or correlation against chance to be tested. They automatically deliver ratings of “significance” in terms of stars, double stars, and triple stars, encouraging the bad after-the-fact habit. The general problem Feynman addressed is known as overfitting … Fitting per se has the same problems as storytelling after the fact, which leads to a “hindsight bias.” The true test of a model is to fix its parameters on one sample, and to test it in a new sample. Then it turns out that predictions based on simple heuristics can be more accurate than routine multiple regressions … Less can be more. The routine use of linear multiple regression exemplifies another mindless use of statistics …

We know but often forget that the problem of inductive inference has no single solution. There is no uniformly most powerful test, that is, no method that is best for every problem. Statistical theory has provided us with a toolbox with effective instruments, which require judgment about when it is right to use them … Judgment is part of the art of statistics.

To stop the ritual, we also need more guts and nerves. We need some pounds of courage to cease playing along in this embarrassing game. This may cause friction with editors and colleagues, but it will in the end help them to enter the dawn of statistical thinking.
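To see the ritual’s emptiness in miniature, here is a small simulation in Python (my own sketch, not anything from Gigerenzer’s text): apply step 2 mechanically to two hundred pure-noise predictors of a pure-noise outcome, and the ritual dutifully awards its stars to roughly one correlation in twenty:

```python
# A sketch of the 'null ritual' applied mechanically to pure noise.
# Every true correlation is zero, yet about 5% of the tests come out
# 'significant' at the conventional p < 0.05 level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_obs, n_vars = 100, 200

y = rng.normal(size=n_obs)            # outcome: pure noise
X = rng.normal(size=(n_obs, n_vars))  # 200 candidate predictors: pure noise

pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(n_vars)]
hits = sum(p < 0.05 for p in pvals)
print(f"{hits} of {n_vars} noise correlations are 'significant' at the 5% level")
```

This is exactly why Feynman’s conjecture matters: unless the alternative hypothesis is stated, and the test sample fixed, before the data are obtained, “significance” of this kind is indistinguishable from noise.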

‘Sometimes I Feel Like a Motherless Child’

26 April, 2015 at 12:17 | Posted in Varia | Comments Off on ‘Sometimes I Feel Like a Motherless Child’

 

The confidence fairy bleeding

25 April, 2015 at 12:31 | Posted in Economics | 2 Comments

The confidence factor affects government decision-making, but it does not affect the results of decisions. Except in extreme cases, confidence cannot cause a bad policy to have good results, and a lack of it cannot cause a good policy to have bad results, any more than jumping out of a window in the mistaken belief that humans can fly can offset the effect of gravity.

The sequence of events in the Great Recession that began in 2008 bears this out. At first, governments threw everything at it. This prevented the Great Recession from becoming Great Depression II. But, before the economy reached bottom, the stimulus was turned off, and austerity – accelerated liquidation of budget deficits, mainly by cuts in spending – became the order of the day.

Once the winded political elites had recovered their breath, they began telling a story designed to preclude any further fiscal stimulus. The slump had been created by fiscal extravagance, they insisted, and therefore could be cured only by fiscal austerity. And not any old austerity: it was spending on the poor, not the rich, that had to be cut, because such spending was the real cause of the trouble.

Any Keynesian knows that cutting the deficit in a slump is bad policy. A slump, after all, is defined by a deficiency in total spending. To try to cure it by spending less is like trying to cure a sick person by bleeding.

So it was natural to ask economist-advocates of bleeding like Harvard’s Alberto Alesina and Kenneth Rogoff how they expected their cure to work. Their answer was that the belief that it would work – the confidence fairy – would ensure its success.

More precisely, Alesina argued that while bleeding on its own would worsen the patient’s condition, its beneficial impact on expectations would more than offset its debilitating effects. Buoyed by assurance of recovery, the half-dead patient would leap out of bed, start running, jumping, and eating normally, and would soon be restored to full vigor. The bleeding school produced some flaky evidence to show that this had happened in a few instances …

With the help of professors like Alesina, conservative conviction could be turned into scientific prediction. And when Alesina’s cure failed to produce rapid recovery, there was an obvious excuse: it had not been applied with enough vigor to be “credible.”

The cure, such as it was, finally came about, years behind schedule, not through fiscal bleeding, but by massive monetary stimulus. When the groggy patient eventually staggered to its feet, the champions of fiscal bleeding triumphantly proclaimed that austerity had worked.

The moral of the tale is simple: Austerity in a slump does not work, for the reason that the medieval cure of bleeding a patient never worked: it enfeebles instead of strengthening. Inserting the confidence fairy between the cause and effect of a policy does not change the logic of the policy; it simply obscures the logic for a time. Recovery may come about despite fiscal austerity, but never because of it.

Robert Skidelsky

 

Where to write blog posts and listen to music (private)

25 April, 2015 at 11:27 | Posted in Economics | Comments Off on Where to write blog posts and listen to music (private)

Karlskrona’s well-preserved town plan and wide baroque streets have resulted in UNESCO designating it a World Heritage site. Its archipelago is the southernmost of Sweden’s archipelagos.

A lovely place for writing blog posts — and listening to music like this:
 

‘De evige tre’

24 April, 2015 at 18:34 | Posted in Economics | Comments Off on ‘De evige tre’

 

Bayesianism — a dangerous scientific cul-de-sac

24 April, 2015 at 18:24 | Posted in Economics | 7 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of the black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in what makes a good scientific explanation, Richard Miller’s Fact and Method is a must-read. His incisive critique of Bayesianism is still unsurpassed.

Lindeberg-Levy CLT (wonkish)

23 April, 2015 at 11:24 | Posted in Economics | Comments Off on Lindeberg-Levy CLT (wonkish)

 
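For reference, the Lindeberg-Lévy central limit theorem in its classical form: if X₁, X₂, … are independent and identically distributed random variables with E[Xᵢ] = μ and Var(Xᵢ) = σ² < ∞, and X̄ₙ = (1/n)(X₁ + … + Xₙ), then

√n (X̄ₙ - μ) → N(0, σ²) in distribution as n → ∞.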
