British austerity delusion

30 Apr, 2015 at 13:15 | Posted in Economics | 3 Comments

Why does big business love austerity and hate Keynesian economics? After all, you might expect corporate leaders to want policies that produce strong sales and hence strong profits.

I’ve already suggested one answer: scare talk about debt and deficits is often used as a cover for a very different agenda, namely an attempt to reduce the overall size of government and especially spending on social insurance. This has been transparently obvious in the United States, where many supposed deficit-reduction plans just happen to include sharp cuts in tax rates on corporations and the wealthy even as they take away healthcare and nutritional aid for the poor. But it’s also a fairly obvious motivation in the UK, if not so crudely expressed. The “primary purpose” of austerity, the Telegraph admitted in 2013, “is to shrink the size of government spending” – or, as Cameron put it in a speech later that year, to make the state “leaner … not just now, but permanently” …

Business leaders love the idea that the health of the economy depends on confidence, which in turn – or so they argue – requires making them happy. In the US there were, until the recent takeoff in job growth, many speeches and opinion pieces arguing that President Obama’s anti-business rhetoric – which only existed in the right’s imagination, but never mind – was holding back recovery. The message was clear: don’t criticise big business, or the economy will suffer.

Paul Krugman

I definitely recommend that everyone read this well-argued Krugman article.

To many conservative and neoliberal politicians and economists there seems to be a spectre haunting the United States and Europe today — Keynesian ideas on governments pursuing policies raising effective demand and supporting employment. And some of the favourite arguments used among these Keynesophobics to fight it are the confidence argument and the doctrine of ‘sound finance.’

Is this witless crusade against economic reason new? Not at all. And had Krugman not had such a debonair attitude to the history of economic thought, he would for sure have encountered Michal Kalecki’s classic 1943 article — which basically gives the same answer to the questions posed as Krugman does seventy-two years later …

It should be first stated that, although most economists are now agreed that full employment may be achieved by government spending, this was by no means the case even in the recent past. Among the opposers of this doctrine there were (and still are) prominent so-called ‘economic experts’ closely connected with banking and industry. This suggests that there is a political background in the opposition to the full employment doctrine, even though the arguments advanced are economic. That is not to say that people who advance them do not believe in their economics, poor though this is. But obstinate ignorance is usually a manifestation of underlying political motives …

Clearly, higher output and employment benefit not only workers but entrepreneurs as well, because the latter’s profits rise. And the policy of full employment outlined above does not encroach upon profits because it does not involve any additional taxation. The entrepreneurs in the slump are longing for a boom; why do they not gladly accept the synthetic boom which the government is able to offer them? It is this difficult and fascinating question with which we intend to deal in this article …

We shall deal first with the reluctance of the ‘captains of industry’ to accept government intervention in the matter of employment. Every widening of state activity is looked upon by business with suspicion, but the creation of employment by government spending has a special aspect which makes the opposition particularly intense. Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis. But once the government learns the trick of increasing employment by its own purchases, this powerful controlling device loses its effectiveness. Hence budget deficits necessary to carry out government intervention must be regarded as perilous. The social function of the doctrine of ‘sound finance’ is to make the level of employment dependent on the state of confidence.

Michal Kalecki, ‘Political Aspects of Full Employment’

School mathematics

30 Apr, 2015 at 12:06 | Posted in Economics | 1 Comment

Mathematics teaching in the Swedish school system

Year 1970
A farmer sells a sack of potatoes for 20 kr. The production cost is 4/5 of the price. How large is the profit?

Year 1980
A farmer sells a sack of potatoes for 20 kr. The production cost is 16 kr. Please calculate the profit.

Year 2015
A farmer sells a sack of potatoes for 20 kr. The production cost is 4/5 thereof, which is 16 kr. The profit amounts to 1/5, equal to 4 kr. Underline the word “potatoes” and discuss with your classmate.

Validating assumptions

30 Apr, 2015 at 07:56 | Posted in Economics | 1 Comment

Piketty uses the terms “capital” and “wealth” interchangeably to denote the total monetary value of shares, housing and other assets. “Income” is measured in money terms. We shall reserve the term “capital” for the totality of productive assets evaluated at constant prices. The term “output” is used to denote the totality of net output (value-added) measured at constant prices. Piketty uses the symbol β to denote the ratio of “wealth” to “income” and he denotes the share of wealth-owners in total income by α. In his theoretical analysis this share is equated to the share of profits in total output. Piketty documents how α and β have both risen by a considerable amount in recent decades. He argues that this is not mere correlation, but reflects a causal link. It is the rise in β which is responsible for the rise in α. To reach this conclusion, he first assumes that β is equal to the capital-output ratio K/Y, as conventionally understood. From his empirical finding that β has risen, he concludes that K/Y has also risen by a similar amount. According to the neoclassical theory of factor shares, an increase in K/Y will only lead to an increase in α when the elasticity of substitution between capital and labour σ is greater than unity. Piketty asserts that this is the case. Indeed, based on movements in α and β, he estimates that σ is between 1.3 and 1.6 (page 221).

Thus, Piketty’s argument rests on two crucial assumptions: β = K/Y and σ > 1. Once these assumptions are granted, the neoclassical theory of factor shares ensures that an increase in β will lead to an increase in α. In fact, neither of these assumptions is supported by the empirical evidence which is surveyed briefly in the appendix. This evidence implies that the large observed rise in β in recent decades is not the result of a big rise in K/Y but is primarily a valuation effect …

Piketty argues that the higher income share of wealth-owners is due to an increase in the capital-output ratio resulting from a high rate of capital accumulation. The evidence suggests just the contrary. The capital-output ratio, as conventionally measured, has either fallen or been constant in recent decades. The apparent increase in the capital-output ratio identified by Piketty is a valuation effect reflecting a disproportionate increase in the market value of certain real assets. A more plausible explanation for the increased income share of wealth-owners is an unduly low rate of investment in real capital.

Robert Rowthorn

It seems to me that Rowthorn is closing in on the nodal point in Piketty’s picture of the long-term trends in income distribution in advanced economies.

Say we have a diehard neoclassical model (assuming the production function is homogeneous of degree one and unlimited substitutability) such as the standard Cobb-Douglas production function y = Ak^α, with A a given productivity parameter and k the ratio of capital stock to labour (K/L), a constant investment share λ out of output y, and a constant depreciation rate δ on the “capital per worker” k. The rate of accumulation of k is then Δk = λy − δk = λAk^α − δk. In steady state (*) we have λAk*^α = δk*, which gives λ/δ = k*/y* and k* = (λA/δ)^(1/(1−α)). Putting this value of k* into the production function gives the steady-state output per worker y* = Ak*^α = A^(1/(1−α)) (λ/δ)^(α/(1−α)).

Assume now an exogenous Harrod-neutral technological progress that increases y at a growth rate g (with a zero labour growth rate, and with y and k henceforth redefined as y/A and k/A respectively, so that the production function reads y = k^α). Then dk/dt = λy − (g + δ)k, which in the Cobb-Douglas case gives dk/dt = λk^α − (g + δ)k, with steady-state value k* = (λ/(g + δ))^(1/(1−α)) and capital-output ratio k*/y* = k*/k*^α = λ/(g + δ). If we instead use Piketty’s preferred formulation, with output and capital given net of depreciation, the final expression becomes k*/y* = λ/(g + λδ). Now what Piketty predicts is that g will fall and that this will increase the capital-output ratio. Say we have δ = 0.03, λ = 0.1 and g = 0.03 initially. This gives a capital-output ratio of around 3. If g falls to 0.01, the ratio rises to around 7.7. We reach analogous results if we use a basic CES production function with an elasticity of substitution σ > 1. With σ = 1.5, the capital share rises from 0.2 to 0.36 if the wealth-income ratio goes from 2.5 to 5, which according to Piketty is what has actually happened in rich countries during the last forty years.
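
For readers who want to check the arithmetic, here is a minimal Python sketch (my own, not Piketty’s or Rowthorn’s) that simply evaluates the steady-state formulas above with the stated parameter values, and adds a generic CES capital-share function to show the direction of the σ > 1 effect. The CES distribution parameter a = 0.15 is an arbitrary illustrative calibration; the sketch is not an attempt to reproduce the 0.2 to 0.36 figures quoted above, only the sign of the relationship between the capital share and the capital-output ratio.

```python
# Illustrative check of the steady-state formulas in the text.

def k_over_y_net(lam, g, delta):
    """Capital-output ratio with output and capital net of depreciation
    (Piketty's preferred formulation): k*/y* = lam / (g + lam*delta)."""
    return lam / (g + lam * delta)

def k_over_y_gross(lam, g, delta):
    """Gross formulation from the Cobb-Douglas derivation: k*/y* = lam / (g + delta)."""
    return lam / (g + delta)

def capital_share_ces(beta, a, sigma):
    """Capital share under a standard gross CES production function:
    alpha = a * beta**((sigma - 1) / sigma), with beta the capital-output ratio.
    The share rises with beta only when sigma > 1; 'a' is a free calibration choice."""
    return a * beta ** ((sigma - 1.0) / sigma)

lam, delta = 0.1, 0.03
for g in (0.03, 0.01):
    print(f"g = {g:.2f}: net k*/y* = {k_over_y_net(lam, g, delta):.2f}, "
          f"gross k*/y* = {k_over_y_gross(lam, g, delta):.2f}")
# g = 0.03 gives a net ratio of about 3; g = 0.01 gives about 7.7.

for beta in (2.5, 5.0):
    print(f"beta = {beta}: illustrative capital share = "
          f"{capital_share_ces(beta, a=0.15, sigma=1.5):.2f}")
```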

Being able to show that you can get these results using one or another of the available standard neoclassical growth models is of course — from a realist point of view — of limited value. As usual, the really interesting thing is how well the assumptions you make and the numerical values you put into the model specification accord with reality.

Professor Piketty chose a theoretical framework that simultaneously allowed him to produce catchy numerical predictions, in tune with his empirical findings, while soaring like an eagle above the ‘messy’ debates of political economists shunned by their own profession’s mainstream and condemned diligently to inquire, in pristine isolation, into capitalism’s radical indeterminacy. The fact that, to do this, he had to adopt axioms that are both grossly unrealistic and logically incoherent must have seemed to him a small price to pay.

Yanis Varoufakis

Significance testing — an embarrassing ritual

29 Apr, 2015 at 10:17 | Posted in Statistics & Econometrics | 1 Comment

Knowing the contents of a toolbox, of course, requires statistical thinking, that is, the art of choosing a proper tool for a given problem. Instead, one single procedure that I call the “null ritual” tends to be featured in texts and practiced by researchers. Its essence can be summarized in a few lines:

The null ritual:
1. Set up a statistical null hypothesis of “no mean difference” or “zero correlation.” Don’t specify the predictions of your research hypothesis or of any alternative substantive hypotheses.
2. Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as p < 0.05, p < 0.01, or p < 0.001 (whichever comes next to the obtained p-value).
3. Always perform this procedure …

The routine reliance on the null ritual discourages not only statistical thinking but also theoretical thinking. One does not need to specify one’s hypothesis, nor any challenging alternative hypothesis … The sole requirement is to reject a null that is identified with “chance.” Statistical theories such as Neyman–Pearson theory and Wald’s theory, in contrast, begin with two or more statistical hypotheses.

In the absence of theory, the temptation is to look first at the data and then see what is significant. The physicist Richard Feynman … has taken notice of this misuse of hypothesis testing. I summarize his argument:

Feynman’s conjecture:
To report a significant result and reject the null in favor of an alternative hypothesis is meaningless unless the alternative hypothesis has been stated before the data was obtained.

Feynman’s conjecture is again and again violated by routine significance testing, where one looks at the data to see what is significant. Statistical packages allow every difference, interaction, or correlation against chance to be tested. They automatically deliver ratings of “significance” in terms of stars, double stars, and triple stars, encouraging the bad after-the-fact habit. The general problem Feynman addressed is known as overfitting … Fitting per se has the same problems as storytelling after the fact, which leads to a “hindsight bias.” The true test of a model is to fix its parameters on one sample, and to test it in a new sample. Then it turns out that predictions based on simple heuristics can be more accurate than routine multiple regressions … Less can be more. The routine use of linear multiple regression exemplifies another mindless use of statistics …

We know but often forget that the problem of inductive inference has no single solution. There is no uniformly most powerful test, that is, no method that is best for every problem. Statistical theory has provided us with a toolbox with effective instruments, which require judgment about when it is right to use them … Judgment is part of the art of statistics.

To stop the ritual, we also need more guts and nerves. We need some pounds of courage to cease playing along in this embarrassing game. This may cause friction with editors and colleagues, but it will in the end help them to enter the dawn of statistical thinking.
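
Gigerenzer’s point about looking at the data first and then testing whatever turns out to be ‘significant’ is easy to demonstrate. The following sketch (my own illustration, not Gigerenzer’s) correlates an outcome that is pure noise with twenty predictors that are also pure noise, and reports everything that clears the conventional 5% threshold:

```python
# Simulation of the "null ritual": test every correlation at the 5% level and
# report whatever sticks. All variables are independent noise by construction,
# so every "significant" correlation is spurious.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1943)
n_obs, n_predictors = 100, 20

def spurious_hits():
    """Count noise predictors 'significantly' correlated with a noise outcome."""
    outcome = rng.standard_normal(n_obs)
    predictors = rng.standard_normal((n_predictors, n_obs))
    return sum(pearsonr(predictors[j], outcome)[1] < 0.05 for j in range(n_predictors))

hits = [spurious_hits() for _ in range(1000)]
print("average number of spurious 'discoveries' per study:", np.mean(hits))
print("share of studies with at least one 'discovery':", np.mean([h > 0 for h in hits]))
```

With twenty tests at the 5% level the expected number of false positives per study is 20 × 0.05 = 1, and roughly two runs in three (1 − 0.95^20 ≈ 0.64) deliver at least one star-studded ‘result’ — precisely the after-the-fact pattern Feynman’s conjecture warns against.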

‘Sometimes I Feel Like a Motherless Child’

26 Apr, 2015 at 12:17 | Posted in Varia | Comments Off on ‘Sometimes I Feel Like a Motherless Child’

 

The confidence fairy bleeding

25 Apr, 2015 at 12:31 | Posted in Economics | 2 Comments

The confidence factor affects government decision-making, but it does not affect the results of decisions. Except in extreme cases, confidence cannot cause a bad policy to have good results, and a lack of it cannot cause a good policy to have bad results, any more than jumping out of a window in the mistaken belief that humans can fly can offset the effect of gravity.

The sequence of events in the Great Recession that began in 2008 bears this out. At first, governments threw everything at it. This prevented the Great Recession from becoming Great Depression II. But, before the economy reached bottom, the stimulus was turned off, and austerity – accelerated liquidation of budget deficits, mainly by cuts in spending – became the order of the day.

Once winded political elites had recovered their breath, they began telling a story designed to preclude any further fiscal stimulus. The slump had been created by fiscal extravagance, they insisted, and therefore could be cured only by fiscal austerity. And not any old austerity: it was spending on the poor, not the rich, that had to be cut, because such spending was the real cause of the trouble.

Any Keynesian knows that cutting the deficit in a slump is bad policy. A slump, after all, is defined by a deficiency in total spending. To try to cure it by spending less is like trying to cure a sick person by bleeding.

So it was natural to ask economist/advocates of bleeding like Harvard’s Alberto Alesina and Kenneth Rogoff how they expected their cure to work. Their answer was that the belief that it would work – the confidence fairy – would ensure its success.

More precisely, Alesina argued that while bleeding on its own would worsen the patient’s condition, its beneficial impact on expectations would more than offset its debilitating effects. Buoyed by assurance of recovery, the half-dead patient would leap out of bed, start running, jumping, and eating normally, and would soon be restored to full vigor. The bleeding school produced some flaky evidence to show that this had happened in a few instances …

With the help of professors like Alesina, conservative conviction could be turned into scientific prediction. And when Alesina’s cure failed to produce rapid recovery, there was an obvious excuse: it had not been applied with enough vigor to be “credible.”

The cure, such as it was, finally came about, years behind schedule, not through fiscal bleeding, but by massive monetary stimulus. When the groggy patient eventually staggered to its feet, the champions of fiscal bleeding triumphantly proclaimed that austerity had worked.

The moral of the tale is simple: Austerity in a slump does not work, for the reason that the medieval cure of bleeding a patient never worked: it enfeebles instead of strengthening. Inserting the confidence fairy between the cause and effect of a policy does not change the logic of the policy; it simply obscures the logic for a time. Recovery may come about despite fiscal austerity, but never because of it.

Robert Skidelsky

 

Where to write blog posts and listen to music (private)

25 Apr, 2015 at 11:27 | Posted in Economics | Comments Off on Where to write blog posts and listen to music (private)

Karlskrona’s well-preserved town plan and wide baroque streets have resulted in UNESCO designating it a World Heritage site. Its archipelago is the southernmost of Sweden’s archipelagos.

A lovely place for writing blog posts — and listening to music like this:
 

‘De evige tre’

24 Apr, 2015 at 18:34 | Posted in Economics | Comments Off on ‘De evige tre’

 

Bayesianism — a dangerous scientific cul-de-sac

24 Apr, 2015 at 18:24 | Posted in Economics | 7 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in what makes up a good scientific explanation, Richard Miller’s Fact and Method is a must-read. His incisive critique of Bayesianism is still unsurpassed.

Lindeberg-Levy CLT (wonkish)

23 Apr, 2015 at 11:24 | Posted in Economics | Comments Off on Lindeberg-Levy CLT (wonkish)
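
The Lindeberg-Lévy CLT says that for i.i.d. random variables with finite mean μ and finite variance σ² > 0, the standardized sample mean √n(X̄n − μ)/σ converges in distribution to a standard normal. A minimal simulation sketch (my own, using skewed exponential draws) shows the standardization at work:

```python
# Minimal illustration of the Lindeberg-Levy CLT: standardized sample means of
# i.i.d. exponential draws (mean 1, variance 1) approach the standard normal.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0          # mean and std dev of the Exponential(1) distribution
n_reps = 100_000

for n in (2, 10, 100, 1000):
    samples = rng.exponential(scale=1.0, size=(n_reps, n))
    z = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma
    skew = ((z - z.mean()) ** 3).mean() / z.std() ** 3
    # The standard normal has skewness 0 and P(Z > 1.96) ~ 0.025.
    print(f"n = {n:4d}: skewness ~ {skew:+.3f}, P(Z > 1.96) ~ {np.mean(z > 1.96):.4f}")
```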

 

Short refresher on proof techniques (wonkish)

22 Apr, 2015 at 18:20 | Posted in Economics | Comments Off on Short refresher on proof techniques (wonkish)

 

Guess who’s paying for the Greek euro disaster

20 Apr, 2015 at 11:09 | Posted in Economics | Comments Off on Guess who’s paying for the Greek euro disaster


The euro has taken away the possibility for national governments to manage their economies in a meaningful way — and in Greece the people have had to pay the true costs of the concomitant misguided austerity policies.


Reality killed the Washington Consensus

19 Apr, 2015 at 17:01 | Posted in Economics | 1 Comment

Over the past three years, research coming from the [IMF] has increasingly challenged the orthodoxy that still shapes European policy making:
 

First, there was the widely discussed mea culpa in the October 2012 World Economic Outlook, when the IMF staff basically disavowed their own previous estimates of the size of multipliers, and in doing so they certified that austerity could not, and would not work …

Then, the Fund tackled the issue of income inequality, and broke another taboo, i.e. the dichotomy between fairness and efficiency. Turns out that unequal societies tend to perform less well, and IMF staff research reached the same conclusion …

Then, of course, there was the “public investment is a free lunch” chapter three of the World Economic Outlook, in the fall of 2014.

In between, they demolished another building block of the Washington Consensus: free capital movements may sometimes be destabilizing …
 
These results are not surprising per se. All of these issues are highly controversial, so it is obvious that research does not find unequivocal support for a particular view. All the more so if that view, like the Washington Consensus, is pretty much an ideological construction. Yet, the fact that research coming from the center of the empire acknowledges that the world is complex, and that interactions among agents go well beyond the working of efficient markets, is in my opinion quite something.

Francesco Saraceno

Stationary non-ergodicity (wonkish)

18 Apr, 2015 at 10:24 | Posted in Economics | 18 Comments

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick either of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be either one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — [H1 + … + Hn]/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 × 1/2 + 1/2 × 1/4 = 3/8, which obviously is not equal to 1/2 or 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
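
A few lines of simulation code make the distinction concrete. In this sketch (mine, simply restating the coin example numerically), every simulated history settles down to a time average of roughly 1/2 or 1/4 depending on which coin was drawn at the start, while the average across histories at any fixed toss stays near 3/8:

```python
# Two-coin example: stationary but non-ergodic.
# Coin A comes up heads with probability 1/2, coin B with probability 1/4.
# A coin is chosen once (with probability 1/2 each) and then tossed forever.
import numpy as np

rng = np.random.default_rng(42)
n_histories, n_tosses = 10_000, 5_000

p_heads = np.where(rng.random(n_histories) < 0.5, 0.5, 0.25)      # one coin per history
tosses = rng.random((n_histories, n_tosses)) < p_heads[:, None]   # H_t = 1 for heads

time_averages = tosses.mean(axis=1)          # long-run average within each history
ensemble_average = tosses[:, -1].mean()      # average across histories at a fixed toss

print("time averages cluster near:", np.round(np.quantile(time_averages, [0.1, 0.9]), 3))
print("ensemble average at a fixed toss:", round(ensemble_average, 3))  # ~ 3/8 = 0.375
```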

Models, math and macro

17 Apr, 2015 at 15:58 | Posted in Economics | 6 Comments

“To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.”

The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.

Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are, however, dismissed because—in a nutshell—there’s nothing better out there.

This argument is deficient in two respects. First, there is a self-evident flaw in a belief that, despite overwhelming and damning evidence that a particular tool is faulty—and dangerously so—that tool should not be abandoned because there is no obvious replacement.

The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:

“When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.”

Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.

When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—rather undermined it, he responded—predictably—by asking what should replace it.

The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.

What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.

Jo Michell

[h/t Jan Milch]

