Friedman’s response to Romer & Romer

29 February, 2016 at 18:27 | Posted in Economics | 3 Comments

As yours truly wrote the other day, reading the different reactions, critiques and ‘analyses’ of Gerald Friedman’s calculations on the long-term effects of implementing the Sanders program, the whole issue seems basically to boil down to whether the Verdoorn law is operative or not.

In Friedman’s response to Romer & Romer today this is made even clearer than in the original Friedman analysis:

The Romers … would acknowledge that following a negative shock, government stimulus spending may accelerate the recovery somewhat … They deny, however, that stimulus spending could change the permanent level of output … Like mosquitos on an otherwise delightful summer afternoon, slow growth is unfortunate but there is little that can safely be done about it.

Or maybe we can find safe pesticides. Here I agree with John Maynard Keynes that the economy can have a low-employment equilibrium because of a lack of effective demand, and I agree with Nicholas Kaldor and Petrus Verdoorn that productivity and the growth rate of capacity can be increased by policies that push the economy to a higher level of employment … I see an economy at low-employment equilibrium where discouraged workers have abandoned the labor market and firms have had little incentive to innovate or to raise productivity. In this situation, additional stimulus can not only temporarily raise output but, by priming the pump and encouraging additional private spending and investment, push the economy upwards towards capacity. And beyond, because at higher levels of employment, more people will look for work, more businesses will invest, and employment will grow faster and productivity will rise, pushing up the growth rate in capacity. That is why I see lasting effects from a government stimulus when, as now, the economy is in a low-employment equilibrium.

Is 0.999 … = 1? (wonkish)

29 February, 2016 at 12:52 | Posted in Statistics & Econometrics | 8 Comments

What is 0.999 …, really? Is it 1? Or is it some number infinitesimally less than 1?

The right answer is to unmask the question. What is 0.999 …, really? It appears to refer to a kind of sum:

0.9 + 0.09 + 0.009 + 0.0009 + …

But what does that mean? That pesky ellipsis is the real problem. There can be no controversy about what it means to add up two, or three, or a hundred numbers. But infinitely many? That’s a different story. In the real world, you can never have infinitely many heaps. What’s the numerical value of an infinite sum? It doesn’t have one — until we give it one. That was the great innovation of Augustin-Louis Cauchy, who introduced the notion of limit into calculus in the 1820s.

The British number theorist G. H. Hardy … explains it best: “It is broadly true to say that mathematicians before Cauchy asked not, ‘How shall we define 1 − 1 + 1 − 1 + …?’ but ‘What is 1 − 1 + 1 − 1 + …?’”

No matter how tight a cordon we draw around the number 1, the sum will eventually, after some finite number of steps, penetrate it, and never leave. Under those circumstances, Cauchy said, we should simply define the value of the infinite sum to be 1.
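
Spelled out, Cauchy’s move is just the limit of the partial sums. A minimal sketch of the formal step the quoted passage is describing:

\[
s_n = \sum_{k=1}^{n} \frac{9}{10^k} = 1 - 10^{-n},
\qquad
\lim_{n \to \infty} s_n = 1,
\]

since for any cordon $\varepsilon > 0$ the distance $|s_n - 1| = 10^{-n}$ falls below $\varepsilon$ after finitely many steps, and stays there.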

I have no problem with solving problems in mathematics by ‘defining’ them away. But what about the real world? Maybe that ought to be a question to ponder even for economists all too fond of uncritically following the mathematical way when applying their mathematical models to the real world, where indeed “you can never have infinitely many heaps” …

In econometrics we often run into the ‘Cauchy logic’ — the data is treated as if it were from a larger population, a ‘superpopulation’ where repeated realizations of the data are imagined. Just imagine there could be more worlds than the one we live in and the problem is fixed …

Accepting Haavelmo’s domain of probability theory and sample space of infinite populations – just like Fisher’s “hypothetical infinite population, of which the actual data are regarded as constituting a random sample”, von Mises’s “collective” or Gibbs’s “ensemble” – also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. Like the Cauchy mathematical logic of ‘defining’ away problems, it is simply not tenable.

In social sciences — including economics — it’s always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …

Transitivity — just another questionable assumption

29 February, 2016 at 10:33 | Posted in Economics | 1 Comment

My doctor once recommended I take niacin for the sake of my heart. Yours probably has too, unless you’re a teenager or a marathon runner or a member of some other metabolically privileged caste. Here’s the argument: Consumption of niacin is correlated with higher levels of HDL, or “good cholesterol,” and high HDL is correlated with lower risk of “cardiovascular events.” If you’re not a native speaker of medicalese, that means people with plenty of good cholesterol are less likely on average to clutch their hearts and keel over dead.

But a large-scale trial carried out by the National Heart, Lung, and Blood Institute was halted in 2011, a year and a half before the scheduled finish, because the results were so weak it didn’t seem worth it to continue. Patients who got niacin did indeed have higher HDL levels, but they had just as many heart attacks and strokes as everybody else.

How can this be? Because correlation isn’t transitive. That is: Just because niacin is correlated with HDL, and high HDL is correlated with low risk of heart disease, you can’t conclude that niacin is correlated with low risk of heart disease.

Transitive relations are ones like “weighs more than.” If I weigh more than my son and my son weighs more than my daughter, it’s an absolute certainty that I weigh more than my daughter. “Lives in the same city as” is transitive, too—if I live in the same city as Bill, who lives in the same city as Bob, then I live in the same city as Bob.

But many of the most interesting relations we find in the world of data aren’t transitive. Correlation, for instance, is more like “blood relation.” I’m related to my son, who’s related to my wife, but my wife and I aren’t blood relatives. In fact, it’s not a terrible idea to think of correlated variables as “sharing part of their DNA.” Suppose I run a boutique money management firm with just three investors, Laura, Sara, and Tim. Their stock positions are pretty simple: Laura’s fund is split 50–50 between Facebook and Google, Tim’s is one-half General Motors and one-half Honda, and Sara, poised between old economy and new, goes one-half Honda, one-half Facebook. It’s pretty obvious that Laura’s returns will be positively correlated with Sara’s; they have half their portfolio in common. And the correlation between Sara’s returns and Tim’s will be equally strong. But there’s no reason (except insofar as the whole stock market tends to move in concert) to think Tim’s performance has to be correlated with Laura’s. Those two funds are like the parents, each contributing one-half of their “genetic material” to form Sara’s hybrid fund.

Jordan Ellenberg
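
A quick simulation makes Ellenberg’s point tangible. This is a minimal sketch under the simplifying assumption that the four stocks’ daily returns are independent (ignoring the market-wide co-movement he explicitly brackets off); the variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated trading days

# Assumption: four independent return streams (no common market factor).
fb, goog, gm, honda = rng.standard_normal((4, n))

laura = 0.5 * fb + 0.5 * goog    # half Facebook, half Google
tim = 0.5 * gm + 0.5 * honda     # half GM, half Honda
sara = 0.5 * honda + 0.5 * fb    # half Honda, half Facebook

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"corr(Laura, Sara) = {corr(laura, sara):+.3f}")  # about +0.5
print(f"corr(Sara, Tim)   = {corr(sara, tim):+.3f}")    # about +0.5
print(f"corr(Laura, Tim)  = {corr(laura, tim):+.3f}")   # about 0
```

Laura correlates with Sara through their shared Facebook “DNA”, and Sara with Tim through Honda, yet Laura and Tim share nothing, so their correlation hovers around zero.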

Statistics — a question of life and death

29 February, 2016 at 10:19 | Posted in Statistics & Econometrics | 1 Comment

In 1997, Christopher, the eleven-week-old child of a young lawyer named Sally Clark, died in his sleep: an apparent case of Sudden Infant Death Syndrome (SIDS) … One year later, Sally’s second child, Harry, also died, aged just eight weeks. Sally was arrested and accused of killing the children. She was convicted of murdering them, and in 1999 was given a life sentence …

Now … I want to show how a simple mistaken assumption led to incorrect probabilities.

In this case the mistaken evidence came from Sir Roy Meadow, a paediatrician. Despite not being an expert statistician or probabilist, he felt able to make a statement about probabilities … He asserted that the probability of two SIDS deaths in a family like Sally Clark’s was 1 in 73 million. A probability as small as this suggests we might apply Borel’s law: we shouldn’t expect to see an improbable event …

Unfortunately, however, Meadow’s 1 in 73 million probability is based on a crucial assumption: that the deaths are independent; that one such death in a family does not make it more or less likely that there will be another …

Now … that assumption does seem unjustified: data show that if one SIDS death has occurred, then a subsequent child is about ten times more likely to die of SIDS … To arrive at a valid conclusion, we would have to compare the probability that the two children had been murdered with the probability that they had both died from SIDS … There is a factor-of-ten difference between Meadow’s estimate and the estimate based on recognizing that SIDS events in the same family are not independent, and that difference shifts the probability from favouring homicide to favouring SIDS deaths …

Following widespread criticism of the misuse and indeed misunderstanding of statistical evidence, Sally Clark’s conviction was overturned, and she was released in 2003.
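
To make the arithmetic concrete, here is a back-of-the-envelope sketch. The 1-in-8,543 single-death rate is the figure commonly reported as underlying Meadow’s calculation; treat it as an assumption, since the excerpt above only gives the 1-in-73-million result:

```python
# Single-family SIDS death rate behind Meadow's figure (assumed input).
p_single = 1 / 8543

# Meadow's calculation: treat the two deaths as independent and square.
p_independent = p_single ** 2
print(f"Assuming independence: 1 in {1 / p_independent:,.0f}")  # ~1 in 73 million

# Correction: after one SIDS death, a second is about ten times more likely.
p_dependent = p_single * (10 * p_single)
print(f"Allowing dependence:   1 in {1 / p_dependent:,.0f}")    # ~1 in 7.3 million
```

The factor-of-ten gap between the two numbers is exactly the difference the passage describes, and it is large enough to flip the comparison between the homicide and SIDS hypotheses.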

Tina

28 February, 2016 at 23:52 | Posted in Varia | Comments Off on Tina

 

A guide to econometrics

28 February, 2016 at 10:37 | Posted in Statistics & Econometrics | Comments Off on A guide to econometrics

1. Thou shalt use common sense and economic theory.
2. Thou shalt ask the right question.
3. Thou shalt know the context.
4. Thou shalt inspect the data.
5. Thou shalt not worship complexity.
6. Thou shalt look long and hard at thy results.
7. Thou shalt beware the costs of data mining.
8. Thou shalt be willing to compromise.
9. Thou shalt not confuse statistical significance with substance.
10. Thou shalt confess in the presence of sensitivity.

Bernie Sanders and the Verdoorn law

27 February, 2016 at 16:54 | Posted in Economics | 12 Comments

Reading the different reactions, critiques and ‘analyses’ of Gerald Friedman’s calculations on the long-term effects of implementing the Sanders program, it seems to me that the whole issue basically boils down to whether the Verdoorn law is operative or not.

Estimating the impact of the Sanders program, Friedman writes (p. 13):

Higher demand for labor is also associated with an increase in labor productivity and this accounts for about half of the increase in economic growth under the Sanders program.

Obviously, that’s a view that Christina Romer and David Romer (p. 8) don’t share:

Friedman … argues that as demand expansion raised output, endogenous productivity growth … would raise productive capacity by enough to prevent it from constraining output … The evidence that productivity growth would surge as a result of a demand-driven boom is weak. The fact that there is a correlation between output growth and productivity growth is not surprising. Periods of rapid productivity growth, such as the 1990s, are naturally also periods of rapid output growth. But this does not tell us that an extended period of rapid output growth resulting from demand stimulus would cause sustained high productivity growth …

In the standard mainstream economic analysis, a demand expansion may very well raise measured productivity — in the short run. But in the long run, expansionary demand policy measures cannot lead to sustained higher productivity and output levels.

In some non-standard heterodox analyses, however, labour productivity growth is often described as a function of output growth: the rate of technical progress varies directly with the rate of growth, according to the Verdoorn law. Growth and productivity are, on this view, highly demand-determined not only in the short run but also in the long run.
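
The canonical Verdoorn specification is a simple linear relation between the two growth rates. The coefficient below is the ballpark figure usually cited in the empirical literature, not a number taken from Friedman or from Romer & Romer:

\[
\dot{p}_t = a + b\,\dot{q}_t,
\]

where $\dot{p}_t$ is the growth rate of labour productivity, $\dot{q}_t$ the growth rate of output, and $b$ the Verdoorn coefficient, typically estimated at around 0.5, so that each additional percentage point of output growth comes with roughly half a percentage point of extra productivity growth.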

Given that the Verdoorn law is operative, Sanders’ policy could actually lead to increases in productivity and growth. Living in a world permeated by genuine Keynes-type uncertainty, we cannot, of course, forecast with any precision how large those effects would be.

So the nodal point is: has the Verdoorn law been validated in empirical studies or not?

There have been hundreds of studies trying to answer that question, and, as might be expected, the answers differ. The law has been investigated with different econometric methods (time series, IV, OLS, ECM, cointegration, etc.). The statistical and econometric problems are enormous (especially when it comes to the question, highlighted by Romer & Romer, of the direction of causality). Even so, most studies at the country level do confirm that the Verdoorn law holds — the United States included. Most of the studies cover the period before the subprime crisis of 2006/2007, but if anything, the evidence is more in line with Friedman than with Romer & Romer.

Oh, dear.

Bayesianism — an unacceptable scientific reasoning

26 February, 2016 at 16:14 | Posted in Theory of Science & Methodology | 5 Comments

A major, and notorious, problem with this approach, at least in the domain of science, concerns how to ascribe objective prior probabilities to hypotheses. What seems to be necessary is that we list all the possible hypotheses in some domain and distribute probabilities among them, perhaps ascribing the same probability to each employing the principle of indifference. But where is such a list to come from? It might well be thought that the number of possible hypotheses in any domain is infinite, which would yield zero for the probability of each and the Bayesian game cannot get started. All theories have zero probability and Popper wins the day. How is some finite list of hypotheses enabling some objective distribution of nonzero prior probabilities to be arrived at? My own view is that this problem is insuperable, and I also get the impression from the current literature that most Bayesians are themselves coming around to this point of view.

Chalmers is absolutely right here in his critique of ‘objective’ Bayesianism, but I think it could actually be extended to also encompass its ‘subjective’ variety.

A classic example — borrowed from Bertrand Russell — may perhaps be allowed to illustrate the main point of the critique:

Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians that do not eat turkeys and that every day I see the sun rise confirms my belief.” For every day you survive, you update your belief according to Bayes’ theorem

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately, for every day that goes by, the traditional Christmas dinner also gets closer and closer …
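
For readers who want to watch the updating at work, here is a minimal numerical sketch. The prior of 0.5 and the 0.98 survival probability under not-H are illustrative assumptions of mine, not part of Russell’s example:

```python
# The Bayesian turkey: updating daily on the evidence e = "not eaten today".
p_h = 0.5               # prior P(H): "people are nice vegetarians" (assumed)
p_e_given_h = 1.0       # under H, the turkey is never eaten
p_e_given_not_h = 0.98  # even under not-H, most days pass safely (assumed)

for day in range(1, 101):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # law of total probability
    p_h = p_e_given_h * p_h / p_e                          # Bayes' theorem
    if day % 25 == 0:
        print(f"Day {day:3d}: P(H | survived so far) = {p_h:.3f}")

# The posterior climbs steadily toward 1, right up until Christmas dinner.
```

Each uneventful day nudges P(H) upwards, exactly as Bayes’ theorem demands, while saying nothing about what happens on the day the evidence changes.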

The nodal point here is — of course — that although Bayes’ theorem is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions.

Bayesian probability calculus is far from the automatic inference engine that its protagonists maintain it is. Where do the priors come from? When we are uncertain, wouldn’t it be better in science to do some experimentation and observation, rather than to start calculating on the basis of people’s often vague and subjective personal beliefs? Is it, from an epistemological point of view, really credible to think that the Bayesian probability calculus makes it possible to somehow fully assess people’s subjective beliefs? And are — as Bayesians maintain — all scientific controversies and disagreements really explicable in terms of differences in prior probabilities? I’ll be dipped!

Making sense of data — categorical models

26 February, 2016 at 13:24 | Posted in Economics, Statistics & Econometrics | Comments Off on Making sense of data — categorical models


Great lecture by one of my favourite lecturers — Scott Page.

Olof Palme In Memoriam

24 February, 2016 at 15:27 | Posted in Politics & Society | 1 Comment


Olof Palme.

Born in January 1927.

Murdered in February 1986.

30 years on, and a loss my country — Sweden — is still suffering from.
