Friedman’s response to Romer & Romer

29 Feb, 2016 at 18:27 | Posted in Economics | 3 Comments

As yours truly wrote the other day, reading the different reactions, critiques and ‘analyses’ of Gerald Friedman’s calculations on the long-term effects of implementing the Sanders program, the whole issue seems basically to boil down to whether or not the Verdoorn law is operative.

In Friedman’s response to Romer & Romer today this is made even clearer than in the original Friedman analysis:

The Romers … would acknowledge that following a negative shock, government stimulus spending may accelerate the recovery somewhat …They deny, however, that stimulus spending could change the permanent level of output … Like mosquitos on an otherwise delightful summer afternoon, slow growth is unfortunate but there is little that can safely be done about it.

Or maybe we can find safe pesticides. Here I agree with John Maynard Keynes that the economy can have a low-employment equilibrium because of a lack of effective demand, and I agree with Nicholas Kaldor and Petrus Verdoorn that productivity and the growth rate of capacity can be increased by policies that push the economy to a higher level of employment … I see an economy at low-employment equilibrium where discouraged workers have abandoned the labor market and firms have had little incentive to innovate or to raise productivity. In this situation, additional stimulus can not only temporarily raise output but, by priming the pump and encouraging additional private spending and investment, it can push the economy upwards towards capacity. And, beyond this, because at higher levels of employment more people will look for work, more businesses will invest, and employment will grow faster and productivity will rise, pushing up the growth rate in capacity. That is why I see lasting effects from a government stimulus when, as now, the economy is in a low-employment equilibrium.

Is 0.999 … = 1? (wonkish)

29 Feb, 2016 at 12:52 | Posted in Statistics & Econometrics | 8 Comments

What is 0.999 …, really? Is it 1? Or is it some number infinitesimally less than 1?

The right answer is to unmask the question. What is 0.999 …, really? It appears to refer to a kind of sum:

0.9 + 0.09 + 0.009 + 0.0009 + …

But what does that mean? That pesky ellipsis is the real problem. There can be no controversy about what it means to add up two, or three, or a hundred numbers. But infinitely many? That’s a different story. In the real world, you can never have infinitely many heaps. What’s the numerical value of an infinite sum? It doesn’t have one — until we give it one. That was the great innovation of Augustin-Louis Cauchy, who introduced the notion of limit into calculus in the 1820s.

The British number theorist G. H. Hardy … explains it best: “It is broadly true to say that mathematicians before Cauchy asked not, ‘How shall we define 1 – 1 + 1 – 1 + …?’ but ‘What is 1 – 1 + 1 – 1 + …?'”

No matter how tight a cordon we draw around the number 1, the sum will eventually, after some finite number of steps, penetrate it, and never leave. Under those circumstances, Cauchy said, we should simply define the value of the infinite sum to be 1.
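Cauchy’s move is easy to check numerically. A minimal Python sketch (using exact rational arithmetic to avoid floating-point noise) shows that the partial sums penetrate any ‘cordon’ drawn around 1 after finitely many steps and never leave it:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact sum of the first n terms 9/10 + 9/100 + ... + 9/10**n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

def terms_needed(tolerance):
    """Smallest n for which the partial sum is within `tolerance` of 1."""
    n = 0
    while abs(1 - partial_sum(n)) >= tolerance:
        n += 1
    return n

# After n terms the sum is exactly 1 - 10**(-n), so any cordon
# is breached after finitely many steps:
print(terms_needed(Fraction(1, 10**6)))  # 7 terms suffice for width 1e-6
```

Each partial sum falls short of 1 by exactly 10⁻ⁿ, which is why the limit — and hence, on Cauchy’s definition, the value of the infinite sum — is 1.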

I have no problem with solving problems in mathematics by ‘defining’ them away. But how about the real world? Maybe that ought to be a question to ponder even for economists all too fond of uncritically following the mathematical way when applying their mathematical models to the real world, where indeed “you can never have infinitely many heaps” …

In econometrics we often run into the ‘Cauchy logic’ — the data are treated as if they came from a larger population, a ‘superpopulation’ in which repeated realizations of the data are imagined. Just imagine there could be more worlds than the one we live in and the problem is fixed …

Accepting Haavelmo’s domain of probability theory and sample space of infinite populations — just like Fisher’s “hypothetical infinite population, of which the actual data are regarded as constituting a random sample”, von Mises’s “collective” or Gibbs’s “ensemble” — also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s — just as the Cauchy mathematical logic of ‘defining’ away problems — not tenable.

In social sciences — including economics — it’s always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …

Transitivity — just another questionable assumption

29 Feb, 2016 at 10:33 | Posted in Economics | 1 Comment

My doctor once recommended I take niacin for the sake of my heart. Yours probably has too, unless you’re a teenager or a marathon runner or a member of some other metabolically privileged caste. Here’s the argument: Consumption of niacin is correlated with higher levels of HDL, or “good cholesterol,” and high HDL is correlated with lower risk of “cardiovascular events.” If you’re not a native speaker of medicalese, that means people with plenty of good cholesterol are less likely on average to clutch their hearts and keel over dead.

But a large-scale trial carried out by the National Heart, Lung, and Blood Institute was halted in 2011, a year and a half before the scheduled finish, because the results were so weak it didn’t seem worth it to continue. Patients who got niacin did indeed have higher HDL levels, but they had just as many heart attacks and strokes as everybody else.

How can this be? Because correlation isn’t transitive. That is: Just because niacin is correlated with HDL, and high HDL is correlated with low risk of heart disease, you can’t conclude that niacin is correlated with low risk of heart disease.

Transitive relations are ones like “weighs more than.” If I weigh more than my son and my son weighs more than my daughter, it’s an absolute certainty that I weigh more than my daughter. “Lives in the same city as” is transitive, too—if I live in the same city as Bill, who lives in the same city as Bob, then I live in the same city as Bob.

But many of the most interesting relations we find in the world of data aren’t transitive. Correlation, for instance, is more like “blood relation.” I’m related to my son, who’s related to my wife, but my wife and I aren’t blood relatives. In fact, it’s not a terrible idea to think of correlated variables as “sharing part of their DNA.” Suppose I run a boutique money management firm with just three investors, Laura, Sara, and Tim. Their stock positions are pretty simple: Laura’s fund is split 50–50 between Facebook and Google, Tim’s is one-half General Motors and one-half Honda, and Sara, poised between old economy and new, goes one-half Honda, one-half Facebook. It’s pretty obvious that Laura’s returns will be positively correlated with Sara’s; they have half their portfolio in common. And the correlation between Sara’s returns and Tim’s will be equally strong. But there’s no reason (except insofar as the whole stock market tends to move in concert) to think Tim’s performance has to be correlated with Laura’s. Those two funds are like the parents, each contributing one-half of their “genetic material” to form Sara’s hybrid fund.

Jordan Ellenberg
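Ellenberg’s portfolio story is easy to verify with a small simulation. The sketch below uses made-up independent daily returns for the four stocks (all numbers are illustrative, not market data): Laura and Sara share Facebook, Sara and Tim share Honda, but Laura and Tim share nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated trading days of independent stock returns
fb, goog, gm, honda = rng.standard_normal((4, n))

laura = 0.5 * fb + 0.5 * goog    # Facebook + Google
tim   = 0.5 * gm + 0.5 * honda   # GM + Honda
sara  = 0.5 * honda + 0.5 * fb   # Honda + Facebook

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(round(corr(laura, sara), 2))  # ~0.5: half the portfolio in common
print(round(corr(sara, tim), 2))    # ~0.5: half the portfolio in common
print(round(corr(laura, tim), 2))   # ~0.0: no shared holding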

Statistics — a question of life and death

29 Feb, 2016 at 10:19 | Posted in Statistics & Econometrics | 1 Comment

In 1997, Christopher, the eleven-week-old child of a young lawyer named Sally Clark, died in his sleep: an apparent case of Sudden Infant Death Syndrome (SIDS) … One year later, Sally’s second child, Harry, also died, aged just eight weeks. Sally was arrested and accused of killing the children. She was convicted of murdering them, and in 1999 was given a life sentence …

Now … I want to show how a simple mistaken assumption led to incorrect probabilities.

In this case the mistaken evidence came from Sir Roy Meadow, a paediatrician. Despite not being an expert statistician or probabilist, he felt able to make a statement about probabilities … He asserted that the probability of two SIDS deaths in a family like Sally Clark’s was 1 in 73 million. A probability as small as this suggests we might apply Borel’s law: we shouldn’t expect to see an improbable event …

Unfortunately, however, Meadow’s 1 in 73 million probability is based on a crucial assumption: that the deaths are independent; that one such death in a family does not make it more or less likely that there will be another …

Now … that assumption does seem unjustified: data show that if one SIDS death has occurred, then a subsequent child is about ten times more likely to die of SIDS … To arrive at a valid conclusion, we would have to compare the probability that the two children had been murdered with the probability that they had both died from SIDS … There is a factor-of-ten difference between Meadow’s estimate and the estimate based on recognizing that SIDS events in the same family are not independent, and that difference shifts the probability from favouring homicide to favouring SIDS deaths …

Following widespread criticism of the misuse and indeed misunderstanding of statistical evidence, Sally Clark’s conviction was overturned, and she was released in 2003.
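The arithmetic is simple enough to spell out. Meadow squared his estimate of roughly 1 in 8,543 for a single SIDS death in a family like the Clarks’; the sketch below contrasts that with the dependent case, using the factor-of-ten figure quoted above (both numbers are taken as given here, purely for illustration):

```python
p_first = 1 / 8543        # Meadow's figure for one SIDS death
p_meadow = p_first ** 2   # squaring assumes the two deaths are independent
print(f"1 in {1 / p_meadow:,.0f}")   # ~1 in 73 million

# If a second SIDS death is roughly ten times more likely once a
# first has occurred, independence overstates the odds tenfold:
p_dependent = p_first * (10 * p_first)
print(f"1 in {1 / p_dependent:,.0f}")  # ~1 in 7.3 million
```

That order-of-magnitude shift is what moves the comparison with the (also very small) probability of double murder from favouring homicide to favouring SIDS.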

Tina

28 Feb, 2016 at 23:52 | Posted in Varia | Comments Off on Tina

 

A guide to econometrics

28 Feb, 2016 at 10:37 | Posted in Statistics & Econometrics | Comments Off on A guide to econometrics

1. Thou shalt use common sense and economic theory.
2. Thou shalt ask the right question.
3. Thou shalt know the context.
4. Thou shalt inspect the data.
5. Thou shalt not worship complexity.
6. Thou shalt look long and hard at thy results.
7. Thou shalt beware the costs of data mining.
8. Thou shalt be willing to compromise.
9. Thou shalt not confuse statistical significance with substance.
10. Thou shalt confess in the presence of sensitivity.

Bernie Sanders and the Verdoorn law

27 Feb, 2016 at 16:54 | Posted in Economics | 12 Comments

Reading the different reactions, critiques and ‘analyses’ of Gerald Friedman’s calculations on the long-term effects of implementing the Sanders program, it seems to me that the issue basically boils down to whether or not the Verdoorn law is operative.

Estimating the impact of Sanders’ program Friedman writes (p. 13):

Higher demand for labor is also associated with an increase in labor productivity and this accounts for about half of the increase in economic growth under the Sanders program.

Obviously, that’s a view that Christina Romer and David Romer (p. 8) don’t share:

Friedman … argues that as demand expansion raised output, endogenous productivity growth … would raise productive capacity by enough to prevent it from constraining output … The evidence that productivity growth would surge as a result of a demand-driven boom is weak. The fact that there is a correlation between output growth and productivity growth is not surprising. Periods of rapid productivity growth, such as the 1990s, are naturally also periods of rapid output growth. But this does not tell us that an extended period of rapid output growth resulting from demand stimulus would cause sustained high productivity growth …

In the standard mainstream economic analysis, a demand expansion may very well raise measured productivity — in the short run. But in the long run, expansionary demand policy measures cannot lead to sustained higher productivity and output levels.

In some non-standard heterodox analyses, however, labour productivity growth is described as a function of output growth: according to the Verdoorn law, the rate of technical progress varies directly with the rate of growth. Growth and productivity are, in this view, highly demand-determined not only in the short run but also in the long run.
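As a stylised illustration, the Verdoorn specification is usually estimated as a simple linear relation, p = a + b·q, between productivity growth p and output growth q. The sketch below fits it by OLS on simulated data — the coefficient, noise level and sample size are invented for the example, and of course OLS on such data says nothing about the direction of causality that Romer & Romer worry about:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60                                   # years of made-up annual data
q = rng.normal(3.0, 2.0, n)              # output growth, per cent
b_true = 0.5                             # assumed Verdoorn coefficient
p = 1.0 + b_true * q + rng.normal(0, 0.5, n)  # productivity growth, per cent

# OLS of p on q: the classic Verdoorn specification p = a + b*q
X = np.column_stack([np.ones_like(q), q])
a_hat, b_hat = np.linalg.lstsq(X, p, rcond=None)[0]
print(round(b_hat, 2))  # recovers something close to the assumed 0.5
```

In this toy setting the estimate recovers the coefficient built into the data; the hard empirical question is whether a coefficient like this reflects demand driving productivity or the reverse.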

If the Verdoorn law is operative, Sanders’ policy could actually lead to increases in productivity and growth. Living in a world permeated by genuine Keynes-type uncertainty, we cannot, of course, forecast with any great precision how large those effects would be.

So, the nodal point is — has the Verdoorn Law been validated or not in empirical studies?

There have been hundreds of studies that have tried to answer that question, and, as one might imagine, the answers differ. The law has been investigated with different econometric methods (time series, IV, OLS, ECM, cointegration, etc.). The statistical and econometric problems are enormous (especially when it comes to the question, highlighted by Romer & Romer, of the direction of causality). That said, most studies at the country level do confirm that the Verdoorn law holds — the United States included. Most of the studies cover the period before the subprime crisis of 2006/2007, but if anything the evidence is more in line with Friedman than with Romer & Romer.

Oh, dear.

Bayesianism — an unacceptable scientific reasoning

26 Feb, 2016 at 16:14 | Posted in Theory of Science & Methodology | 5 Comments

A major, and notorious, problem with this approach, at least in the domain of science, concerns how to ascribe objective prior probabilities to hypotheses. What seems to be necessary is that we list all the possible hypotheses in some domain and distribute probabilities among them, perhaps ascribing the same probability to each employing the principle of indifference. But where is such a list to come from? It might well be thought that the number of possible hypotheses in any domain is infinite, which would yield zero for the probability of each and the Bayesian game cannot get started. All theories have zero probability and Popper wins the day. How is some finite list of hypotheses enabling some objective distribution of nonzero prior probabilities to be arrived at? My own view is that this problem is insuperable, and I also get the impression from the current literature that most Bayesians are themselves coming around to this point of view.

Chalmers is absolutely right here in his critique of ‘objective’ Bayesianism, but I think it could actually be extended to also encompass its ‘subjective’ variety.

A classic example — borrowed from Bertrand Russell — may perhaps be allowed to illustrate the main point of the critique:

Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians that do not eat turkeys and that every day I see the sun rise confirms my belief.” For every day you survive, you update your belief according to Bayes’ theorem

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately, for every day that goes by, the traditional Christmas dinner also gets closer and closer …
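The turkey’s updating is mechanical enough to sketch in a few lines. The prior and the likelihood of survival under not-H below are hypothetical numbers, chosen only to show how the posterior creeps towards certainty right up until Christmas:

```python
def update(prior, likelihood_h, likelihood_not_h):
    """One application of Bayes' theorem: P(H|e) = P(e|H)P(H) / P(e)."""
    p_e = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / p_e

p_h = 0.5  # hypothetical prior in H: "people are nice vegetarians"
for day in range(364):
    # e = "not eaten today": certain under H, merely very likely otherwise
    p_h = update(p_h, likelihood_h=1.0, likelihood_not_h=0.98)

print(round(p_h, 3))  # the belief has crept towards 1 ... until Christmas
```

Each uneventful day nudges P(H) upward, exactly as Bayes’ theorem prescribes, and the turkey is never more confident than on Christmas Eve.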

The nodal point here is — of course — that although Bayes’ theorem is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions.

Bayesian probability calculus is far from the automatic inference engine that its protagonists maintain it is. Where do the priors come from? When we are uncertain, wouldn’t it be better in science to do some scientific experimentation and observation, rather than to start making calculations based on people’s often vague and subjective personal beliefs? Is it, from an epistemological point of view, really credible to think that the Bayesian probability calculus makes it possible to somehow fully assess people’s subjective beliefs? And are — as Bayesians maintain — all scientific controversies and disagreements really possible to explain in terms of differences in prior probabilities? I’ll be dipped!

Making sense of data — categorical models

26 Feb, 2016 at 13:24 | Posted in Economics, Statistics & Econometrics | Comments Off on Making sense of data — categorical models


Great lecture by one of my favourite lecturers — Scott Page.

Olof Palme In Memoriam

24 Feb, 2016 at 15:27 | Posted in Politics & Society | 1 Comment


Olof Palme.

Born in January 1927.

Murdered in February 1986.

30 years and a loss my country — Sweden — is still suffering from.

Macroeconomics on a walk down a blind alley

24 Feb, 2016 at 10:17 | Posted in Economics | Comments Off on Macroeconomics on a walk down a blind alley

I would say that people like Kydland and Prescott, and so forth, people like that … changed the way that people do macroeconomics. But in my view it was not a positive change … One predominant idea is that of external shocks—and in particular the idea that the shocks that happen to the economy should essentially be the technological shocks. As Joe Stiglitz said, what could we mean by a negative technological shock? That people forget what they could do before?

So we have this idea that we have a system which is in equilibrium and that every now and then it gets knocked off the equilibrium by ‘a shock’. But shocks are part of the system! We have gone down a track that actually does not allow us to say much about the real, major movements in the macro-economy … We should be studying non-normal periods, instead of normal ones, because that is what causes real problems. And we do not do that.

So my vision of the state of macroeconomics is that it somehow has the wrong view: an equilibrium view and a stationary state view. But what is important and interesting about macroeconomics is precisely when those two things do not hold. How can you talk of equilibrium when we move from 5% unemployment to 10% unemployment? If you are in Chicago, you say “Well, those extra 5% have made the calculation that it was better for them to be out of work”. But look at the reality; that is not what happens. People do not want to be out of work … Millions of people are out of work, and we are not worried about that?

That is the major failure in macroeconomics. It does not address the serious problems that we face when we get out of equilibrium. And we are out of equilibrium most of the time.

Alan Kirman

Yours truly is extremely fond of economists like Alan Kirman. With razor-sharp intellects they immediately go for the essentials. They have no time for bullshit. And neither should we.

Economists — dangerous arrogants

23 Feb, 2016 at 12:39 | Posted in Economics | 1 Comment

In advanced economics the question would be: ‘What besides mathematics should be in an economics lecture?’ In physics the familiar spirit is Archimedes the experimenter. But in economics, as in mathematics itself, it is theorem-proving Euclid who paces the halls …

Economics … has become a mathematical game. The science has been drained out of economics, replaced by a Nintendo game of assumption-making …

Most thoughtful economists think that the games on the blackboard and the computer have gone too far, absurdly too far. It is time to bring economic observation, economic history, economic literature, back into the teaching of economics.

Economists would be less arrogant, and less dangerous as experts, if they had to face up to the facts of the world. Perhaps they would even become as modest as the physicists.

D. McCloskey

Krugman owes Friedman an apology!

22 Feb, 2016 at 19:19 | Posted in Politics & Society | 1 Comment

 

February 20, 2016

Dear Paul,
Your suggestion that “personal ambition” in any way influenced my analysis of the Sanders economic plan is as insulting as it is wrong and you owe me an apology.

You don’t know me. We did not quite overlap in graduate school and our paths have diverged since. We have never met or spoken. The closest we came was when my department attempted to bring you to Amherst to give a guest lecture. Never happened because we could not afford your rate.

While you don’t know me, you seem to feel free to speculate about my values and interests. You assume that an outsider economist like myself must be considered not particularly “insightful or even technically competent.” And, elaborating this theory, you conclude that envy would lead me to jump on an opportunity for self-advancement by shilling for an outsider politician. Now, this theory might be tested empirically. You could have called me and asked. Or you could have read any of the news stories where I explained how I stumbled on this research project, and where I explained my (lack of) connection to the Sanders Campaign …

Since you did not bother to do the empirical work: let me do it for you. I undertook this study from simple scholarly curiosity; I did it without any connection to the Sanders campaign; and I have no expectation of reward. I have no desire to be involved in a Sanders Administration. I am completely happy teaching at UMass-Amherst and have no wish for anything more in the world than to do my work where I am.

Finally, if I may point out another flaw in your envy-ambition model: why would the Sanders camp ever appoint someone who has publicly acknowledged that he donates to the Hillary Clinton campaign and is undecided about for whom to vote in the upcoming Massachusetts primary?

In its lack of empirical grounding, your column is like the CEA-chairs’ letter: substituting attack language and ad hominem argument for reasoned discourse … Rather than jumping on my conclusion, a more constructive discussion would focus on identifying possible errors in my method that may have led to conclusions that may seem implausible. Certainly, we can agree that it is illogical to reject conclusions without finding fault with method …

Best wishes,
Gerald Friedman
Professor of Economics at the University of Massachusetts at Amherst

[Naked Capitalism]

 

‘Mathematical’ economists and real mathematicians

22 Feb, 2016 at 17:19 | Posted in Economics | Comments Off on ‘Mathematical’ economists and real mathematicians

Years ago, I was involved in organising conferences with Christopher Zeeman … and we organized one between economists and mathematicians. We had some great mathematicians—John Milnor, Steve Smale, Rene Thom, and others—wonderful mathematicians. And on the other side, we had Gérard Debreu, Hugo Sonnenschein, Werner Hildenbrand, and a whole group of very distinguished mathematical economists.

concoctions

After the first two, three hours, I think it was Milnor who said: “We all know that you guys can do mathematics, you do not have to show us. Everybody does his own thing. You want to show us that you are good at doing certain sorts of mathematics; that is fine. But we are interested in the economic problems. We thought that you were going to tell us about economic problems and we were going to use our mathematical tools to help you. But all you are telling us is the mathematical tools that you use and how you are doing well with them. But that is not going to create much”. I think that was absolutely right. After that, the economists were rather silenced and started shifting in their seats uncomfortably. Debreu never said very much anyway, but it was clear he was very insulted, because basically he liked to think of himself as a mathematician.

Alan Kirman

The limits of game theory

22 Feb, 2016 at 11:45 | Posted in Economics | 4 Comments

If you read Binmore’s Essays on the foundations of game theory (1990) you will find a section where he says that, unfortunately, we get into a kind of impasse. We get this infinite regress linked to the common knowledge problem. For example, I drive frequently from Aix to Marseille. You have the autoroute and parallel to it is the route nationale. Say there is, one day, congestion on the autoroute and nobody on the nationale. I think: “Tomorrow I will take the nationale. But, wait a minute, these other drivers are intelligent too, so they will take the nationale tomorrow, I would do better to stay over here. But, wait a minute, these drivers are pretty intelligent so they can make that step too …” It is actually not logically possible to reason to the solution of these kinds of problems that people are supposed to be solving in game theory.

You can surely define an equilibrium, and say that if we were there nobody would want to move. But then you get to the problem of how we get to this equilibrium — the exact same problem that we have with general equilibrium …

For certain specific, local problems, game theory is a very nice way of thinking about how people might try to solve them, but as soon as you are dealing with a general problem like an economy or a market, I think it is difficult to believe that there is full strategic interaction going on. It is just asking too much of people. Game theory imposes a huge amount of abstract reasoning on the part of people …

That is why I think game theory, as an approach to large scale interaction, is probably not the right way to go.

Alan Kirman
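Kirman’s commuting story can be made concrete with a toy simulation. If every driver naively best-responds to yesterday’s congestion, the crowd never settles into the fifty-fifty equilibrium but flips between the two roads forever (the numbers below are of course invented):

```python
# Toy version of the Aix-Marseille story: 1,000 drivers each day choose
# the autoroute or the route nationale, every one of them picking the
# road that was emptier yesterday.
n = 1000
on_autoroute = 900  # day 0: heavy congestion on the autoroute
history = [on_autoroute]

for day in range(5):
    emptier_is_autoroute = on_autoroute < n / 2
    # all drivers reason identically and switch to yesterday's empty road
    on_autoroute = n if emptier_is_autoroute else 0
    history.append(on_autoroute)

print(history)  # [900, 0, 1000, 0, 1000, 0] -- perpetual oscillation
```

The equilibrium (half the drivers on each road) exists and, once there, nobody would want to move; but this naive common-knowledge reasoning never reaches it, which is exactly the impasse Binmore and Kirman describe.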
