Kvinde min

25 October, 2014 at 23:00 | Posted in Varia


Fred Lee

25 October, 2014 at 16:38 | Posted in Varia

Last night (Oct. 23) at 11:20 PM CDT, the prominent heterodox economist Fred Lee of the University of Missouri-Kansas City died of cancer.  He had stopped teaching during the last spring semester and was honored at the 12th International Post Keynesian Conference held at UMKC a month ago …

Whatever one thinks of heterodox economics in general, or of the views of Fred Lee in particular, he should be respected as the person who, more than any other, was behind the founding of the International Confederation of Associations for Pluralism in Economics (ICAPE), and also of the Heterodox Economics Newsletter.  While many talked about the need for an organized group pushing heterodox economics in all its varieties, Fred did more than talk: he went and organized the group and its main communications outlet.  He also regularly and strongly spoke in favor of heterodox economics, whose unity he may have exaggerated.  But his voice in advocating the superiority of heterodox economics over mainstream neoclassical economics was as strong as that of anybody I have known.  I also note that he was the incoming president of the Association for Evolutionary Economics (AFEE), which will now have to find a replacement.  He had earlier stepped down from his positions with ICAPE and the Heterodox Economics Newsletter.

It was both sad and moving to see Fred at the PK conference last month in Kansas City … Although he was having trouble even breathing and could barely speak, he rose and made his comments, at the end becoming impassioned and speaking up forcefully to proclaim his most firmly held positions.  He declared that his entire career had been devoted to battling for the downtrodden, poor, and suffering around the world, “against the 1 percent!” and I know that there was not a single person in that standing-room-only audience who doubted him.  He openly wept after he finished with those stirring words, as those who were not already standing rose to join the ovation.

J. Barkley Rosser

Fred was, together with Nai Pew Ong and Bob Pollin, one of those who made a visit to the University of California such a great experience for a young Swedish economics student back in the beginning of the 1980s. I especially remember our long and intense discussions on Sraffa and neo-Ricardianism. I truly miss this open-minded and good-hearted heterodox economist. Rest in peace, my dear old friend.

A Post Keynesian response to Piketty

25 October, 2014 at 12:55 | Posted in Economics

The rejection of specific theoretical arguments does not diminish the achievements of Piketty’s work. Capital is an outstanding work: it has brought issues of wealth and income distribution into the spotlight, where heterodox economists have failed to do so. It has also put together, and made readily available, an invaluable data set, which allows future researchers to analyse macroeconomics over a much broader time horizon, covering much of the history of capitalism rather than just the last few decades. But we do suggest that the analysis of the book would have been strengthened if Piketty had also considered a post-Keynesian instead of a neoclassical framework.

Post Keynesian Economics Study Group

How mainstream economics imperils our economies

24 October, 2014 at 09:31 | Posted in Economics

[h/t Mark Thoma]

Piketty and the elasticity of substitution

23 October, 2014 at 22:39 | Posted in Economics

When “Capital in the Twenty-First Century” was published in English earlier this year, Thomas Piketty’s book was met with rapt attention and constant conversation. The book was lauded but also faced criticism, particularly from other economists who wanted to fit Piketty’s work into the models they knew well …

A particularly technical and effective critique of Piketty comes from Matt Rognlie, a graduate student in economics at the Massachusetts Institute of Technology. Rognlie points out that for capital returns to be consistently higher than the overall growth of the economy—or “r > g” as framed by Piketty—an economy needs to be able to easily substitute capital, such as machinery or robots, for labor. In the terminology of economics this is called the elasticity of substitution between capital and labor, which needs to be greater than 1 for r to be consistently higher than g. Rognlie argues that most studies looking at this particular elasticity find that it is below 1, meaning a drop in economic growth would result in an even larger drop in the rate of return, leaving g larger than r. In turn, this means capital won’t earn an increasing share of income and the dynamics laid out by Piketty won’t arise …

Enter the new paper by economists Loukas Karabarbounis and Brent Neiman … Their new paper investigates how depreciation affects the measurement of the labor share and the elasticity of substitution between capital and labor. Using their data set on labor shares and a model, Karabarbounis and Neiman show that the gross labor share and the net labor share move in the same direction when the shift is caused by a technological shock—as has been the case, they argue, in recent decades. More importantly for this conversation, they point out that the gross and net elasticities are on the same side of 1 if that shock is technological. In the case of a declining labor share, this means they would both be above 1.

This means Rognlie’s point about these two elasticities being lower than 1 doesn’t hold up if capital is gaining due to a new technology that makes capital cheaper …

In short, this new paper gives credence to one of the key dynamics in Piketty’s “Capital in the Twenty-First Century”—that the returns on capital can be higher than growth in the economy, or r > g.

Nick Bunker

To me this is only a confirmation of what I wrote earlier this autumn on the issue:

Being able to show that you can get the Piketty results using one or another of the available standard neoclassical growth models is of course — from a realist point of view — of limited value. As usual, the really interesting thing is how well the assumptions you make and the numerical values you put into the model specification accord with reality.
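The role the elasticity of substitution plays in this debate is easy to see in a back-of-the-envelope calculation. With a CES production function, the gross capital share equals α·β^((σ−1)/σ), where β = K/Y is the capital-output ratio and σ the elasticity of substitution. Here is a minimal numerical sketch of mine — all parameter values are illustrative assumptions, not taken from Rognlie or from Karabarbounis and Neiman:

```python
# Sketch: under CES production, the capital share rises with the
# capital-output ratio beta = K/Y if and only if sigma > 1.
# alpha and the beta values below are purely illustrative.

def capital_share(beta, sigma, alpha=0.3):
    """Gross capital share implied by CES production at capital-output ratio beta."""
    return alpha * beta ** ((sigma - 1.0) / sigma)

for sigma in (1.5, 0.7):
    low, high = capital_share(4.0, sigma), capital_share(6.0, sigma)
    direction = "rises" if high > low else "falls"
    print(f"sigma={sigma}: capital share {direction} "
          f"from {low:.3f} to {high:.3f} as K/Y goes from 4 to 6")
```

With σ = 1.5 the share rises as β accumulates (Piketty's mechanism); with σ = 0.7 it falls, which is exactly Rognlie's objection. The whole dispute thus turns on which numerical value of σ is in accord with reality.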

Sherlock Holmes inference and econometric testing

23 October, 2014 at 15:10 | Posted in Statistics & Econometrics

Sherlock Holmes stated that ‘It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.’ True as this may be in the circumstances of a crime investigation, the principle does not apply to testing. In a crime investigation one wants to know what actually happened: who did what, when and how. Testing is somewhat different.

With testing, not only what happened is interesting, but also what could have happened, and what would have happened were the circumstances to repeat themselves. The particular events under study are considered draws from a larger population. It is the distribution of this population one is primarily interested in, not so much the particular realizations of that distribution. So it is not the particular sequence of heads and tails in coin flipping that is of interest, but whether it says something about the coin being biased or not. It is not (only) interesting whether inflation and unemployment went together in the sixties, but what that tells us about the true trade-off between these two economic variables. In short, one wants to test.

The tested hypothesis has to come from somewhere, and to base it, like Holmes, on data is a valid procedure … The theory should, however, not be tested on the same data it was derived from. To use significance as a selection criterion in a regression equation constitutes a violation of this principle …

Consider for example time series econometrics … It may not be clear a priori which lags matter, while it is clear that some definitely do … The Box-Jenkins framework first models the auto-correlation structure of a series as well as possible, postponing inference to the next stage. In this next stage other variables or their lagged values may be related to the time series under study. While this justifies why time series analysis uses data mining, it leaves unaddressed the issue of the true level of significance …

This is sometimes recommended in a general-to-specific approach, where the most general model is estimated and insignificant variables are subsequently discarded. As superfluous variables increase the variance of estimators, omitting irrelevant variables this way may increase efficiency. The problem is that the variables were included in the first place because they were thought to be (potentially) relevant. If, for example, twenty variables believed to be potentially relevant a priori are included, then one or more is bound to be insignificant (depending on the power, which cannot be trusted to be high). Omitting relevant variables, whether they are insignificant or not, generally biases all other estimates as well, due to the well-known omitted-variable bias. The data are thus used both to specify the model and to test it; without further notice, this double use of the data is bound to be misleading, if not incorrect. The tautological nature of the procedure is apparent: as significance is the selection criterion, it is not very surprising that the selected variables are significant.

D. A. Hollanders, ‘Five methodological fallacies in applied econometrics’
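How quickly significance-as-selection goes wrong is easy to demonstrate. The following small simulation is my own illustrative sketch, not taken from Hollanders’s paper: regress a pure-noise outcome on twenty irrelevant candidate regressors and count how often at least one of them comes out “significant” at the 5% level. Every true coefficient is zero, so every selected variable is a false positive:

```python
# Sketch of the pre-test problem: with 20 truly irrelevant regressors,
# significance-as-selection fishes up at least one "significant" variable
# in roughly 1 - 0.95**20, i.e. about two thirds, of all samples.
import math
import random

def simple_t_pvalue(x, y):
    """Two-sided p-value for the slope in a simple regression of y on x
    (normal approximation to the t distribution, fine for n = 100)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                                   # OLS slope estimate
    resid = [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - 2)        # residual variance
    t = b / math.sqrt(s2 / sxx)                     # t statistic for b = 0
    return math.erfc(abs(t) / math.sqrt(2))         # 2 * (1 - Phi(|t|))

random.seed(1)
n, k, reps, hits = 100, 20, 200, 0
for _ in range(reps):
    y = [random.gauss(0, 1) for _ in range(n)]      # pure noise outcome
    pvals = [simple_t_pvalue([random.gauss(0, 1) for _ in range(n)], y)
             for _ in range(k)]                     # k irrelevant regressors
    hits += any(p < 0.05 for p in pvals)
print(f"At least one 'significant' irrelevant regressor "
      f"in {hits / reps:.0%} of samples")
```

The “significance” of the surviving variable is then purely an artefact of the selection, which is exactly Hollanders’s point about the double use of the data.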

Econometric testing — playing tennis with the net down

22 October, 2014 at 21:43 | Posted in Statistics & Econometrics

Suppose you test a highly confirmed hypothesis, for example, that the price elasticity of demand is negative. What would you do if the computer were to spew out a positive coefficient? Surely you would not claim to have overthrown the law of demand … Instead, you would rerun many variants of your regression until the recalcitrant computer finally acknowledged the sovereignty of your theory …

Only the naive are shocked by such soft and gentle testing … Easy it is. But also wrong, when the purpose of the exercise is not to use a hypothesis, but to determine its validity …

Econometric tests are far from useless. They are worth doing, and their results do tell us something … But many economists insist that economics can deliver more, much more, than merely, more or less, plausible knowledge, that it can reach its results with compelling demonstrations. By such a standard how should one describe our usual way of testing hypotheses? One possibility is to interpret it as Blaug [The Methodology of Economics, 1980, p. 256] does, as ‘playing tennis with the net down’ …

Perhaps my charge that econometric testing lacks seriousness of purpose is wrong … But regardless of the cause, it should be clear that most econometric testing is not rigorous. Combining such tests with formalized theoretical analysis or elaborate techniques is another instance of the principle of the strongest link. The car is sleek and elegant; too bad the wheels keep falling off.


Econometric disillusionment

22 October, 2014 at 11:16 | Posted in Statistics & Econometrics


Because I was there when the economics department of my university got an IBM 360, I was very much caught up in the excitement of combining powerful computers with economic research. Unfortunately, I lost interest in econometrics almost as soon as I understood how it was done. My thinking went through four stages:

1. Holy shit! Do you see what you can do with a computer’s help?
2. Learning computer modeling puts you in a small class where only other members of the caste can truly understand you. This opens up huge avenues for fraud.
3. The main reason to learn stats is to prevent someone else from committing fraud against you.
4. More and more people will gain access to the power of statistical analysis. When that happens, the stratification of importance within the profession should become a matter of who asks the best questions.

Disillusionment began to set in. I began to suspect that all the really interesting economic questions were FAR beyond our ability to reduce them to mathematical formulas. Watching computers being applied to pursuits other than academic economic investigation over time only confirmed those suspicions.

1. Precision manufacture is an obvious application for computing. And for many applications, this worked magnificently. Any design that combined straight lines and circles could easily be described for computerized manufacture. Unfortunately, the really interesting design problems can NOT be reduced to formulas. A car’s fender, for example, cannot be described using formulas—it can only be described by specifying an assemblage of multiple points. If math formulas cannot describe something as common and uncomplicated as a car fender, how can they hope to describe human behavior?
2. When people started using computers for animation, it soon became apparent that human motion was almost impossible to model correctly. After a great deal of effort, the animators eventually put tracking balls on real humans and recorded their motion before transferring it to the animated character. Formulas failed to describe simple human behavior—like a toddler trying to walk.

Lately, I have discovered a Swedish economist who did NOT give up econometrics merely because it sounded so impossible. In fact, he still teaches the stuff. But for the rest of us, he systematically destroys the pretensions of those who think they can describe human behavior with some basic formulas.

Jonathan Larson

Wonder who that Swedish guy is …

Post-Keynesian economics — an introduction

22 October, 2014 at 00:02 | Posted in Economics


[h/t Jan Milch]

DSGE models — a case of non-contagious rigour

21 October, 2014 at 18:05 | Posted in Economics

Microfounded DSGE models standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived, intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time, the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market-clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.

Behavioural and experimental economics — not to speak of psychology — show beyond any doubt that “deep parameters” — people’s preferences, choices and forecasts — are regularly influenced by those of other participants in the economy. And how about the homogeneity assumption? If all actors are the same, why and with whom do they transact? And why does economics have to be exclusively teleological (concerned with the intentional states of individuals)? Where are the arguments for that ontological reductionism? And what about collective intentionality and constitutive background rules?

These are all justified questions – so, in what way can one maintain that these models give workable microfoundations for macroeconomics? Science philosopher Nancy Cartwright gives a good hint at how to answer that question:

Our assessment of the probability of effectiveness is only as secure as the weakest link in our reasoning to arrive at that probability. We may have to ignore some issues or make heroic assumptions about them. But that should dramatically weaken our degree of confidence in our final assessment. Rigor isn’t contagious from link to link. If you want a relatively secure conclusion coming out, you’d better be careful that each premise is secure going in.
