How mainstream economics imperils our economies

24 October, 2014 at 09:31 | Posted in Economics | Leave a comment


[h/t Mark Thoma]

Piketty and the elasticity of substitution

23 October, 2014 at 22:39 | Posted in Economics | Leave a comment

When “Capital in the 21st Century” was published in English earlier this year, Thomas Piketty’s book was met with rapt attention and constant conversation. The book was lauded but also faced criticism, particularly from other economists who wanted to fit Piketty’s work into the models they knew well …

A particularly technical and effective critique of Piketty is from Matt Rognlie, a graduate student in economics at the Massachusetts Institute of Technology. Rognlie points out that for capital returns to be consistently higher than the overall growth of the economy—or “r > g” as framed by Piketty—an economy needs to be able to easily substitute capital such as machinery or robots for labor. In the terminology of economics this is called the elasticity of substitution between capital and labor, which needs to be greater than 1 for r to be consistently higher than g. Rognlie argues that most studies looking at this particular elasticity find that it is below 1, meaning a drop in economic growth would result in a larger drop in the rate of return and then g being larger than r. In turn, this means capital won’t earn an increasing share of income and the dynamics laid out by Piketty won’t arise …

Enter the new paper by economists Loukas Karabarbounis and Brent Neiman … Their new paper investigates how depreciation affects the measurement of labor share and the elasticity between capital and labor. Using their data set of labor share of income and a model, Karabarbounis and Neiman show that the gross labor share and the net labor share move in the same direction when the shift is caused by a technological shock—as has been the case, they argue, in recent decades. More importantly for this conversation, they point out that the gross and net elasticities are on the same side of 1 if that shock is technological. In the case of a declining labor share, this means they would both be above 1.

This means Rognlie’s point about these two elasticities being lower than 1 doesn’t hold up if capital is gaining due to a new technology that makes capital cheaper …

In short, this new paper gives credence to one of the key dynamics in Piketty’s “Capital in the 21st Century”—that the returns on capital can be higher than growth in the economy, or r > g.

Nick Bunker

To me this is only a confirmation of what I wrote earlier this autumn on the issue:

Being able to show that you can get the Piketty results using one or another of the available standard neoclassical growth models is of course — from a realist point of view — of limited value. As usual — the really interesting thing is how in accord with reality are the assumptions you make and the numerical values you put into the model specification.
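To see where the threshold of 1 comes from, and which assumptions are doing the work, here is a standard CES sketch of my own (not Rognlie’s or Karabarbounis and Neiman’s actual algebra). With CES production

\[
Y=\Bigl[\alpha K^{\frac{\sigma-1}{\sigma}}+(1-\alpha)L^{\frac{\sigma-1}{\sigma}}\Bigr]^{\frac{\sigma}{\sigma-1}},
\]

the gross return to capital and the gross capital share are

\[
R=\frac{\partial Y}{\partial K}=\alpha\left(\frac{K}{Y}\right)^{-\frac{1}{\sigma}},
\qquad
\frac{RK}{Y}=\alpha\left(\frac{K}{Y}\right)^{\frac{\sigma-1}{\sigma}},
\]

so a rising capital-output ratio K/Y pushes the capital share up only if (σ − 1)/σ > 0, that is, only if σ > 1. Gross and net magnitudes are then linked by simple bookkeeping with a depreciation rate δ,

\[
s_K^{\text{gross}}=\frac{RK}{Y},
\qquad
s_K^{\text{net}}=\frac{(R-\delta)K}{Y-\delta K},
\qquad
s_L^{\text{net}}=1-s_K^{\text{net}},
\]

and the Karabarbounis-Neiman question is precisely whether these gross and net shares move together, and whether the corresponding elasticities land on the same side of 1, when the underlying shock is technological.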

Post-Keynesian economics — an introduction

22 October, 2014 at 00:02 | Posted in Economics | 1 Comment


[h/t Jan Milch]

DSGE models — a case of non-contagious rigour

21 October, 2014 at 18:05 | Posted in Economics | Leave a comment

Microfounded DSGE models standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.

Behavioural and experimental economics — not to speak of psychology — show beyond any doubt that “deep parameters” — people’s preferences, choices and forecasts — are regularly influenced by those of other participants in the economy. And how about the homogeneity assumption? And if all actors are the same – why and with whom do they transact? And why does economics have to be exclusively teleological (concerned with intentional states of individuals)? Where are the arguments for that ontological reductionism? And what about collective intentionality and constitutive background rules?

These are all justified questions – so, in what way can one maintain that these models give workable microfoundations for macroeconomics? Science philosopher Nancy Cartwright gives a good hint at how to answer that question:

Our assessment of the probability of effectiveness is only as secure as the weakest link in our reasoning to arrive at that probability. We may have to ignore some issues or make heroic assumptions about them. But that should dramatically weaken our degree of confidence in our final assessment. Rigor isn’t contagious from link to link. If you want a relatively secure conclusion coming out, you’d better be careful that each premise is secure going in.
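Cartwright’s ‘weakest link’ point is easy to put into numbers. As a toy calculation of my own (the figures are purely illustrative), suppose a policy conclusion rests on five premises, each judged 90 per cent secure, and treat their possible failures, heroically, as independent. Then

\[
\Pr(\text{all five premises hold}) = 0.9^{5} \approx 0.59,
\]

so the conclusion is far less secure than any single premise looks, and no amount of deductive rigour within each individual link repairs that.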

Microfounded DSGE models — a total waste of time!

20 October, 2014 at 15:21 | Posted in Economics | Leave a comment

In conclusion, one can say that the sympathy that some of the traditional and Post-Keynesian authors show towards DSGE models is rather hard to understand. Even before the recent financial and economic crisis put some weaknesses of the model – such as the impossibility of generating asset price bubbles or the lack of inclusion of financial sector issues – into the spotlight and brought them even to the attention of mainstream media, the models’ inner workings were highly questionable from the very beginning. While one can understand that some of the elements in DSGE models seem to appeal to Keynesians at first sight, after closer examination, these models are in fundamental contradiction to Post-Keynesian and even traditional Keynesian thinking. The DSGE model is a model in which output is determined in the labour market as in New Classical models and in which aggregate demand plays only a very secondary role, even in the short run.

In addition, given the fundamental philosophical problems presented by the use of DSGE models for policy simulation, namely the fact that a number of parameters used have completely implausible magnitudes and that the degrees of freedom for different parameters are so large that DSGE models with fundamentally different parametrizations (and therefore different policy conclusions) produce time series which fit the real-world data equally well, it is also very hard to understand why DSGE models have reached such prominence in economic science in general.

Sebastian Dullien

Neither New Classical nor “New Keynesian” microfounded DSGE macro models have helped us foresee, understand or craft solutions to the problems of today’s economies. But still most young academic macroeconomists want to work with DSGE models. After reading Dullien’s article, that should be a very worrying sign that economics — at least from the point of view of realism and relevance — is becoming more and more a waste of time. Why do these bright young guys waste their time and efforts? Besides aspirations of being published, I think maybe Frank Hahn gave the truest answer back in 2005 when, interviewed on the occasion of his 80th birthday, he confessed that some economic assumptions didn’t really say anything about “what happens in the world,” but still had to be considered very good “because it allows us to get on this job.”

Germany is turning EU recovery into recession

19 October, 2014 at 14:25 | Posted in Economics, Politics & Society | Leave a comment

Beppe Grillo, the comedian-turned-rebel leader of Italian politics, must have laughed heartily. No sooner had he announced to supporters that the euro was “a total disaster” than the currency union was driven to the brink of catastrophe once again.

Grillo launched a campaign in Rome last weekend for a 1 million-strong petition against the euro, saying: “We have to leave the euro as soon as possible and defend the sovereignty of the Italian people from the European Central Bank.”

Hours later markets slumped on news that the 18-member eurozone was probably heading for recession. And there was worse to come. Greece, the trigger for the 2010 euro crisis, saw its borrowing rates soar, putting it back on the “at-risk register”. Investors, already digesting reports of slowing global growth, were also spooked by reports that a row in Brussels over spending caps on France and Italy had turned nasty …

In the wake of the 2008 global financial crisis, voters backed austerity and the euro in expectation of a debt-reducing recovery. But as many Keynesian economists warned, this has proved impossible. More than five years later, there are now plenty of voters willing to call time on the experiment, Grillo among them. And there seems to be no end to austerity-driven low growth in sight. The increasingly hard line taken by Berlin over the need for further reforms in debtor nations such as Greece and Italy – by which it means wage cuts – has worked to turn a recovery into a near recession.

Angela Merkel and her finance minister Wolfgang Schäuble are shaping up to fight all comers over maintaining the 3% budget deficit limit and already-agreed austerity measures.

Even if France and Italy find a fudge to bypass the deficit rule, they will be prevented from embarking on the Marshall Plan each believes is needed to turn their economies around. Hollande wants an EU-wide €300bn stimulus to boost investment and jobs – something that is unlikely to ever get off the ground …

So a rally is likely to be short-lived. Volatility is here to stay. The only answer comes from central bankers, who propose pumping more funds into the financial system to bring down the cost of credit and encourage lending and, hopefully, sustainable growth …

Andy Haldane, the chief economist at the Bank of England, said he was gloomier now than at any time this year. He expects interest rates to stay low until at least next summer.

It’s not a plan with much oomph. Most economists believe the impact of central bank money is waning. Yet without growth and the hope of well-paid jobs for young people, parents across the EU who previously feared for their savings following a euro exit appear ready to consider the potential benefits of a break-up. There is a Grillo in almost every eurozone nation. Now that would bring real volatility.

The Observer

What’s behind rising wealth inequality?

19 October, 2014 at 14:00 | Posted in Economics | 1 Comment

The Initiative on Global Markets at the University of Chicago yesterday released a survey of a panel of highly regarded economists asking about rising wealth inequality. Specifically, IGM asked if the difference between the after-tax rate of return on capital and the growth rate of the overall economy was the “most powerful force pushing towards greater wealth inequality in the United States since the 1970s.”

The vast majority of the economists disagreed with the statement. As would economist Thomas Piketty, the originator of the now famous r > g inequality. He explicitly states that rising inequality in the United States is about rising labor income at the very top of the income distribution. As Emmanuel Saez, an economist at the University of California, Berkeley and a frequent Piketty collaborator, points out, r > g is a prediction about the future.

But if wealth inequality has risen in the United States over the past four decades, what has been behind the rise? A new paper by Saez and the London School of Economics’ Gabriel Zucman provides an answer: the calcification of income inequality into wealth inequality …

[Chart: new Saez-Zucman wealth inequality data]

Nick Bunker

The Holy Grail of econometrics — ‘true models’

16 October, 2014 at 10:16 | Posted in Economics | Leave a comment

Having mastered all the technicalities of regression analysis and econometrics, students often feel as though they are the masters of the universe. I usually cool them down with a required reading of Christopher Achen’s modern classic Interpreting and Using Regression. That usually gets them back on track again, and they understand that

no increase in methodological sophistication … alter the fundamental nature of the subject. It remains a wondrous mixture of rigorous theory, experienced judgment, and inspired guesswork. And that, finally, is its charm.

When giving an introductory econometrics course, yours truly usually — at the exam — asks students to explain how one should correctly interpret p-values. Although the correct definition is p(data|null hypothesis), i.e. the probability of obtaining data at least as extreme as those actually observed given that the null hypothesis is true, a majority of the students either misinterpret the p-value as the likelihood of a sampling error (which of course is wrong, since the very computation of the p-value is based on the assumption that sampling error is what makes the sample statistic deviate from the null hypothesis), or take it to be the probability of the null hypothesis being true, given the data (which of course is also wrong, since that would be p(null hypothesis|data) rather than the correct p(data|null hypothesis)).
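To make the distinction concrete, here is a small simulation sketch of my own (not part of any actual exam; the 80 per cent share of true nulls and the effect size are assumptions chosen purely for illustration):

# Toy simulation: p(data | H0) is not p(H0 | data).
# The 80% share of true nulls and the effect size are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_obs, effect = 10_000, 30, 0.3

null_true = rng.random(n_sims) < 0.8   # assume 80% of tested hypotheses are truly null
p_values = np.empty(n_sims)
for i in range(n_sims):
    mu = 0.0 if null_true[i] else effect
    sample = rng.normal(mu, 1.0, n_obs)
    p_values[i] = stats.ttest_1samp(sample, 0.0).pvalue

significant = p_values < 0.05
print("P(p < 0.05 | H0 true) ~", (p_values[null_true] < 0.05).mean())   # close to 0.05 by construction
print("P(H0 true | p < 0.05) ~", null_true[significant].mean())         # typically far above 0.05

Even in this friendly setup a sizeable share of the ‘significant’ results come from a true null, far more than the 5 per cent threshold students tend to read into the p-value.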

This is not to be blamed on the students’ ignorance, but rather on significance testing not being particularly transparent – conditional probability inference is difficult even for those of us who teach and practice it. A lot of researchers fall prey to the same mistakes. So – given that it is anyway very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape – why continue to press students and researchers to do null hypothesis significance testing, testing that relies on a weird backward logic that students and researchers usually don’t understand? As Achen writes:

Significance testing as a search for specification errors substitutes calculations for substantive thinking. Worse, it channels energy toward the hopeless search for functionally correct specifications and diverts attention from the real tasks, which are to formulate a manageable description of the data and to exclude competing ones.

Modern macroeconomics and the perils of using ‘Mickey Mouse’ models

15 October, 2014 at 10:23 | Posted in Economics | 4 Comments

The techniques we use affect our thinking in deep and not always conscious ways. This was very much the case in macroeconomics in the decades preceding the crisis. The techniques were best suited to a worldview in which economic fluctuations occurred but were regular, and essentially self correcting. The problem is that we came to believe that this was indeed the way the world worked.

To understand how that view emerged, one has to go back to the so-called rational expectations revolution of the 1970s … These techniques however made sense only under a vision in which economic fluctuations were regular enough so that, by looking at the past, people and firms (and the econometricians who apply statistics to economics) could understand their nature and form expectations of the future, and simple enough so that small shocks had small effects and a shock twice as big as another had twice the effect on economic activity. The reason for this assumption, called linearity, was technical: models with nonlinearities—those in which a small shock, such as a decrease in housing prices, can sometimes have large effects, or in which the effect of a shock depends on the rest of the economic environment—were difficult, if not impossible, to solve under rational expectations.

Thinking about macroeconomics was largely shaped by those assumptions. We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time …

From the early 1980s on, most advanced economies experienced what has been dubbed the “Great Moderation,” a steady decrease in the variability of output and its major components—such as consumption and investment … Whatever caused the Great Moderation, for a quarter century the benign, linear view of fluctuations looked fine.

Olivier Blanchard

Blanchard’s piece is a confirmation of what I argued in my paper Capturing causality in economics and the limits of statistical inference — since “modern” macroeconom(etr)ics doesn’t content itself with only making “optimal” predictions, but also aspires to explain things in terms of causes and effects, macroeconomists and econometricians need loads of assumptions — and one of the more important of these is linearity.

So bear with me when I take the opportunity to elaborate a little more on why I — and Olivier Blanchard — find that assumption to be of such paramount importance, and why it ought to be argued for much more carefully, on both epistemological and ontological grounds, if it is to be used at all.
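To see what is at stake with linearity, here is a deliberately crude sketch of my own, contrasting a linear toy economy, where a shock twice as big has exactly twice the effect, with one containing a simple threshold nonlinearity. Nothing in it comes from Blanchard’s models; the persistence, threshold and amplification parameters are invented purely for illustration.

# Toy comparison (illustrative parameters only): linear vs threshold-nonlinear
# propagation of a one-off shock in a first-order model.
RHO = 0.6        # persistence of output deviations
THRESHOLD = 1.0  # above this, amplification kicks in (think: a credit constraint starts binding)
AMPLIFY = 1.5    # extra propagation once the threshold is crossed

def cumulative_effect(shock, nonlinear, periods=200):
    """Sum of output deviations after a single period-0 shock."""
    y, total = 0.0, 0.0
    for t in range(periods):
        y = RHO * y + (shock if t == 0 else 0.0)
        if nonlinear and abs(y) > THRESHOLD:
            y *= AMPLIFY   # amplification while the deviation stays above the threshold
        total += y
    return total

for s in (0.5, 1.0, 2.0):
    print(f"shock={s}: linear={cumulative_effect(s, False):.2f}, "
          f"nonlinear={cumulative_effect(s, True):.2f}")

In the linear case the responses scale exactly with the size of the shock. Once the threshold binds, doubling the shock multiplies its cumulative effect many times over, which is precisely the kind of behaviour that linearised models of ‘regular’ fluctuations rule out by construction.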

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable, in the sense that they do not change when we “export” them to our “target systems”, we have to be able to show that they do not hold only under ceteris paribus conditions, since in that case they would, a fortiori, be of only limited value for our understanding, explanations or predictions of real economic systems. As the always eminently quotable Keynes wrote (emphasis added) in Treatise on Probability (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appear more and more clearly as the ultimate result to which material science is tending.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort of providing a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument as a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that neither Haavelmo nor the legions of probabilistic econometricians following in his footsteps give supportive evidence for their considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth”. And I doubt if it has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-linear, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential causes, not real causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
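The exportability problem is easy to exhibit with a deliberately artificial example of my own (the data-generating process, the break point and the coefficients are all invented): estimate a ‘structural’ parameter in one regime and then ship the estimate to another regime where the underlying relation has shifted.

# Toy illustration (invented data): a parameter estimated in one regime is
# exported to another regime where the true relation has changed.
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(0.0, 1.0, n)

beta_old, beta_new = 1.5, -0.5                       # the 'fixed' parameter is not fixed
regime1 = np.arange(n) < n // 2
y = np.where(regime1, beta_old * x, beta_new * x) + rng.normal(0.0, 0.5, n)

# OLS slope (no intercept) estimated on the first regime only
x1, y1 = x[regime1], y[regime1]
beta_hat = (x1 * y1).sum() / (x1 ** 2).sum()

# 'Export' the estimate to the second regime
x2, y2 = x[~regime1], y[~regime1]
mse_in = ((y1 - beta_hat * x1) ** 2).mean()
mse_out = ((y2 - beta_hat * x2) ** 2).mean()

print(f"estimated beta in regime 1: {beta_hat:.2f}")
print(f"in-sample MSE:     {mse_in:.2f}")
print(f"out-of-regime MSE: {mse_out:.2f}")           # much larger, because the parameter moved

Nothing in the first-regime fit warns the econometrician that the estimated parameter has stopped being a parameter once the context changes.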

Real world social systems are not governed by stable causal mechanisms or capacities. As Keynes wrote in his critique of econometrics and inferential statistics already in the 1920s (emphasis added):

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of “laws” and relations that mainstream econ(ometr)ics has established, are laws and relations about entities in models that presuppose causal mechanisms being atomistic and linear (additive). When causal mechanisms operate in real world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of the achievements of econometrics — as most of contemporary endeavours of mainstream economic theoretical modeling — rather useless.

Market clearing and rational expectations — ideas that are neat, plausible and wrong

14 October, 2014 at 20:45 | Posted in Economics | Leave a comment


Unfortunately, in case it needs restating, freshwater economics turned out to be based on two ideas that aren’t true. The first (Fama) is that financial markets are efficient. The second (Lucas/Sargent/Wallace) is that the economy as a whole is a stable and self-correcting mechanism. The rational-expectations theorists didn’t refute Keynesianism: they assumed away the reason for its existence. Their models were based not just on rational expectations but on the additional assertion that markets clear more or less instantaneously. But were that true, there wouldn’t be any such thing as involuntary unemployment, or any need for counter-cyclical monetary policy.

John Cassidy
