Post-Keynesian economics — an introduction

22 October, 2014 at 00:02 | Posted in Economics | 1 Comment

 

[h/t Jan Milch]

DSGE models — a case of non-contagious rigour

21 October, 2014 at 18:05 | Posted in Economics | Leave a comment

Microfounded DSGE models standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market-clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.

Behavioural and experimental economics — not to speak of psychology — show beyond any doubt that “deep parameters” — people’s preferences, choices and forecasts — are regularly influenced by those of other participants in the economy. And how about the homogeneity assumption? If all actors are the same, why and with whom do they transact? And why does economics have to be exclusively teleological (concerned with the intentional states of individuals)? Where are the arguments for that ontological reductionism? And what about collective intentionality and constitutive background rules?

These are all justified questions – so, in what way can one maintain that these models give workable microfoundations for macroeconomics? Science philosopher Nancy Cartwright gives a good hint at how to answer that question:

Our assessment of the probability of effectiveness is only as secure as the weakest link in our reasoning to arrive at that probability. We may have to ignore some issues or make heroic assumptions about them. But that should dramatically weaken our degree of confidence in our final assessment. Rigor isn’t contagious from link to link. If you want a relatively secure conclusion coming out, you’d better be careful that each premise is secure going in.

The Ten Commandments of econometrics

21 October, 2014 at 13:31 | Posted in Statistics & Econometrics | Leave a comment


  1. Always, but always, plot your data.
  2. Remember that data quality is at least as important as data quantity.
  3. Always ask yourself, “Do these results make economic/common sense?”
  4. Check whether your “statistically significant” results are also “numerically/economically significant” [a small sketch follows below].
  5. Be sure that you know exactly what assumptions are used/needed to obtain the results relating to the properties of any estimator or test that you use.
  6. Just because someone else has used a particular approach to analyse a problem that looks like yours, that doesn’t mean they were right!
  7. “Test, test, test”! (David Hendry). But don’t forget that “pre-testing” raises some important issues of its own.
  8. Don’t assume that the computer code that someone gives to you is relevant for your application, or that it even produces correct results.
  9. Keep in mind that published results will represent only a fraction of the results that the author obtained, but is not publishing.
  10. Don’t forget that “peer-reviewed” does NOT mean “correct results”, or even “best practices were followed”.

Dave Giles
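
Giles’s fourth commandment is easy to demonstrate with a quick simulation. The sketch below is mine, not his (Python with numpy and statsmodels; all numbers are invented): with a big enough sample, a coefficient can be overwhelmingly “statistically significant” while explaining almost nothing.

```python
# A hedged illustration of commandment 4: statistical vs. economic significance.
# Not from Giles's post; assumes numpy and statsmodels are available.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1_000_000                        # a very large sample
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)    # true effect: 1/100 of a standard deviation

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(f"coefficient on x: {fit.params[1]:.4f}")   # about 0.01
print(f"p-value:          {fit.pvalues[1]:.1e}")  # essentially zero
print(f"R-squared:        {fit.rsquared:.6f}")    # about 0.0001
# Commandment 1 still applies: plot y against x and the "effect" is invisible.
```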

Data mining and the meaning of the Econometric Scripture

20 October, 2014 at 21:19 | Posted in Statistics & Econometrics | 1 Comment

Some variants of ‘data mining’ can be classified as the greatest of the basement sins, but other variants of ‘data mining’ can be viewed as important ingredients in data analysis. Unfortunately, these two variants usually are not mutually exclusive and so frequently conflict in the sense that to gain the benefits of the latter, one runs the risk of incurring the costs of the former.

Hoover and Perez (2000, p. 196) offer a general definition of data mining as referring to “a broad class of activities that have in common a search over different ways to process or package data statistically or econometrically with the purpose of making the final presentation meet certain design criteria.” Two markedly different views of data mining lie within the scope of this general definition. One view of ‘data mining’ is that it refers to experimenting with (or ‘fishing through’) the data to produce a specification … The problem with this, and why it is viewed as a sin, is that such a procedure is almost guaranteed to produce a specification tailored to the peculiarities of that particular data set, and consequently will be misleading in terms of what it says about the underlying process generating the data. Furthermore, traditional testing procedures used to ‘sanctify’ the specification are no longer legitimate, because these data, since they have been used to generate the specification, cannot be judged impartial if used to test that specification …

An alternative view of ‘data mining’ is that it refers to experimenting with (or ‘fishing through’) the data to discover empirical regularities that can inform economic theory … Hand et al (2000) describe data mining as the process of seeking interesting or valuable information in large data sets. Its greatest virtue is that it can uncover empirical regularities that point to errors/omissions in theoretical specifications …

In summary, this second type of ‘data mining’ identifies regularities in or characteristics of the data that should be accounted for and understood in the context of the underlying theory. This may suggest the need to rethink the theory behind one’s model, resulting in a new specification founded on a more broad-based understanding. This is to be distinguished from a new specification created by mechanically remolding the old specification to fit the data; this would risk incurring the costs described earlier when discussing the first variant of ‘data mining.’

The issue here is: how should the model specification be chosen? As usual, Leamer (1996, p. 189) has an amusing view: “As you wander through the thicket of models, you may come to question the meaning of the Econometric Scripture that presumes the model is given to you at birth by a wise and beneficent Holy Spirit.”

In practice, model specifications come from both theory and data, and given the absence of Leamer’s Holy Spirit, properly so.

Peter Kennedy
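
To see why Kennedy calls the first variant a sin, here is a minimal Monte Carlo sketch (my illustration in Python with numpy and statsmodels; none of it comes from Kennedy’s text). Fish through a couple of dozen irrelevant candidate regressors and, more often than not, at least one of them clears the conventional 5 per cent threshold even though every true coefficient is zero.

```python
# A hedged sketch of the 'sinful' variant of data mining: fishing through
# irrelevant regressors until something looks significant. Not Kennedy's code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs, n_candidates, n_trials = 100, 20, 1_000
spurious_hits = 0

for _ in range(n_trials):
    y = rng.normal(size=n_obs)                    # the outcome is pure noise
    X = rng.normal(size=(n_obs, n_candidates))    # 20 unrelated regressors
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    if (fit.pvalues[1:] < 0.05).any():            # any regressor "significant"?
        spurious_hits += 1

# Typically well over half of the trials produce at least one "finding",
# which is why tests run on the specification-selecting data prove little.
print(f"share of trials with a spurious 'result': {spurious_hits / n_trials:.2f}")
```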

Microfounded DSGE models — a total waste of time!

20 October, 2014 at 15:21 | Posted in Economics | Leave a comment

In conclusion, one can say that the sympathy that some of the traditional and Post-Keynesian authors show towards DSGE models is rather hard to understand. Even before the recent financial and economic crisis put some weaknesses of the model – such as the impossibility of generating asset price bubbles or the lack of inclusion of financial sector issues – into the spotlight and brought them even to the attention of mainstream media, the models’ inner workings were highly questionable from the very beginning. While one can understand that some of the elements in DSGE models seem to appeal to Keynesians at first sight, after closer examination, these models are in fundamental contradiction to Post-Keynesian and even traditional Keynesian thinking. The DSGE model is a model in which output is determined in the labour market as in New Classical models and in which aggregate demand plays only a very secondary role, even in the short run.

In addition, given the fundamental philosophical problems presented for the use of DSGE models for policy simulation, namely the fact that a number of parameters used have completely implausible magnitudes and that the degree of freedom for different parameters is so large that DSGE models with fundamentally different parametrization (and therefore different policy conclusions) equally well produce time series which fit the real-world data, it is also very hard to understand why DSGE models have reached such a prominence in economic science in general.

Sebastian Dullien

Neither New Classical nor “New Keynesian” microfounded DSGE macro models have helped us foresee, understand or craft solutions to the problems of today’s economies. But still most young academic macroeconomists want to work with DSGE models. After reading Dullien’s article, that should be a worrying confirmation that economics — at least from the point of view of realism and relevance — is becoming more and more a waste of time. Why do these bright young people waste their time and effort? Besides aspirations of being published, I think Frank Hahn perhaps gave the truest answer back in 2005 when, interviewed on the occasion of his 80th birthday, he confessed that some economic assumptions didn’t really say anything about “what happens in the world,” but still had to be considered very good “because it allows us to get on this job.”

Watch out for econometric sinning in the basement!

19 October, 2014 at 17:00 | Posted in Statistics & Econometrics | 2 Comments

Brad DeLong wonders why Cliff Asness is clinging to a theoretical model that has clearly been rejected by the data …

 
There’s a version of this in econometrics, i.e. you know the model is correct, you are just having trouble finding evidence for it. It goes as follows. You are testing a theory you came up with, but the data are uncooperative and say you are wrong. But instead of accepting that, you tell yourself “My theory is right, I just haven’t found the right econometric specification yet. I need to add variables, remove variables, take a log, add an interaction, square a term, do a different correction for misspecification, try a different sample period, etc., etc., etc.” Then, after finally digging out that one specification of the econometric model that confirms your hypothesis, you declare victory, write it up, and send it off (somehow never mentioning the intense specification mining that produced the result).

Too much econometric work proceeds along these lines. Not quite this blatantly, but that is, in effect, what happens in too many cases. I think it is often best to think of econometric results as the best case the researcher could make for a particular theory rather than a true test of the model.

Mark Thoma
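
Thoma’s “best case, not a true test” point can be made concrete with a small sketch (mine, not his; Python with numpy and statsmodels, everything invented): mine a batch of candidate specifications on one sample, keep the winner, and then see how it fares on fresh data from the same process.

```python
# A hedged sketch of specification mining: the 'winning' specification found
# by searching one sample rarely survives a fresh sample. Not Thoma's code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_obs, n_candidates = 80, 25

def best_single_regressor(y, X):
    """Fish through simple regressions and return the regressor with the
    smallest p-value, together with that p-value."""
    pvals = [sm.OLS(y, sm.add_constant(X[:, j])).fit().pvalues[1]
             for j in range(X.shape[1])]
    j = int(np.argmin(pvals))
    return j, pvals[j]

# Sample used for mining: in truth, no regressor matters at all.
X = rng.normal(size=(n_obs, n_candidates))
y = rng.normal(size=n_obs)
j, p_mined = best_single_regressor(y, X)
print(f"mined regressor {j}: in-sample p-value = {p_mined:.3f}")   # often < 0.05

# A genuine test: the same specification on fresh data from the same process.
X_new = rng.normal(size=(n_obs, n_candidates))
y_new = rng.normal(size=n_obs)
p_fresh = sm.OLS(y_new, sm.add_constant(X_new[:, j])).fit().pvalues[1]
print(f"same regressor, fresh sample: p-value = {p_fresh:.3f}")    # usually not
```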

Mark touches the spot — and for the sake of balancing the overly rosy picture of econometric achievements given in the usual econometrics textbooks today, it may also be interesting to see how Trygve Haavelmo, with the completion (in 1958) of the twenty-fifth volume of Econometrica, assessed the role of econometrics in the advancement of economics. Although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair:

We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact “correct” or “true,” we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be … But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.
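
Haavelmo’s “dream-world of large but somewhat superficial or spurious correlations” is easy to reproduce. The sketch below is my illustration, not his (Python with numpy): two random walks generated completely independently of each other still show sizeable correlations much of the time.

```python
# A hedged illustration of spurious correlation: independent random walks
# frequently look strongly related. Nothing here comes from Haavelmo's text.
import numpy as np

rng = np.random.default_rng(7)
n_periods, n_trials = 200, 1_000
large_correlations = 0

for _ in range(n_trials):
    walk_a = np.cumsum(rng.normal(size=n_periods))   # two unrelated random walks
    walk_b = np.cumsum(rng.normal(size=n_periods))
    r = np.corrcoef(walk_a, walk_b)[0, 1]
    if abs(r) > 0.5:
        large_correlations += 1

# A sizeable share of the trials show |correlation| above 0.5 even though the
# two series have nothing whatsoever to do with each other.
print(f"share of trials with |r| > 0.5: {large_correlations / n_trials:.2f}")
```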

And as the quote below shows, Frisch also shared some of Haavelmo’s — and Keynes’s — doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if they become applied in macroeconomic decision models where the line of thought is the following: “If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created”.

Ragnar Frisch

Slippery slope arguments

19 October, 2014 at 16:10 | Posted in Theory of Science & Methodology | Leave a comment

 

Germany is turning EU recovery into recession

19 October, 2014 at 14:25 | Posted in Economics, Politics & Society | Leave a comment

Beppe Grillo, the comedian-turned-rebel leader of Italian politics, must have laughed heartily. No sooner had he announced to supporters that the euro was “a total disaster” than the currency union was driven to the brink of catastrophe once again.

Grillo launched a campaign in Rome last weekend for a 1 million-strong petition against the euro, saying: “We have to leave the euro as soon as possible and defend the sovereignty of the Italian people from the European Central Bank.”

Hours later markets slumped on news that the 18-member eurozone was probably heading for recession. And there was worse to come. Greece, the trigger for the 2010 euro crisis, saw its borrowing rates soar, putting it back on the “at-risk register”. Investors, already digesting reports of slowing global growth, were also spooked by reports that a row in Brussels over spending caps on France and Italy had turned nasty …

In the wake of the 2008 global financial crisis, voters backed austerity and the euro in expectation of a debt-reducing recovery. But as many Keynesian economists warned, this has proved impossible. More than five years later, there are now plenty of voters willing to call time on the experiment, Grillo among them. And there seems to be no end to austerity-driven low growth in sight. The increasingly hard line taken by Berlin over the need for further reforms in debtor nations such as Greece and Italy – by which it means wage cuts – has worked to turn a recovery into a near recession.

Angela Merkel and her finance minister Wolfgang Schäuble are shaping up to fight all comers over maintaining the 3% budget deficit limit and already-agreed austerity measures.

Even if France and Italy find a fudge to bypass the deficit rule, they will be prevented from embarking on the Marshall Plan each believes is needed to turn their economies around. Hollande wants an EU-wide €300bn stimulus to boost investment and jobs – something that is unlikely to ever get off the ground …

So a rally is likely to be short-lived. Volatility is here to stay. The only answer comes from central bankers, who propose pumping more funds into the financial system to bring down the cost of credit and encourage lending and, hopefully, sustainable growth …

Andy Haldane, the chief economist at the Bank of England, said he was gloomier now than at any time this year. He expects interest rates to stay low until at least next summer.

It’s not a plan with much oomph. Most economists believe the impact of central bank money is waning. Yet without growth and the hope of well-paid jobs for young people, parents across the EU who previously feared for their savings following a euro exit appear ready to consider the potential benefits of a break-up. There is a Grillo in almost every eurozone nation. Now that would bring real volatility.

The Observer

What’s behind rising wealth inequality?

19 October, 2014 at 14:00 | Posted in Economics | 1 Comment

The Initiative on Global Markets at the University of Chicago yesterday released a survey of a panel of highly regarded economists asking about rising wealth inequality. Specifically, IGM asked if the difference between the after-tax rate of return on capital and the growth rate of the overall economy was the “most powerful force pushing towards greater wealth inequality in the United States since the 1970s.”

The vast majority of the economists disagreed with the statement. As would economist Thomas Piketty, the originator of the now famous r > g inequality. He explicitly states that rising inequality in the United States is about rising labor income at the very top of the income distribution. As Emmanuel Saez, an economist at the University of California, Berkeley and a frequent Piketty collaborator, points out, r > g is a prediction about the future.

But if wealth inequality has risen in the United States over the past four decades, what has been behind the rise? A new paper by Saez and the London School of Economics’ Gabriel Zucman provides an answer: the calcification of income inequality into wealth inequality …


 

Nick Bunker

Lies that economics is built on

18 October, 2014 at 10:38 | Posted in Statistics & Econometrics | 2 Comments

Peter Dorman is one of those rare economists whom it is always a pleasure to read. Here his critical eye is focused on economists’ infatuation with homogeneity and averages:

You may feel a gnawing discomfort with the way economists use statistical techniques. Ostensibly they focus on the difference between people, countries or whatever the units of observation happen to be, but they nevertheless seem to treat the population of cases as interchangeable—as homogeneous on some fundamental level. As if people were replicants.

You are right, and this brief talk is about why and how you’re right, and what this implies for the questions people bring to statistical analysis and the methods they use.

Our point of departure will be a simple multiple regression model of the form

y = β0 + β1x1 + β2x2 + … + ε

where y is an outcome variable, x1 is an explanatory variable of interest, the other x’s are control variables, the β’s are coefficients on these variables (or a constant term, in the case of β0), and ε is a vector of residuals. We could apply the same analysis to more complex functional forms, and we would see the same things, so let’s stay simple.

What question does this model answer? It tells us the average effect that variations in x1 have on the outcome y, controlling for the effects of other explanatory variables. Repeat: it’s the average effect of x1 on y.

This model is applied to a sample of observations. What is assumed to be the same for these observations? (1) The outcome variable y is meaningful for all of them. (2) The list of potential explanatory factors, the x’s, is the same for all. (3) The effects these factors have on the outcome, the β’s, are the same for all. (4) The proper functional form that best explains the outcome is the same for all. In these four respects all units of observation are regarded as essentially the same.

Now what is permitted to differ across these observations? Simply the values of the x’s and therefore the values of y and ε. That’s it.

Thus measures of the difference between individual people or other objects of study are purchased at the cost of immense assumptions of sameness. It is these assumptions that both reflect and justify the search for average effects …

In the end, statistical analysis is about imposing a common structure on observations in order to understand differentiation. Any structure requires assuming some kinds of sameness, but some approaches make much more sweeping assumptions than others. An unfortunate symbiosis has arisen in economics between statistical methods that excessively rule out diversity and statistical questions that center on average (non-diverse) effects. This is damaging in many contexts, including hypothesis testing, program evaluation, forecasting—you name it …

The first step toward recovery is admitting you have a problem. Every statistical analyst should come clean about what assumptions of homogeneity are being made, in light of their plausibility and the opportunities that exist for relaxing them.
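
A minimal sketch of Dorman’s point about average effects (my construction, in Python with numpy and statsmodels; nothing here comes from his talk): pool two groups whose true effects of x1 have opposite signs and the regression dutifully reports an “average effect” near zero that describes neither group.

```python
# A hedged sketch of the heterogeneity problem: an "average effect" that
# describes nobody in the sample. Not Dorman's code; all numbers invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_per_group = 500
x = rng.normal(size=2 * n_per_group)

# Group A: the effect of x is +1.  Group B: the effect of x is -1.
beta = np.concatenate([np.ones(n_per_group), -np.ones(n_per_group)])
y = beta * x + rng.normal(scale=0.5, size=2 * n_per_group)

pooled = sm.OLS(y, sm.add_constant(x)).fit()
print(f"pooled 'average effect' of x: {pooled.params[1]:+.2f}")    # close to zero

groups = {"group A": slice(0, n_per_group), "group B": slice(n_per_group, None)}
for name, sl in groups.items():
    fit = sm.OLS(y[sl], sm.add_constant(x[sl])).fit()
    print(f"effect of x in {name}:        {fit.params[1]:+.2f}")   # about +1 / -1
```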

Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance, and, though perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model.

Real-world social systems are not governed by stable causal mechanisms or capacities. If economic regularities obtain, they do so — as a rule — only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes them rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage The Flaw of Averages
