Why experimental economics is no real alternative to neoclassical economics

17 October, 2015 at 20:17 | Posted in Economics, Theory of Science & Methodology

One obvious interpretation is that Model-Platonism implies that economic models are Platonic, if they take the form of thought-experiments, which use idealized conceptions of certain objects or entities (Platonic archetypes). This clearly sounds like a pretty familiar procedure, although the rationale for the thought-experimental character of economic models (if they are conceived as such) has been transformed over the years from apriorism to story-telling ‘without empirical commitment’ …

When interpreting economic models as thought-experiments, that is, as mainly speculative endeavours, these models operate without any obligation to accommodate empirical results. However, if the idealized assumptions employed in these models are interpreted as true forms in a Platonic sense they gain strong empirical relevance, since they are assumed to provide us with a form of knowledge far superior to observational data …

Model-Platonism as an epistemological concept can be understood as the combination of the following two routines: the reliance on a thought-experimental style of theorizing as well as the introduction of idealized, metaphysical and, hence, ‘Platonic’ arguments in the form of basic assumptions … We still find resemblances of the Platonic idea of ‘superior insights’ through ‘true forms’ in economic models. Thereby, these insights are sometimes even believed to be generally immune to conflicting empirical evidence …

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research where empirical results challenge standard views on economic behaviour, like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the practice of explaining cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

Jakob Kapeller

For more on model Platonism in economics see here and here.


Deborah Mayo vs. Andrew Gelman on the (non-)significance of significance tests

17 October, 2015 at 10:34 | Posted in Statistics & Econometrics


You seem to have undergone a gestalt switch from the Gelman of a short time ago–the one who embraced significance tests …


I believed, and still believe, in checking the fit of a model by comparing data to hypothetical replications. This is not the same as significance testing in which a p-value is used to decide whether to reject a model or whether to believe that a finding is true.

I don’t know that significance tests are used to decide that a finding is true, and I’m surprised to see you endorsing/spreading the hackneyed and much lampooned view of significance tests, p-values, etc. despite so many of us trying to correct the record. And statistical hypothesis testing denies uncertainty? Where in the world do you get this? (I know it’s not because they don’t use posterior probabilities…)
But never mind, let me ask: when you check the fit of a model using p-value assessments, are you not inferring the adequacy/inadequacy of the model? Tell me what you are doing if not. I don’t particularly like calling it a decision, neither do many people, and I like viewing the output as “whether to believe” even less. But I don’t know what your output is supposed to be.

I don’t think hypothesis testing inherently denies uncertainty. But I do think that it is used by many researchers as a way of avoiding uncertainty: it’s all too common for “significant” to be interpreted as “true” and “non-significant” to be interpreted as “zero.” Consider, for example, all the trash science we’ve been discussing on this blog recently, studies that may have some scientific content but which get ruined by their authors’ deterministic interpretations.
When I check the fit of a model, I’m assessing its adequacy for some purpose. This is not the same as looking for p < .05 or p < .01 in order to go around saying that some theory is now true.

I fail to see how a deterministic interpretation could go hand in hand with error probabilities; and I never hear even the worst test abusers declare a theory is now true, give me a break…
So when you assess adequacy for a purpose, what does this mean? Adequate vs inadequate for a purpose is pretty dichotomous. Do you assess how adequate? I’m unclear as to where the uncertainty enters for you because, as I understand it, it is not in terms of a posterior probability.


Here’s a quote from a researcher, I posted it on the blog a few days ago: “Our results demonstrate that physically weak males are more reluctant than physically strong males to assert their self-interest…”
Here’s another quote: “Ovulation led single women to become more liberal, less religious, and more likely to vote for Barack Obama. In contrast, ovulation led married women to become more conservative, more religious, and more likely to vote for Mitt Romney.”
These are deterministic statements based on nothing more than p-values that happen to be statistically significant. Researchers make these sorts of statements all the time. It’s not your fault, I’m not saying you would do this, but it’s a serious problem.
Along similar lines, we’ll see claims that a treatment has an effect on men and not on women, when really what is happening is that p < .05 for the men in the study and p > .05 for the women.
In addition to brushing away uncertainty, people also seem to want to brush away variation, thus talking about “the effect” as if it is a constant across all groups and all people. A recent example featured on this blog was a study primarily of male college students which was referred to repeatedly (by its authors, not just by reporters and public relations people) as a study of “men” with no qualifications.
P.S. Bayesians do this too, indeed there’s a whole industry (which I hate) of Bayesian methods for getting the posterior probability that a null hypothesis is true. Bayesians use different methods but often have the misguided goal of other statisticians, to deny uncertainty and variation.
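Gelman’s men-versus-women example illustrates a well-known statistical point: the difference between ‘significant’ and ‘non-significant’ is not itself statistically significant. A minimal simulation, with entirely made-up numbers, sketches how the fallacy arises:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: the true treatment effect is identical (0.2 sd)
# for men and women; only the sample sizes differ.
true_effect = 0.2
n_men, n_women = 200, 80

def estimate(n):
    """Effect estimate, standard error and two-sided p-value for one comparison."""
    treated = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    diff = treated.mean() - control.mean()
    se = math.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    p = math.erfc(abs(diff / se) / math.sqrt(2))  # two-sided normal p-value
    return diff, se, p

d_men, se_men, p_men = estimate(n_men)
d_women, se_women, p_women = estimate(n_women)
print(f"men:   effect {d_men:+.2f}, p = {p_men:.3f}")
print(f"women: effect {d_women:+.2f}, p = {p_women:.3f}")

# The fallacy is to report "an effect for men but not for women" whenever
# one p-value lands below .05 and the other above it, even though a
# direct test of the difference between the two estimates may show nothing:
d_diff = d_men - d_women
se_diff = math.sqrt(se_men**2 + se_women**2)
p_diff = math.erfc(abs(d_diff / se_diff) / math.sqrt(2))
print(f"difference between groups: {d_diff:+.2f}, p = {p_diff:.3f}")
```

With identical true effects, the smaller sample’s larger standard error can easily push one p-value across the .05 threshold and not the other, which is exactly the pattern that gets misreported as a qualitative difference between groups.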


These moves from observed associations, and even correlations, to causal claims are poorly warranted, but these are classic fallacies that go beyond tests, to reading all manner of “explanations” into the data. I find it very odd to view this as a denial of uncertainty by significance tests. Even if they got their statistics right, the gap between statistical and substantive causal claims would remain. I just find it odd to regard the statistical vs substantive and correlation vs cause fallacies, which every child knows, as some kind of shortcoming of significance tests. Any method, or no method, can commit these fallacies, especially in observational studies. But when you berate the tests as somehow responsible, you misleadingly suggest that other methods are better, rather than worse. At least error statistical methods can identify the flaws at three levels (data, statistical inference, statistical-to-substantive causal claim) in a systematic way. We can spot the flaws a mile off…
I still don’t know where you want the uncertainty to show up; I’ve indicated how I do.


You write, “I still don’t know where you want the uncertainty to show up;” I want the uncertainty to show up in a posterior distribution for continuous parameters, as described in my books.

You write, “I want the uncertainty to show up in a posterior distribution for continuous parameters”. Let’s see if I have this right. You would report the posterior probabilities that a model was adequate for a goal, yes? Now, you have also said you are a falsificationist. So is your falsification rule to move from a low enough posterior probability in the adequacy of a model to the falsity of the claim that the model is adequate (for the goal)? And would a high enough posterior in the adequacy of a model translate into something like not being able to falsify its adequacy, or perhaps accepting it as adequate (the latter would not be falsificationist, but might be more sensible than the former)? Or are you no longer falsificationist-leaning?

No, I would not “report the posterior probabilities that a model was adequate for a goal.” That makes no sense to me. I would report the posterior distribution of parameters and make probabilistic predictions within a model.

Well, if you’re going to falsify as a result, you need a rule that takes you from these posteriors to inferring whether the predictions are met satisfactorily or not. Otherwise there is no warrant for rejecting or improving the model. That’s the kind of thing significance tests can do. But specifically, with respect to the misleading interpretations of data that you were just listing, it isn’t obvious how you avoid them. The data may fit these hypotheses swimmingly.
Anyhow, this is not the place to discuss this further. In signing off, I just want to record my objection to (mis)portraying statistical tests and other error statistical methods as flawed because of some blatant, age-old misuses or misleading language, like “demonstrate” (flaws that are at least detectable and self-correctable by these same methods, whereas they might remain hidden by other methods now in use). [Those examples should not even be regarded as seeking evidence but at best colorful and often pseudoscientific interpretations.] When the Higgs particle physicists found their 2 and 3 standard deviation effects were disappearing with new data—just to mention a recent example from my blog—they did not say the flaw was with the p-values! They tightened up their analyses and made them more demanding. They didn’t report posterior distributions for the properties of the Higgs, but they were able to make inferences about their values, and identify gaps for further analysis.

Statistical Modeling, Causal Inference, and Social Science
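The kind of model check Gelman describes (comparing observed data to hypothetical replications, and summarizing the comparison with a predictive p-value that assesses adequacy rather than ‘truth’) can be sketched in a few lines. The data, the fitted model, and the test quantity below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: skewed observations that a normal model will fit poorly.
y = rng.exponential(scale=1.0, size=100)

# Fit the (deliberately wrong) normal model by plugging in sample moments.
mu_hat, sigma_hat = y.mean(), y.std(ddof=1)

# Hypothetical replications: datasets the fitted model would generate.
# Compare a test quantity (here the sample maximum) with its
# distribution over those replications.
T_obs = y.max()
T_rep = np.array([
    rng.normal(mu_hat, sigma_hat, size=y.size).max()
    for _ in range(2000)
])

# The predictive p-value measures the model's adequacy for this feature
# of the data; it is not a decision to reject, or to call a finding true.
p_pred = np.mean(T_rep >= T_obs)
print(f"observed max = {T_obs:.2f}, predictive p = {p_pred:.3f}")
```

A predictive p near zero would flag that the normal model cannot reproduce the long right tail of the data, which is a statement about fit for a purpose, not about any substantive hypothesis being true or false.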

For my own take on significance tests see here.

The Swedish for-profit ‘free’ school disaster

16 October, 2015 at 16:18 | Posted in Education & School

Gustav Fridolin, Sweden’s rather youthful education minister, emerges from behind his desk in a pleasant office in central Stockholm wearing what looks like a pair of Vans and the open, fresh-faced smile of a newly qualified teacher.

The smile falters when he begins to describe the plight of Sweden’s schools and the scale of the challenge that lies ahead. Fridolin, it turns out, is the man in charge of rescuing a school system in crisis.

Sweden, once regarded as a byword for high-quality education – free preschool, formal school at seven, no fee-paying private schools, no selection – has seen its scores in the Programme for International Student Assessment (Pisa) plummet in recent years.

Fridolin acknowledges the sense of shame and embarrassment felt in Sweden. “The problem is that this embarrassment is carried by the teachers. But this embarrassment should be carried by us politicians. We were the ones who created the system. It’s a political failure,” he says …

Fridolin, who has a degree in teaching, says not only have scores in international tests gone down, inequality in the Swedish system has gone up. “This used to be the great success story of the Swedish system,” he said. “We could offer every child, regardless of their background, a really good education. The parents’ educational background is showing more and more in their grades.

“Instead of breaking up social differences and class differences in the education system, we have a system today that’s creating a wider gap between the ones that have and the ones that have not.”

Sally Weale/The Guardian

Ray Fisman has waged a similar critique against choice-based solutions in attempts to improve educational systems:

What’s caused the recent crisis in Swedish education? Researchers and policy analysts are increasingly pointing the finger at many of the choice-oriented reforms that are being championed as the way forward for American schools. While this doesn’t necessarily mean that adding more accountability and discipline to American schools would be a bad thing, it does hint at the many headaches that can come from trying to do so by aggressively introducing marketlike competition to education …

In the wake of the country’s nose dive in the PISA rankings, there’s widespread recognition that something’s wrong with Swedish schooling … Competition was meant to discipline government schools, but it may have instead led to a race to the bottom …

It’s the darker side of competition that Milton Friedman and his free-market disciples tend to downplay: If parents value high test scores, you can compete for voucher dollars by hiring better teachers and providing a better education—or by going easy in grading national tests. Competition was also meant to discipline government schools by forcing them to up their game to maintain their enrollments, but it may have instead led to a race to the bottom as they too started grading generously to keep their students …

It’s a lesson that Swedish parents and students have learned all too well: Simply opening the floodgates to more education entrepreneurs doesn’t disrupt education. It’s just plain disruptive.

And this is what Henry M. Levin — distinguished economist and director of the National Center for the Study of Privatization in Education at Teachers College, Columbia University — wrote when he recently reviewed the evidence about the effects of vouchers:

On December 3, 2012, Forbes Magazine recommended for the U.S. that: “…we can learn something about when choice works by looking at Sweden’s move to vouchers.” On March 11 and 12, 2013, the Royal Swedish Academy of Sciences did just that by convening a two-day conference to learn what vouchers had accomplished in the last two decades … The following was my verdict:

  • On the criterion of Freedom of Choice, the approach has been highly successful. Parents and students have many more choices among both public schools and independent schools than they had prior to the voucher system.
  • On the criterion of productive efficiency, the research studies show virtually no difference in achievement between public and independent schools for comparable students. Measures of the extent of competition in local areas also show a trivial relation to achievement. The best study measures the potential choices, public and private, within a particular geographical area. For a 10 percent increase in choices, the achievement difference is about one-half of a percentile. Even this result must be understood within the constraint that the achievement measure is not based upon standardized tests, but upon teacher grades. The so-called national examination result that is also used in some studies is actually administered and graded by the teacher with examination copies available to the school principal and teachers well in advance of the “testing”. Another study found no difference in these achievement measures between public and private schools, but an overall achievement effect for the system of a few percentiles. Even this author agreed that the result was trivial.
  • With respect to equity, a comprehensive, national study sponsored by the government found that socio-economic stratification had increased, as well as ethnic and immigrant segregation. This also affected the distribution of personnel, with the better qualified educators drawn to schools with students of higher socio-economic status and native students. The international testing also showed rising variance or inequality in test scores among schools. No evidence existed to challenge the rising inequality. Accordingly, I rated the Swedish voucher system as negative on equity.

A recent Swedish study on the effects of school-choice concluded:

The results from the analyses made in this paper confirm that school choice, rather than residential segregation, is the more important factor determining variation in grades.

The empirical analysis in this paper confirms the PISA-based finding that between-school variance in student performance in the Swedish school system has increased rapidly since 2000. We have also been able to show that this trend towards increasing performance gaps cannot be explained by shifting patterns of residential segregation. A more likely explanation is that increasing possibilities for school choice have triggered a process towards a more unequal school system. A rapid growth in the number of students attending voucher-financed, independent schools has been an important element of this process …

The idea of voucher-based independent school choice is commonly ascribed to Milton Friedman. Friedman’s argument was that vouchers would decrease the role of government and expand the opportunities for free enterprise. He also believed that the introduction of competition would lead to improved school results. As we have seen in the Swedish case, this has not happened. As school choice has increased, differences between schools have increased but overall results have gone down. As has proved to be the case with other neo-liberal ideas, school choice—when tested—has not been able to deliver the results promised by theoretical speculation.

John Östh, Eva Andersson, Bo Malmberg

For my own take on this issue — only in Swedish, sorry — see here and here.

Why the euro divides Europe

16 October, 2015 at 11:56 | Posted in Economics

The ‘European idea’—or better: ideology—notwithstanding, the euro has split Europe in two. As the engine of an ever-closer union the currency’s balance sheet has been disastrous. Norway and Switzerland will not be joining the EU any time soon; Britain is actively considering leaving it altogether. Sweden and Denmark were supposed to adopt the euro at some point; that is now off the table. The Eurozone itself is split between surplus and deficit countries, North and South, Germany and the rest. At no point since the end of World War Two have its nation-states confronted each other with so much hostility; the historic achievements of European unification have never been so threatened …

Anyone wishing to understand how an institution such as the single currency can wreak such havoc needs a concept of money that goes beyond that of the liberal economic tradition and the sociological theory informed by it. The conflicts in the Eurozone can only be decoded with the aid of an economic theory that can conceive of money not merely as a system of signs that symbolize claims and contractual obligations, but also, in tune with Weber’s view, as the product of a ruling organization, and hence as a contentious and contested institution with distributive consequences full of potential for conflict …

Now more than ever there is a grotesque gap between capitalism’s intensifying reproduction problems and the collective energy needed to resolve them … This may mean that there is no guarantee that the people who have been so kind as to present us with the euro will be able to protect us from its consequences, or will even make a serious attempt to do so. The sorcerer’s apprentices will be unable to let go of the broom with which they aimed to cleanse Europe of its pre-modern social and anti-capitalist foibles, for the sake of a neoliberal transformation of its capitalism. The most plausible scenario for the Europe of the near and not-so-near future is one of growing economic disparities—and of increasing political and cultural hostility between its peoples, as they find themselves flanked by technocratic attempts to undermine democracy on the one side, and the rise of new nationalist parties on the other. These will seize the opportunity to declare themselves the authentic champions of the growing number of so-called losers of modernization, who feel they have been abandoned by a social democracy that has embraced the market and globalization.

Wolfgang Streeck

[h/t Jan Milch]

Dumb and Dumber — the Chicago version

15 October, 2015 at 18:15 | Posted in Economics


Macroeconomics was born as a distinct field in the 1940s (sic!), as a part of the intellectual response to the Great Depression. The term then referred to the body of knowledge and expertise that we hoped would prevent the recurrence of that economic disaster. My thesis in this lecture is that macroeconomics in this original sense has succeeded: Its central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.

Robert Lucas (2003)

In the past, I think you have been quoted as saying that you don’t even believe in the possibility of bubbles.

Eugene Fama: I never said that. I want people to use the term in a consistent way. For example, I didn’t renew my subscription to The Economist because they use the word bubble three times on every page. Any time prices went up and down—I guess that is what they call a bubble. People have become entirely sloppy. People have jumped on the bandwagon of blaming financial markets. I can tell a story very easily in which the financial markets were a casualty of the recession, not a cause of it.

That’s your view, correct?

Fama: Yeah.

John Cassidy

Angus Deaton on the limited value of RCTs

14 October, 2015 at 18:04 | Posted in Economics

I think economists, especially development economists, are sort of like economists in the 50’s with regressions. They have a magic tool but they don’t yet have much of an idea of the problems with that magic tool. And there are a lot of them. I think it’s just like any other method of estimation, it has its advantages and disadvantages. I think RCTs rarely meet the hype. People turned to RCTs because they got tired of all the arguments over observational studies about exogeneity and instruments and sample selectivity and all the rest of it. But all of those problems come back in somewhat different forms in RCTs …

People tend to split the issues into internal and external validity. There are a lot of problems with that distinction but it is a way of thinking about some of the issues. For instance if you go back to the 70s and 80s and you read what was written then, people thought quite hard about how you take the result from one experiment and how it would apply somewhere else. I see much too little of that in the development literature today …

For instance, in a newspaper story about economists’ experiments that I read today, a reporter wrote that an RCT allows you to establish causality for sure. But that statement is absurd. There’s a standard error, for a start, and there are lots of cases where it is hard to get the standard errors right. And even if we have causality, we need an argument that causality will work in the same way somewhere else, let alone in general.

I think we are in something of a mess on this right now. There’s just a lot of stuff that’s not right. There is this sort of belief in magic, that RCTs are attributed with properties that they do not possess. For example, RCTs are supposed to automatically guarantee balance between treatment and controls. And there is an almost routine confusion that RCTs are somehow reliable, or that unbiasedness implies reliability …

We often find a randomized control trial with only a handful of observations in each arm and with enormous standard errors. But that’s preferred to a potentially biased study that uses 100 million observations. That just makes no sense. Each study has to be considered on its own. RCTs are fine, but they are just one of the techniques in the armory that one would use to try to discover things. Gold standard thinking is magical thinking.

Angus Deaton
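Deaton’s contrast between a noisy-but-unbiased small RCT and a precisely-estimated-but-biased large observational study can be made concrete with a toy simulation. All the numbers below (effect size, bias, sample sizes) are assumptions for illustration, not estimates from any real study:

```python
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.10  # assumed true treatment effect, in outcome units

# A small RCT: unbiased, but with a handful of observations per arm
# the standard error dwarfs the effect being estimated.
n_arm = 25
treated = rng.normal(true_effect, 1.0, n_arm)
control = rng.normal(0.0, 1.0, n_arm)
rct_est = treated.mean() - control.mean()
rct_se = np.sqrt(treated.var(ddof=1) / n_arm + control.var(ddof=1) / n_arm)

# A huge observational study: tiny standard error, but a confounding
# bias (assumed 0.05 here) that randomization would have removed.
n_obs = 1_000_000
bias = 0.05
exposed = rng.normal(true_effect + bias, 1.0, n_obs)
unexposed = rng.normal(0.0, 1.0, n_obs)
obs_est = exposed.mean() - unexposed.mean()
obs_se = np.sqrt(exposed.var(ddof=1) / n_obs + unexposed.var(ddof=1) / n_obs)

print(f"RCT:           {rct_est:+.3f} (se {rct_se:.3f})")
print(f"Observational: {obs_est:+.3f} (se {obs_se:.4f})")
# Neither dominates: the RCT is noisy, the big study is precisely biased.
```

Neither design automatically wins: the RCT’s standard error here is larger than the effect itself, while the observational estimate is precisely wrong. As Deaton puts it, each study has to be considered on its own.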

What makes arguments successful

14 October, 2015 at 15:43 | Posted in Theory of Science & Methodology


Economics textbooks — when the model becomes the message

14 October, 2015 at 10:43 | Posted in Economics

Wendy Carlin and David Soskice have a new intermediate macroeconomics textbook — Macroeconomics: Institutions, Instability, and the Financial System (Oxford University Press 2015) — out on the market. More than most other intermediate macroeconomics textbooks, it aims to supply the student with a “systematic way of thinking through problems” with the help of formal-mathematical models.

Carlin and Soskice explicitly adopt a ‘New Keynesian’ framework, including price rigidities and adding a financial system to the usual neoclassical macroeconomic set-up. But although I find things like the latter amendment an improvement, their methodological stance is definitely harder to swallow, especially their unproblematized acceptance of the need for macroeconomic microfoundations.

Some months ago, another sorta-kinda ‘New Keynesian’, Paul Krugman, argued on his blog that the problem with the academic profession is that some macroeconomists aren’t “bothered to actually figure out” how the New Keynesian model with its Euler conditions — “based on the assumption that people have perfect access to capital markets, so that they can borrow and lend at the same rate” — really works. According to Krugman, this shouldn’t be hard at all — “at least it shouldn’t be for anyone with a graduate training in economics.”

Carlin & Soskice seem to share Krugman’s attitude. From the first page of the book they start to elaborate their preferred three-equation ‘New Keynesian’ macromodel, and after only twenty-two pages they are already specifying the demand side with the help of the Permanent Income Hypothesis and its Euler equations.

But if people — not the representative agent — at least sometimes can’t help being off their labour supply curve, as in the real world, then what good are the hordes of Euler equations that you find ad nauseam in these ‘New Keynesian’ macromodels?

It is clear that the New Keynesian model, even extended to allow, say, for presence of investment and capital accumulation, or for the presence of both discrete price and nominal wage setting, is still just a toy model, and that it lacks many of the details which might be needed to understand fluctuations …

One striking (and unpleasant) characteristic of the basic New Keynesian model is that there is no unemployment! Movements take place along a labor supply curve, either at the intensive margin (with workers varying hours) or at the extensive margin (with workers deciding whether or not to participate). One has a sense, however, that this may give a misleading description of fluctuations, in positive terms, and, even more so, in normative terms: The welfare cost of fluctuations is often thought to fall disproportionately on the unemployed.

Olivier Blanchard

Yours truly’s doubt regarding the ‘New Keynesian’ modelers’ obsession with Euler equations is basically this: as with so many other assumptions in ‘modern’ macroeconomics, the Euler equations don’t fit reality.

For the uninitiated, the Consumption Euler Equation is sort of like the Flux Capacitor that powers all modern “DSGE” macro models … Basically, it says that how much you decide to consume today vs. tomorrow is determined by the interest rate (which is how much you get paid to put off your consumption til tomorrow), the time preference rate (which is how impatient you are) and your expected marginal utility of consumption (which is your desire to consume in the first place). When the equation appears in a macro model, “you” typically means “the entire economy”.

This equation underlies every DSGE model you’ll ever see, and drives much of modern macro’s idea of how the economy works. So why is Eichenbaum, one of the deans of modern macro, pooh-poohing it?

Simple: Because it doesn’t fit the data. The thing is, we can measure people’s consumption, and we can measure interest rates. If we make an assumption about people’s preferences, we can just go see if the Euler Equation is right or not!

[Martin] Eichenbaum was kind enough to refer me to the literature that tries to compare the Euler Equation to the data. The classic paper is Hansen and Singleton (1982), which found little support for the equation. But Eichenbaum also pointed me to this 2006 paper by Canzoneri, Cumby, and Diba of Georgetown (published version here), which provides simpler but more damning evidence against the Euler Equation …

[T]he Euler Equation says that if interest rates are high, you put off consumption more. That makes sense, right? Money markets basically pay you not to consume today. The more they pay you, the more you should keep your money in the money market and wait to consume until tomorrow.

But what Canzoneri et al. show is that this is not how people behave. The times when interest rates are high are times when people tend to be consuming more, not less.

OK, but what about that little assumption that we know people’s preferences? What if we’ve simply put the wrong utility function into the Euler Equation? Could this explain why people consume more during times when interest rates are high?

Well, Canzoneri et al. try out other utility functions that have become popular in recent years. The most popular alternative is habit formation … But when Canzoneri et al. put in habit formation, they find that the Euler Equation still contradicts the data …

Canzoneri et al. experiment with other types of preferences, including the other most popular alternative … No matter what we assume that people want, their behavior is not consistent with the Euler Equation …

If this paper is right … then essentially all modern DSGE-type macro models currently in use are suspect. The consumption Euler Equation is an important part of nearly any such model, and if it’s just wrong, it’s hard to see how those models will work.

Noah Smith

In the standard neoclassical consumption model — underpinning Carlin’s and Soskice’s microfounded macroeconomic modeling — people are portrayed as treating time as a dichotomous phenomenon (today and the future) when contemplating decisions and acting. How much should one consume today and how much in the future? Facing an intertemporal budget constraint of the form

ct + cf/(1+r) = ft + yt + yf/(1+r),

where ct is consumption today, cf is consumption in the future, ft is holdings of financial assets today, yt is labour incomes today, yf is labour incomes in the future, and r is the real interest rate, and having a lifetime utility function of the form

U = u(ct) + au(cf),

where a is the time discounting parameter, the representative agent (consumer) maximizes his utility when

u´(ct) = a(1+r)u´(cf).

This expression – the Euler equation – implies that, at the optimum, the representative agent (consumer) is indifferent between consuming one more unit today and consuming it tomorrow. Typically using a logarithmic functional form – u(c) = log c – which gives u´(c) = 1/c, the Euler equation can be rewritten as

1/ct = a(1+r)(1/cf),

or

cf/ct = a(1+r).

Importantly, this implies that, according to the neoclassical consumption model, changes in the (real) interest rate and the ratio between future and present consumption move in the same direction.
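The derivation above can be checked numerically. The sketch below — with purely illustrative parameter values for a, r and lifetime resources W that are not taken from the text — maximizes the two-period log-utility function over the budget constraint by brute-force grid search and confirms that the optimum satisfies cf/ct = a(1+r):

```python
import numpy as np

# Illustrative (hypothetical) parameter values: discount factor a,
# real interest rate r, lifetime resources W = ft + yt + yf/(1+r).
a, r, W = 0.96, 0.04, 100.0

# Numerically maximize U = log(ct) + a*log(cf) subject to ct + cf/(1+r) = W.
ct_grid = np.linspace(0.01, W - 0.01, 100_000)
cf_grid = (W - ct_grid) * (1 + r)          # future consumption implied by the budget
U = np.log(ct_grid) + a * np.log(cf_grid)  # lifetime utility at each grid point
ct_star = ct_grid[np.argmax(U)]
cf_star = (W - ct_star) * (1 + r)

# The Euler equation predicts cf/ct = a*(1+r) at the optimum.
print(cf_star / ct_star, a * (1 + r))
```

The closed-form solution is ct = W/(1+a), so the grid search should land on ct ≈ 51.02 and a consumption-growth ratio of a(1+r) = 0.9984, matching the derivation.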

So far, so good. But how about the real world? Is the neoclassical consumption model as described here in tune with the empirical facts? Not at all — the data and the model are as a rule inconsistent!

In the Euler equation we only have one interest rate, equated to the money market rate as set by the central bank. The crux is that — given almost any specification of the utility function — the interest rate implied by the Euler equation and the observed money market rate are often found to be strongly negatively correlated in the empirical literature:

In this paper, we use U.S. data to calculate the interest rate implied by the Euler equation, and we compare this Euler equation rate with a money market rate. We find the behavior of the money market rate differs significantly from the implied Euler equation rate. This poses a fundamental challenge for models that equate the two rates.

The fact that the two interest rate series do not coincide – and that the spread between the Euler equation rate and the money market rate is generally positive – comes as no surprise; these anomalies have been well documented in the literature on the “equity premium puzzle” and the “risk free rate puzzle.” And the failure of consumption Euler equation models should come as no surprise; there is a sizable literature that tries to fit Euler equations, and generally finds that the data on returns and aggregate consumption are not consistent with the model.

If the spread between the two rates were simply a constant, or a constant plus a little statistical noise, then the problem might not be thought to be very serious. The purpose of this paper is to document something more fundamental – and more problematic – in the relationship between the Euler equation rate and observed money market rates … We compute the implied Euler equation rates for a number of specifications of preferences and find that they are strongly negatively correlated with money market rates …

Our results suggest that the problem is fundamental: alternative specifications of preferences can eliminate the excessive volatility, but they yield an Euler equation rate that is strongly negatively correlated with the money market rate.

Matthew Canzoneri, Robert Cumby and Behzad Diba
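The computation Canzoneri et al. perform is straightforward to sketch. With log utility, inverting the Euler equation for the interest rate gives 1 + r_implied = c_{t+1}/(a·c_t). The snippet below runs that inversion on synthetic series — randomly generated stand-ins for aggregate consumption and the money market rate, not the U.S. data the paper uses — so it illustrates the mechanics of the comparison only, not the paper’s negative-correlation finding:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.98  # illustrative discount factor
T = 200

# Synthetic stand-ins for quarterly data (Canzoneri et al. use actual
# U.S. consumption and money market series instead).
money_rate = 0.01 + 0.005 * np.sin(np.arange(T) / 8) + rng.normal(0, 0.001, T)
c = 100 * np.cumprod(1 + rng.normal(0.005, 0.01, T))  # consumption path

# With log utility, u'(c) = 1/c, so the Euler equation
# u'(c_t) = a*(1+r)*u'(c_{t+1}) inverts to 1 + r_implied = c_{t+1}/(a*c_t).
implied_rate = c[1:] / (a * c[:-1]) - 1

# Compare the implied Euler rate with the money market rate, as the
# paper does; on U.S. data they find a strong negative correlation.
corr = np.corrcoef(implied_rate, money_rate[:-1])[0, 1]
print(round(corr, 3))
```

Note that the implied rate inherits all the volatility of consumption growth scaled by 1/a, which is one reason the two series diverge so sharply in the data.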

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified.

Some of the standard assumptions made in neoclassical economic theory – on rationality, information handling and types of uncertainty – are not possible to make more realistic by “de-idealization” or “successive approximations” without altering the theory and its models fundamentally.

If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change from one situation to another when we export them from our models to our target systems, then they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target systems.

No matter how many convoluted refinements of concepts are made in the model, if the “successive approximations” do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

From this methodological perspective yours truly has to conclude that Carlin’s and Soskice’s microfounded macroeconomic model is a rather unimpressive attempt at legitimizing the use of fictitious idealizations — such as Euler equations — for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.

As May Brodbeck once had it:

Model ships appear frequently in bottles; model boys in heaven only.

Upside down (private)

14 October, 2015 at 10:34 | Posted in Varia | Comments Off on Upside down (private)

After almost forty years in Lund, yours truly last year returned to the town where he was born and bred — Malmö. Taking a stroll in the neighbourhood with the dog further convinced me returning was a good decision …

Representative agent models — macroeconomic foundations made of sand

13 October, 2015 at 23:47 | Posted in Economics | 1 Comment

Representative-agent models suffer from an inherent, and, in my view, fatal, flaw: they can’t explain any real macroeconomic phenomenon, because a macroeconomic phenomenon has to encompass something more than the decision of a single agent, even an omniscient central planner. At best, the representative agent is just a device for solving an otherwise intractable general-equilibrium model, which is how I think Lucas originally justified the assumption.

Yet just because a general-equilibrium model can be formulated so that it can be solved as the solution of an optimizing agent does not explain the economic mechanism or process that generates the solution. The mathematical solution of a model does not necessarily provide any insight into the adjustment process or mechanism by which the solution actually is, or could be, achieved in the real world …

Here’s an example of what I am talking about. Consider a traffic-flow model explaining how congestion affects vehicle speed and the flow of traffic. It seems obvious that traffic congestion is caused by interactions between the different vehicles traversing a thoroughfare, just as it seems obvious that market exchange arises as the result of interactions between the different agents seeking to advance their own interests. OK, can you imagine building a useful traffic-flow model based on solving for the optimal plan of a representative vehicle?

I don’t think so. Once you frame the model in terms of a representative vehicle, you have abstracted from the phenomenon to be explained. The entire exercise would be pointless – unless, that is, you assumed that interactions between vehicles are so minimal that they can be ignored. But then why would you be interested in congestion effects? If you want to claim that your model has any relevance to the effect of congestion on traffic flow, you can’t base the claim on an assumption that there is no congestion.

David Glasner
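Glasner’s thought experiment can be made concrete with a toy model. The sketch below — my own stylized illustration, not a model from his post — caps each car’s speed by its headway on a ring road, so average speed falls as density rises; a “representative vehicle” optimizing on an otherwise empty road would always drive at the speed limit, assuming away the very congestion the model is supposed to explain:

```python
# Toy ring-road model (illustrative only): n_cars equally spaced on a
# circular road; each car's speed is capped by the gap to the car ahead,
# a crude stand-in for congestion effects.
def average_speed(n_cars, road_length=1000.0, v_max=30.0):
    gap = road_length / n_cars   # headway between equally spaced cars
    return min(v_max, gap)       # speed limited by interaction with the car ahead

# Interacting-vehicles model: average speed falls as density rises.
for n in (10, 50, 200):
    print(n, average_speed(n))   # 30.0, then 20.0, then 5.0

# By contrast, a "representative vehicle" solving its own optimization
# problem on an empty road drives at v_max regardless of n_cars -- the
# congestion phenomenon has been abstracted away, which is Glasner's point.
```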

Instead of just methodologically sleepwalking into their models, modern followers of the Lucasian microfoundational program ought to do some reflection and at least try to come up with a sound methodological justification for their position.  Just looking the other way won’t do. Writes Kevin Hoover:

The representative-agent program elevates the claims of microeconomics in some version or other to the utmost importance, while at the same time not acknowledging that the very microeconomic theory it privileges undermines, in the guise of the Sonnenschein-Debreu-Mantel theorem, the likelihood that the utility function of the representative agent will be any direct analogue of a plausible utility function for an individual agent … The new classicals treat [the difficulties posed by aggregation] as a non-issue, showing no appreciation of the theoretical work on aggregation and apparently unaware that earlier uses of the representative-agent model had achieved consistency with theory only at the price of empirical relevance.

Where ‘New Keynesian’ and New Classical economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they — as argued in chapter 4 of my On the use and misuse of theories and models in economics — have to turn a blind eye to the emergent properties that characterize all open social and economic systems. The interaction between animal spirits, trust, confidence, institutions, etc., cannot be deduced from or reduced to a question answerable at the individual level. Macroeconomic structures and phenomena have to be analyzed on their own terms as well.

And then, of course, there is Sonnenschein-Mantel-Debreu!

So what? Why should we care about Sonnenschein-Mantel-Debreu?

Because  Sonnenschein-Mantel-Debreu ultimately explains why “modern neoclassical economics” — New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” — with its microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis it is thought to provide a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gérard Debreu (1974) unequivocally showed that there are no conditions under which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.
