Econometric forecasting — no icing on the economist’s cake

31 Oct, 2013 at 17:31 | Posted in Statistics & Econometrics | Comments Off on Econometric forecasting — no icing on the economist’s cake

It is clearly the case that experienced modellers could easily come up with significantly different models based on the same set of data thus undermining claims to researcher-independent objectivity. This has been demonstrated empirically by Magnus and Morgan (1999) who conducted an experiment in which an apprentice had to try to replicate the analysis of a dataset that might have been carried out by three different experts (Leamer, Sims, and Hendry) following their published guidance. In all cases the results were different from each other, and different from that which would have been produced by the expert, thus demonstrating the importance of tacit knowledge in statistical analysis.

Magnus and Morgan conducted a further experiment which involved eight expert teams, from different universities, analysing the same sets of data each using their own particular methodology. The data concerned the demand for food in the US and in the Netherlands and was based on a classic study by Tobin (1950) augmented with more recent data. The teams were asked to estimate the income elasticity of food demand and to forecast per capita food consumption. In terms of elasticities, the lowest estimates were around 0.38 whilst the highest were around 0.74 – clearly vastly different especially when remembering that these were based on the same sets of data. The forecasts were perhaps even more extreme – from a base of around 4000 in 1989 the lowest forecast for the year 2000 was 4130 while the highest was nearly 18000!

John Mingers
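The specification-dependence that drives these divergent results is easy to illustrate. Below is a minimal sketch (Python, on simulated data; the two specifications, the variable names and all parameter values are my own illustrative assumptions, not Magnus and Morgan's actual models) of how two defensible regressions run on the very same dataset deliver quite different 'income elasticities' of food demand.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated "data": log income and log price are correlated,
# log food demand depends on both (true income elasticity = 0.5).
log_income = rng.normal(10.0, 0.5, n)
log_price = 0.6 * (log_income - 10.0) + rng.normal(0.0, 0.3, n)
log_demand = 1.0 + 0.5 * log_income - 0.4 * log_price + rng.normal(0.0, 0.2, n)

def ols(y, regressors):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Specification 1: demand on income only (price omitted).
b1 = ols(log_demand, [log_income])
# Specification 2: demand on income and price.
b2 = ols(log_demand, [log_income, log_price])

print(f"elasticity, income-only model : {b1[1]:.2f}")  # biased by the omitted price term
print(f"elasticity, income+price model: {b2[1]:.2f}")  # close to the true 0.5
```

Same data, two textbook-legitimate specifications, two rather different elasticities: the Magnus and Morgan result in miniature.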

The truth about increasing inequality

31 Oct, 2013 at 09:57 | Posted in Economics, Politics & Society | 1 Comment


Inequality continues to grow all over the world — so don’t even for a second think that this is only an American problem!

In case you think — like e. g. Paul Krugman — that it’s different in my own country — Sweden — you should take a look at some new data from Statistics Sweden and this video.

The Gini coefficient is a measure of inequality (where a higher number signifies greater inequality) and for Sweden we have this for the disposable income distribution:
[Figure: Gini coefficient for the Swedish disposable income distribution, 1980–2011. Source: SCB and own calculations]
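For readers who want to compute the measure themselves, here is a minimal sketch of the standard Gini formula applied to a vector of disposable incomes (the income figures are made up for illustration; the SCB series behind the chart is not reproduced here).

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of a 1-D array of non-negative incomes.

    0 means perfect equality; values closer to 1 mean greater inequality.
    Uses the standard formula based on the ranked (sorted) incomes.
    """
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * x)) / (n * np.sum(x)) - (n + 1.0) / n

# Illustrative only: a fairly equal and a more unequal income distribution.
equal_ish = [200, 220, 240, 260, 280, 300]
unequal   = [100, 120, 140, 160, 300, 1200]
print(f"Gini, equal-ish distribution: {gini(equal_ish):.3f}")
print(f"Gini, unequal distribution  : {gini(unequal):.3f}")
```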

What we see happening in the US and Sweden is deeply disturbing. The rising inequality is outrageous, not least since it has to a large extent to do with income and wealth increasingly being concentrated in the hands of a very small and privileged elite.

Societies where we allow the inequality of incomes and wealth to increase without bounds sooner or later implode. The cement that keeps us together erodes, and in the end we are left with nothing but people dipped in the ice-cold water of egoism and greed. It’s high time to put an end to this, the worst Juggernaut of our time!

On ‘good’ and ‘bad’ economics

30 Oct, 2013 at 16:22 | Posted in Economics, Theory of Science & Methodology | 3 Comments

Chris Dillow — of Stumbling and Mumbling — isn’t too happy about Aditya Chakrabortty’s attack on mainstream economics.

In defence of mainstream economics, Dillow maintains that “forecasting isn’t part of proper economics at all, so a forecasting error tells us nothing about the merits or not of economics,” and approvingly quotes a fellow economist who says that “the fact that most economists failed to predict the crash is actually a vindication of mainstream economics, which says that such things should be unpredictable.” Above all, he maintains that “Aditya is missing a bigger point. The division that matters is not so much between heterodox and mainstream economics, but between good economics and bad.”

Good economics in the eyes of Dillow isn’t about “bigthink and meta-theorizing,” but about “careful consideration of the facts” and asking itself “which model (or better, which mechanism or which theory) fits the problem at hand?”

Well, wouldn’t it be great if mainstream neoclassical economics were like that! The problem, of course, is that it’s far, far away from it.

Let me elaborate a little on why that is.

One obvious shortcoming of Dillow’s argumentation is that it fails to problematize the mediation between economic models/experiments and the real world.

Mary Morgan — in her The World in the Model — characterizes the modelling tradition of economics as one concerned with “thin men acting in small worlds” and writes:

Strangely perhaps, the most obvious element in the inference gap for models … lies in the validity of any inference between two such different media – forward from the real world to the artificial world of the mathematical model and back again from the model experiment to the real material of the economic world. The model is at most a parallel world. The parallel quality does not seem to bother economists. But materials do matter: it matters that economic models are only representations of things in the economy, not the things themselves.

Most models in science are representations of something else. And all empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore experiment or build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way: the falsehood or lack of realism has to be qualified.

Some of the standard assumptions made in neoclassical economic theory – on rationality, information handling and types of uncertainty – are not possible to make more realistic by “de-idealization” or “successive approximations” without altering the theory and its models fundamentally.

If we cannot show that the mechanisms or causes we isolate and handle in our models (or experiments) are stable, in the sense that when we export them from our models to our target systems they do not change from one situation to another, then they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction (which, contrary to Dillow, I think neoclassical economists usually consider to be of prime importance) of our real-world target system.

The obvious ontological shortcoming of the basically epistemic approach of mainstream neoclassical economics is tout court that it does not guarantee that the (eventual) correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the “successive approximations” do not result in models similar to reality in the appropriate respects (such as structure, isomorphism etc.), the exercise does not succeed in bridging to the world.

Constructing mainstream neoclassical economic models — such as microfounded macroeconomic models — is a rather unimpressive attempt at legitimizing the use of fictitious idealizations, for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.

Many of the model assumptions standardly made by neoclassical economics are restrictive rather than harmless, and cannot, a fortiori, in any sensible sense be considered approximations at all. Or as May Brodbeck had it:

Model ships appear frequently in bottles; model boys in heaven only.

Now, where does this leave us? Can mainstream neoclassical economics still be ‘good’ in a Dillowian sense? I guess so. But it will surely be much more difficult than starting out from a realist and relevant heterodox economics.

Mainstream economics in denial

30 Oct, 2013 at 14:41 | Posted in Economics | Comments Off on Mainstream economics in denial

We’d gathered at Downing College, Cambridge, to discuss the economic crisis, although the quotidian misery of that topic seemed a world away from the honeyed quads and endowment plush of this place.

Equally incongruous were the speakers. The Cambridge economist Victoria Bateman looked as if saturated fat wouldn’t melt in her mouth, yet demolished her colleagues. They’d been stupidly cocky before the crash – remember the 2003 boast from Nobel prizewinner Robert Lucas that the “central problem of depression-prevention has been solved”? – and had learned no lessons since. Yet they remained the seers of choice for prime ministers and presidents. She ended: “If you want to hang anyone for the crisis, hang me – and my fellow economists.”


What followed was angry agreement. On the night before the latest growth figures, no one in this 100-strong hall used the word “recovery” unless it was to be sarcastic. Instead, audience members – middle-aged, smartly dressed and doubtless sizably mortgaged – took it in turn to attack bankers, politicians and, yes, economists. They’d created the mess everyone else was paying for, yet they’d suffered no retribution.

Yet look around at most of the major economics degree courses and neoclassical economics – that theory that treats humans as walking calculators, all-knowing and always out for themselves, and markets as inevitably returning to stability – remains in charge. Why? In a word: denial. The high priests of economics refuse to recognise the world has changed.

In his new book, Never Let a Serious Crisis Go to Waste, the US economist Philip Mirowski recounts how a colleague at his university was asked by students in spring 2009 to talk about the crisis. The world was apparently collapsing around them, and what better forum to discuss this in than a macroeconomics class. The response? “The students were curtly informed that it wasn’t on the syllabus, and there was nothing about it in the assigned textbook, and the instructor therefore did not wish to diverge from the set lesson plan. And he didn’t.”

Something similar is going on at Manchester University, where … economics undergraduates are petitioning their tutors for a syllabus that acknowledges there are other ways to view the world than as a series of algebraic problem sets.

Aditya Chakrabortty/The Guardian

[h/t Jan Milch]

Can experiments save economics as science?

29 Oct, 2013 at 21:33 | Posted in Theory of Science & Methodology | 3 Comments

“There’s nothing so radical as empiricism,” Stumbling and Mumbling (Chris Dillow) argued in a post the other day. The main drift of Dillow’s argumentation is that there’s really no need for heterodox theoretical critiques of mainstream neoclassical economics, but rather challenges to neoclassical economics “buttressed by good empirical work.” Out with “big-think theorizing about the nature of rationality” and in with “ordinary empiricism – looking at the data and running experiments.”

Although — as always — thought-provoking, I think Dillow here offers a view of empiricism and experiments that is too simplistic, and for several reasons — but mostly because the kind of experimental empiricism it favours is largely untenable. Let me elaborate a little.

Experiments are actually very similar to theoretical models in many ways. They have, for example, the same basic problem of being built on rather artificial conditions, and they face the same trade-off between internal and external validity: the more artificial the conditions, the higher the internal validity, but also the lower the external validity. The more we rig experiments/models to avoid the “confounding factors”, the less the conditions are reminiscent of the real “target system”. The nodal issue is how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. I doubt the generalizability of both research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity/stability/invariance does not give us warranted export licenses to the “real” societies or economies.

If we see experiments as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central.

Assume that you have examined how the work performance of Swedish workers A is affected by B (“treatment”). How can we extrapolate/generalize to new samples outside the original population (e.g. to the UK)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original one? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily just introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is, unfortunately, exactly this that we see when we look at neoclassical economists’ models/experiments.
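A small simulation makes the point concrete. In the sketch below (Python; the distributions, parameter values and the 'Swedish'/'UK' labels simply follow the stylized example above and are purely illustrative assumptions) the conditional relation estimated in the original population does not hold in the target population, so an extrapolative prediction of the treatment effect goes badly wrong.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

def simulate(effect, baseline):
    """Work performance A under randomized treatment B in one population."""
    B = rng.integers(0, 2, n)                       # treatment indicator
    A = baseline + effect * B + rng.normal(0, 1, n)
    return A, B

# "Original" (Swedish) population: treatment raises performance by 1.0.
A_se, B_se = simulate(effect=1.0, baseline=5.0)
# "Target" (UK) population: same treatment, but the mechanism differs (effect 0.2).
A_uk, B_uk = simulate(effect=0.2, baseline=4.0)

est_se = A_se[B_se == 1].mean() - A_se[B_se == 0].mean()
est_uk = A_uk[B_uk == 1].mean() - A_uk[B_uk == 0].mean()

print(f"estimated effect in original population: {est_se:.2f}")
print(f"actual effect in target population     : {est_uk:.2f}")
# Exporting the first estimate to the second population, i.e. assuming
# P(A|B) = P'(A|B), would overstate the effect roughly five-fold.
```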

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in Sweden but not in the UK? Or that B “works” in a backward agrarian society, but not in a post-modern service society? That B “worked” in the field study conducted in 2005 but not in 2013? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would not be a critical problem. But the really interesting inferences are those we try to make from specific labs/experiments to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Just like traditional neoclassical modelling, randomized experiments are basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence about the real target system we happen to live in.

Ideally controlled experiments (still the benchmark even for natural and quasi-experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to be shown to come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Many advocates of randomization and experiments want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not just in (ideally controlled) experiments. Conclusions can only be as certain as their premises – and that also goes for methods based on randomized experiments.
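How quickly those deductive guarantees evaporate once the assumptions are relaxed can be seen in a small simulation (Python; the latent 'motivation' factor, the compliance rule and all parameter values are my own illustrative assumptions). Assignment is perfectly randomized, yet a latent factor that drives both take-up and outcomes makes the naive as-treated comparison misleading, and when the treatment effect varies with that factor even the instrumental-variable estimate only speaks for the compliers. The Ravallion passage quoted below makes the same points in words.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

Z = rng.integers(0, 2, n)            # randomized assignment
U = rng.normal(0, 1, n)              # latent factor (unobserved by the evaluator)
# One-sided noncompliance: only assigned units with high U take up the treatment.
D = ((Z == 1) & (U > 0)).astype(int)
# Treatment effect is heterogeneous: larger for high-U individuals.
tau = 1.0 + 1.0 * U
Y = 2.0 + tau * D + 0.8 * U + rng.normal(0, 1, n)

as_treated = Y[D == 1].mean() - Y[D == 0].mean()      # confounded by U
itt = Y[Z == 1].mean() - Y[Z == 0].mean()             # intention-to-treat contrast
iv = itt / (D[Z == 1].mean() - D[Z == 0].mean())      # Wald / LATE estimate

print(f"average effect over everyone (true ATE) : {tau.mean():.2f}")
print(f"naive as-treated comparison             : {as_treated:.2f}")
print(f"IV estimate (effect for compliers only) : {iv:.2f}")
```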

The claimed strength of a social experiment, relatively to non-experimental methods, is that few assumptions are required to establish its internal validity in identifying a project’s impact. The identification is not assumption-free. People are (typically and thankfully) free agents who make purposive choices about whether or not they should take up an assigned intervention. As is well understood by the randomistas, one needs to correct for such selective compliance … The randomized assignment is assumed to only affect outcomes through treatment status (the “exclusion restriction”).

There is another, more troubling, assumption just under the surface. Inferences are muddied by the presence of some latent factor—unobserved by the evaluator but known to the participant—that influences the individual-specific impact of the program in question … Then the standard instrumental variable method for identifying [the average treatment effect on the treated] is no longer valid, even when the instrumental variable is a randomized assignment … Most social experiments in practice make the implicit and implausible assumption that the program has the same impact for everyone.

While internal validity … is the claimed strength of an experiment, its acknowledged weakness is external validity—the ability to learn from an evaluation about how the specific intervention will work in other settings and at larger scales. The randomistas see themselves as the guys with the lab coats—the scientists—while other types, the “policy analysts,” worry about things like external validity. Yet it is hard to argue that external validity is less important than internal validity when trying to enhance development effectiveness against poverty; nor is external validity any less legitimate as a topic for scientific inquiry.

Martin Ravallion

Causal inference from observational data

29 Oct, 2013 at 17:44 | Posted in Statistics & Econometrics | 2 Comments

Distinguished Professor of social psychology Richard E. Nisbett takes on the idea of intelligence and IQ testing in his Intelligence and How to Get It (Norton 2011). He also has some interesting thoughts on multiple-regression analysis and writes:

Researchers often determine the individual’s contemporary IQ or IQ earlier in life, socioeconomic status of the family of origin, living circumstances when the individual was a child, number of siblings, whether the family had a library card, educational attainment of the individual, and other variables, and put all of them into a multiple-regression equation predicting adult socioeconomic status or income or social pathology or whatever. Researchers then report the magnitude of the contribution of each of the variables in the regression equation, net of all the others (that is, holding constant all the others). It always turns out that IQ, net of all the other variables, is important to outcomes. But … the independent variables pose a tangle of causality – with some causing others in goodness-knows-what ways and some being caused by unknown variables that have not even been measured. Higher socioeconomic status of parents is related to educational attainment of the child, but higher-socioeconomic-status parents have higher IQs, and this affects both the genes that the child has and the emphasis that the parents are likely to place on education and the quality of the parenting with respect to encouragement of intellectual skills and so on. So statements such as “IQ accounts for X percent of the variation in occupational attainment” are built on the shakiest of statistical foundations. What nature hath joined together, multiple regressions cannot put asunder.

Now, I think this is right as far as it goes, although it would certainly have strengthened Nisbett’s argumentation if he had elaborated more on the methodological questions around causality, or at least given some mathematical-statistical-econometric references. Unfortunately, his alternative approach is not more convincing than regression analysis. Like so many other contemporary social scientists, Nisbett seems to think that randomization may solve the empirical problem. By randomizing, we get different “populations” that are homogeneous with regard to all variables except the one we think is a genuine cause. In that way we are supposed to be able to manage without actually knowing what all these other factors are.

That is attainable if you succeed in performing an ideal randomization with different treatment groups and control groups. But it presupposes that you really have been able to establish – and not just assume – that all other causes but the putative one have the same probability distribution in the treatment and control groups, and that the probability of assignment to treatment or control groups is independent of all other possible causal variables.

Unfortunately, real experiments and real randomizations seldom or never achieve this. So, yes, we may do without knowing all causes, but it takes ideal experiments and ideal randomizations to do that, not real ones.

As I have argued, that means that in practice we do have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge we can’t get new knowledge – and no causes in, no causes out.
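The 'no causes in, no causes out' point is easy to illustrate in code. In the sketch below (Python; the variables and effect sizes are invented for illustration) an unobserved family-background factor drives both measured IQ and later income. A regression of income on IQ then reports a sizeable 'effect' of IQ even though, by construction, IQ has no direct effect at all; nothing in the data itself reveals this.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Unobserved background factor (genes, parenting, neighbourhood, ...).
U = rng.normal(0, 1, n)
# IQ is partly driven by U; income is driven by U but NOT directly by IQ.
iq = 100 + 15 * (0.7 * U + 0.7 * rng.normal(0, 1, n))
income = 30_000 + 10_000 * U + 5_000 * rng.normal(0, 1, n)

# Regress income on IQ (the evaluator does not observe U).
X = np.column_stack([np.ones(n), iq])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)

print(f"estimated 'effect' of one IQ point on income: {beta[1]:.0f}")
# The true direct effect is 0 by construction; the regression attributes
# the influence of the unobserved background factor U to IQ instead.
```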

On the issue of the shortcomings of multiple regression analysis, no one sums it up better than David Freedman:

If the assumptions of a model are not derived from theory, and if predictions are not tested against reality, then deductions from the model must be quite shaky. However, without the model, the data cannot be used to answer the research question …

In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …

Regression models often seem to be used to compensate for problems in measurement, data collection, and study design. By the time the models are deployed, the scientific position is nearly hopeless. Reliance on models in such cases is Panglossian …

Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …

Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.

Robert Shiller agrees to disagree with Fama

28 Oct, 2013 at 19:23 | Posted in Economics | 33 Comments

Professor Fama is the father of the modern efficient-markets theory, which says financial prices efficiently incorporate all available information and are in that sense perfect. In contrast, I have argued that the theory makes little sense, except in fairly trivial ways. Of course, prices reflect available information. But they are far from perfect. Along with like-minded colleagues and former students, I emphasize the enormous role played in markets by human error, as documented in a now-established literature called behavioral finance …


Actually, I do not completely oppose the efficient-markets theory. I have been calling it a half-truth. If the theory said nothing more than that it is unlikely that the average amateur investor can get rich quickly by trading in the markets based on publicly available information, the theory would be spot on. I personally believe this, and in my own investing I have avoided trading too much, and have a high level of skepticism about investing tips.

But the theory is commonly thought, at least by enthusiasts, to imply much more. Notably, it has been argued that regular movements in the markets reflect a wisdom that transcends the best understanding of even the top professionals, and that it is hopeless for an ordinary mortal, even with a lifetime of work and preparation, to question pricing. Market prices are esteemed as if they were oracles.

This view grew to dominate much professional thinking in economics, and its implications are dangerous. It is a substantial reason for the economic crisis we have been stuck in for the past five years, for it led authorities in the United States and elsewhere to be complacent about asset mispricing, about growing leverage in financial markets and about the instability of the global system. In fact, markets are not perfect, and really need regulation, much more than Professor Fama’s theories would allow …

We disagree on a number of important points, but there is nothing wrong with our sharing the prize. In fact, I am happy to share it with my co-recipients, even if we sometimes seem to come from different planets.

 Robert Shiller

Fama and Shiller — an odd couple indeed

28 Oct, 2013 at 12:27 | Posted in Economics | 1 Comment

I would love to be in the audience watching the body language at this year’s “Nobel” ceremony for economics. Robert Shiller, who is far too polite a person to make it obvious, will nonetheless at least fidget as he listens to Eugene Fama’s speech, since Fama continues to dispute that bubbles in asset prices can even be defined. Shiller, in contrast, first came to public prominence with his warnings in the early 2000s that the stock and housing markets in the States were displaying signs of “irrational exuberance” …

How can two such diametrically opposed views receive the Nobel Prize in one year? The equivalent in physics would be to award the prize to one research team that proved that the Higgs Boson existed, and another that proved it didn’t …

Shiller is a worthy recipient for a number of reasons. First and foremost, he has emphasised the need for long term empirical data in economics, and he has provided it as well  … He also maintains (though he didn’t originate) a stock market index which compares the price of stocks now to their accumulated earnings over the previous decade …

These two contributions in effect make him the Tycho Brahe of economics, and for those alone, I support the award to him. Economics generally behaves as a pre-Galilean discipline in which observation comes a very poor second to theoretical beliefs. Shiller has instead always emphasized that the data must be considered—especially when it contradicts beliefs, which is so often the case in economics.

Fama has received the award for proving “that stock prices are extremely difficult to predict in the short run, and that new information is very quickly incorporated into prices”.

Poppycock. Years before Fama promoted the “Efficient Markets Hypothesis” as an equilibrium-fixated explanation for the obvious fact that it’s “extremely difficult to predict in the short run”, Benoit Mandelbrot had developed the concept of fractals which gave a far-from-equilibrium explanation for precisely the same phenomenon. Economics ignored Mandelbrot’s work completely …

On the intimidation front, Fama’s outspoken championing of an equilibrium approach to modeling asset markets was the exact opposite of Shiller’s brave resistance to the groupthink of American economics. He has been an enforcer of conformity to mainstream thought who has stuck with his equilibrium beliefs despite evidence to the contrary.

Shiller, on the other hand, has been willing to accept that the messiness of the real world is indeed reality, even if it conflicts with the mainstream economics preference for believing that everything happens in equilibrium.

The Odd Couple indeed. Give me Robert Shiller’s messy Oscar over Fama’s anally retentive Felix any day.

Steve Keen
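The index Keen refers to is usually known as the cyclically adjusted price-earnings (CAPE) ratio: today's price divided by the average of inflation-adjusted earnings over the previous ten years. Here is a minimal sketch of the computation (Python; the annual price and earnings series below are made-up numbers, not Shiller's actual data).

```python
import numpy as np

def cape(prices, real_earnings, window=10):
    """Cyclically adjusted P/E: price divided by the trailing mean of
    inflation-adjusted earnings over `window` periods (annual data assumed)."""
    prices = np.asarray(prices, dtype=float)
    real_earnings = np.asarray(real_earnings, dtype=float)
    trailing_mean = np.convolve(real_earnings, np.ones(window) / window, mode="valid")
    # The first ratio becomes available once `window` years of earnings exist.
    return prices[window - 1:] / trailing_mean

# Illustrative annual series (12 years of invented index prices and real earnings).
prices = [100, 105, 111, 118, 130, 145, 160, 150, 140, 155, 170, 190]
earnings = [6.0, 6.2, 6.5, 6.8, 7.0, 7.4, 7.8, 7.0, 6.5, 7.2, 7.6, 8.0]

for year, ratio in zip(range(9, 12), cape(prices, earnings)):
    print(f"year index {year}: CAPE = {ratio:.1f}")
```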

Simon Wren-Lewis on rational expectations — so wrong, so wrong

27 Oct, 2013 at 18:16 | Posted in Economics | 2 Comments

Oxford professor Simon Wren-Lewis tries — again — to defend the rational expectations hypothesis:

To put it simply, the media help cause changes in public opinion, rather than simply reflect that opinion. Yet, if you have a certain caricature of what a modern macroeconomist believes in your head, this is a strange argument for one to make. That caricature is that we all believe in rational expectations, where agents use all readily available information in an efficient way to make decisions …

Some who read my posts will also know that I am a fan of rational expectations. I tend to get irritated with those (e.g. some heterodox economists) that pan the idea by talking about superhuman agents that know everything. To engage constructively with how to model expectations, you have to talk about practical alternatives. If we want something simple (and, in particular, if we do not want to complicate by borrowing from the extensive recent literature on learning), we often seem to have to choose between assuming rationality or something naive, like adaptive expectations. I have argued that, for the kind of macroeconomic issues that I am interested in, rational expectations provides a more realistic starting point, although that should never stop us analysing the consequences of expectations errors.

It’s easy — and I think this also goes for those who are not “heterodox economists” — not to be exactly impressed by this kind of argumentation. If Wren-Lewis and other macroeconomic modellers don’t “believe in rational expectations,” then why use such a preposterous assumption? The only “caricature” here is the view of science that the advocates of rational expectations give.

Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As yours truly tried to show in a paper in Real-World Economics Review last year — Rational expectations — a fallacious foundation for macroeconomics in a non-ergodic world — there is really no support for this conviction at all. On the contrary. If we want to have anything of interest to say about real economies, financial crises and the decisions and choices real people make, it is high time to place macroeconomic models built on representative actors and rational expectations microfoundations where they belong — in the dustbin of history.

For if this microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model-building is little more than hand-waving that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

“To engage constructively with how to model expectations, you have to talk about practical alternatives,” writes Wren-Lewis. And of course there are alternatives to neoclassical general equilibrium microfoundations. Behavioural economics and Roman Frydman and Michael Goldberg’s “imperfect knowledge” economics are two noteworthy examples that easily come to mind. In one of their recent books on rational expectations, Frydman and Goldberg write:

The belief in the scientific stature of fully predetermined models, and in the adequacy of the Rational Expectations Hypothesis to portray how rational individuals think about the future, extends well beyond asset markets. Some economists go as far as to argue that the logical consistency that obtains when this hypothesis is imposed in fully predetermined models is a precondition of the ability of economic analysis to portray rationality and truth.

For example, in a well-known article published in The New York Times Magazine in September 2009, Paul Krugman (2009, p. 36) argued that Chicago-school free-market theorists “mistook beauty . . . for truth.” One of the leading Chicago economists, John Cochrane (2009, p. 4), responded that “logical consistency and plausible foundations are indeed ‘beautiful’ but to me they are also basic preconditions for ‘truth.’” Of course, what Cochrane meant by plausible foundations were fully predetermined Rational Expectations models. But, given the fundamental flaws of fully predetermined models, focusing on their logical consistency or inconsistency, let alone that of the Rational Expectations Hypothesis itself, can hardly be considered relevant to a discussion of the basic preconditions for truth in economic analysis, whatever “truth” might mean.

There is an irony in the debate between Krugman and Cochrane. Although the New Keynesian and behavioral models, which Krugman favors, differ in terms of their specific assumptions, they are every bit as mechanical as those of the Chicago orthodoxy. Moreover, these approaches presume that the Rational Expectations Hypothesis provides the standard by which to define rationality and irrationality.

In fact, the Rational Expectations Hypothesis requires no assumptions about the intelligence of market participants whatsoever … Rather than imputing superhuman cognitive and computational abilities to individuals, the hypothesis presumes just the opposite: market participants forgo using whatever cognitive abilities they do have. The Rational Expectations Hypothesis supposes that individuals do not engage actively and creatively in revising the way they think about the future. Instead, they are presumed to adhere steadfastly to a single mechanical forecasting strategy at all times and in all circumstances. Thus, contrary to widespread belief, in the context of real-world markets, the Rational Expectations Hypothesis has no connection to how even minimally reasonable profit-seeking individuals forecast the future in real-world markets. When new relationships begin driving asset prices, they supposedly look the other way, and thus either abjure profit-seeking behavior altogether or forgo profit opportunities that are in plain sight.

Beyond Mechanical Markets

And in a more recent article the same authors write:

Contemporary economists’ reliance on mechanical rules to understand – and influence – economic outcomes extends to macroeconomic policy as well, and often draws on an authority, John Maynard Keynes, who would have rejected their approach. Keynes understood early on the fallacy of applying such mechanical rules. “We have involved ourselves in a colossal muddle,” he warned, “having blundered in the control of a delicate machine, the working of which we do not understand.”

In The General Theory of Employment, Interest, and Money, Keynes sought to provide the missing rationale for relying on expansionary fiscal policy to steer advanced capitalist economies out of the Great Depression. But, following World War II, his successors developed a much more ambitious agenda. Instead of pursuing measures to counter excessive fluctuations in economic activity, such as the deep contraction of the 1930’s, so-called stabilization policies focused on measures that aimed to maintain full employment. “New Keynesian” models underpinning these policies assumed that an economy’s “true” potential – and thus the so-called output gap that expansionary policy is supposed to fill to attain full employment – can be precisely measured.

But, to put it bluntly, the belief that an economist can fully specify in advance how aggregate outcomes – and thus the potential level of economic activity – unfold over time is bogus …

Roman Frydman & Michael Goldberg

The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place – instead of simply conjuring the problem away by assuming rational expectations and treating uncertainty as if it were possible to reduce it to stochastic risk. That is scientific cheating. And it has been going on for too long now.

Micro vs Macro

27 Oct, 2013 at 10:15 | Posted in Economics | 2 Comments

John Quiggin had an interesting post up yesterday on the microfoundations issue:

The basic problem is that standard neoclassical microeconomics is itself a macroeconomic theory in the sense that it’s derived from a general equilibrium model as a whole. The standard GE model takes full employment (in an appropriate technical sense) as given, and derives a whole series of fundamental results from this. Conversely, if the economy can exhibit sustained high unemployment, there must be something badly wrong with standard neoclassical microeconomics.

Yours truly totally agrees — there exist overwhelmingly strong reasons for being critical and doubtful about the microfoundations of macroeconomics — so let me elaborate on a couple of them.

Microfoundations today means more than anything else that you try to build macroeconomic models assuming “rational expectations” and hyperrational “representative actors” optimizing over time. Both are highly questionable assumptions.

The concept of rational expectations was first developed by John Muth (1961) and later applied to macroeconomics by Robert Lucas (1972). Those macroeconomic models building on rational expectations microfoundations that are used today among both New Classical and “New Keynesian” macroeconomists basically assume that people on average hold expectations that will be fulfilled. This makes the economist’s analysis enormously simplistic, since it means that the model used by the economist is the same as the one people use to make decisions and forecasts of the future.

Macroeconomic models building on rational expectations microfoundations assume that people, on average, have the same expectations. Someone like Keynes, on the other hand, would argue that people often have different expectations and information, which constitutes the basic rationale behind the macroeconomic need for coordination. This is rather swept under the rug by the extreme simple-mindedness of assuming rational expectations in representative-actor models, so much in vogue in New Classical and “New Keynesian” macroeconomics. But if all actors are alike, why do they transact? Who do they transact with? The very reason for markets and exchange seems to slip away with the sister assumptions of representative actors and rational expectations.

Macroeconomic models building on rational expectations microfoundations impute beliefs to the agents that are not based on any real informational considerations, but are simply stipulated to make the models mathematically-statistically tractable. Of course you can make assumptions based on tractability, but then you also have to take into account the necessary trade-off in terms of the ability to make relevant and valid statements about the intended target system. Mathematical tractability cannot be the ultimate arbiter in science when it comes to modeling real-world target systems. One could perhaps accept macroeconomic models building on rational expectations microfoundations if they had produced lots of verified predictions and good explanations. But they have done nothing of the kind. Therefore the burden of proof is on those macroeconomists who still want to use models built on these particular unreal assumptions.
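To make the contrast with a 'naive' alternative concrete, here is a minimal sketch (Python; the AR(1) inflation process and all parameter values are illustrative assumptions) comparing model-consistent ('rational') forecasts, which by construction coincide with the true conditional mean, with simple adaptive expectations that merely update on past errors. The point is not which rule wins inside this toy model (the rational forecaster is handed the true model for free), but that this is precisely the informational assumption being bought when one writes down rational expectations microfoundations.

```python
import numpy as np

rng = np.random.default_rng(4)
T, mu, rho, sigma = 500, 2.0, 0.8, 0.5

# True data-generating process: AR(1) inflation around a 2 percent mean.
pi = np.empty(T)
pi[0] = mu
for t in range(1, T):
    pi[t] = mu + rho * (pi[t - 1] - mu) + rng.normal(0, sigma)

# "Rational" expectations: the true conditional mean, i.e. the forecaster
# is assumed to know mu and rho (the economist's own model).
rational = mu + rho * (pi[:-1] - mu)

# Adaptive expectations: update the last forecast by a fraction of the last error.
lam = 0.3
adaptive = np.empty(T)
adaptive[0] = mu
for t in range(1, T):
    adaptive[t] = adaptive[t - 1] + lam * (pi[t - 1] - adaptive[t - 1])

mse_rational = np.mean((pi[1:] - rational) ** 2)
mse_adaptive = np.mean((pi[1:] - adaptive[1:]) ** 2)
print(f"mean squared forecast error, rational expectations: {mse_rational:.3f}")
print(f"mean squared forecast error, adaptive expectations: {mse_adaptive:.3f}")
```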

Read more …

No woman, no drive

26 Oct, 2013 at 22:05 | Posted in Politics & Society | Comments Off on No woman, no drive

 

Keynes Academy — an introduction to econometrics

26 Oct, 2013 at 18:43 | Posted in Statistics & Econometrics | Comments Off on Keynes Academy — an introduction to econometrics

 

Inside Job 2.0

26 Oct, 2013 at 16:40 | Posted in Economics, Politics & Society | 1 Comment

 

Peter Donnelly on how stats fool juries

25 Oct, 2013 at 21:46 | Posted in Statistics & Econometrics | 2 Comments

 

Jeffrey Rosenthal on probability and fraud

25 Oct, 2013 at 21:11 | Posted in Statistics & Econometrics | Comments Off on Jeffrey Rosenthal on probability and fraud

 
