The increasing ascendancy of real business cycle theories of various stripes, with their common view that the economy is best modeled as a floating Walrasian equilibrium, buffeted by productivity shocks, is indicative of the depths of the divisions separating academic macroeconomists …
If these theories are correct, they imply that the macroeconomics developed in the wake of the Keynesian Revolution is well confined to the ashbin of history. And they suggest that most of the work of contemporary macroeconomists is worth little more than that of those pursuing astrological science …
The appearance of Ed Prescott’ s stimulating paper, “Theory Ahead of Business Cycle Measurement,” affords an opportunity to assess the current state of real business cycle theory and to consider its prospects as a foundation for macroeconomic analysis …
My view is that business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies …
Prescott’s growth model is not an inconceivable representation of reality. But to claim that its parameters are securely tied down by growth and micro observations seems to me a gross overstatement. The image of a big loose tent flapping in the wind comes to mind …
In Prescott’s model, the central driving force behind cyclical fluctuations is technological shocks. The propagation mechanism is intertemporal substitution in employment. As I have argued so far, there is no independent evidence from any source for either of these phenomena …
Imagine an analyst confronting the market for ketchup. Suppose she or he decided to ignore data on the price of ketchup. This would considerably increase the analyst’s freedom in accounting for fluctuations in the quantity of ketchup purchased … It is difficult to believe that any explanation of fluctuations in ketchup sales that did not confront price data would be taken seriously, at least by hard-headed economists.
Yet Prescott offers an exercise in price-free economics … Others have confronted models like Prescott’s to data on prices with what I think can fairly be labeled dismal results. There is simply no evidence to support any of the price effects predicted by the model …
Improvement in the track record of macroeconomics will require the development of theories that can explain why exchange sometimes works and other times breaks down. Nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes.
Lawrence Summers: Some Skeptical Observations on Real Business Cycle Theory
A few years ago Armin Falk and James Heckman published an acclaimed article titled “Lab Experiments Are a Major Source of Knowledge in the Social Sciences” in the journal Science. The authors – both renowned economists – argued that both field experiments and laboratory experiments are basically facing the same problems in terms of generalizability and external validity – and that a fortiori it is impossible to say that one would be better than the other.
What strikes me when reading both Falk & Heckman and advocates of field experiments – such as John List and Steven Levitt – is that field studies and experiments are both very similar to theoretical models. They all share the same basic problem: they are built on rather artificial conditions and face a trade-off between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the lower the external validity. The more we rig experiments/field studies/models to avoid the “confounding factors”, the less the conditions are reminiscent of the real “target system”. To that extent, I also believe that Falk & Heckman are right in their comments on the field vs. experiments discussion in terms of realism – the nodal issue is not realism as such, but basically how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. In contrast to Falk & Heckman and advocates of field experiments, such as List and Levitt, I doubt the generalizability of both research strategies, because the probability is high that causal mechanisms differ across contexts, and lack of homogeneity/stability/invariance does not give us warranted export licenses to the “real” societies or economies.
If you mainly conceive of experiments or field studies as heuristic tools, the dividing line between, say, Falk & Heckman and List or Levitt is probably difficult to perceive.
But if we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).
Assume that you have examined how the work performance of Chinese workers (A) is affected by some treatment (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt “succeeds”? How do we know when replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) tell us nothing about the target system’s P′(A|B).
As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I examine neoclassical economists’ models/experiments/field studies.
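The extrapolation worry can be made concrete with a toy simulation. Everything below is hypothetical: two populations are given different treatment effects by construction, so an estimate of E[A|B] from the original sample licenses no inference about the target. A minimal Python sketch:

```python
import random

random.seed(1)

def sample_outcomes(effect, n=10_000):
    """Simulate outcomes A under treatment B for one population:
    baseline noise plus a population-specific treatment effect."""
    return [random.gauss(0, 1) + effect for _ in range(n)]

# Hypothetical populations: the causal mechanism differs across contexts,
# so P(A|B) in the original sample differs from P'(A|B) in the target.
original = sample_outcomes(effect=0.5)    # e.g. the Chinese sample
target = sample_outcomes(effect=-0.2)     # e.g. the US target population

est_original = sum(original) / len(original)
est_target = sum(target) / len(target)

print(f"E[A|B] in original sample: {est_original:.2f}")
print(f"E[A|B] in target system:   {est_target:.2f}")
# Without an argument that the two densities coincide, the first
# number tells us nothing about the second.
```

The point is not the numbers themselves but that nothing in the original sample reveals whether the target's density is the same; that has to be argued for separately.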
By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.
Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in China but not in the US? Or that B “works” in a backward agrarian society, but not in a post-modern service society? That B “worked” in the field study conducted in 2008 but not in 2012? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.
Everyone – both “labs” and “experimentalists” – should consider the following lines from David Salsburg’s The Lady Tasting Tea (Henry Holt 2001:146):
In Kolmogorov’s axiomatization of probability theory, we assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify Kolmogorov’s abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.
Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to best control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.
More and more economists have lately also come to advocate randomization as the principal method for ensuring valid causal inferences.
Renowned econometrician Ed Leamer has responded to these claims, maintaining that randomization is not sufficient, and that the hopes of a better empirical and quantitative macroeconomics are to a large extent illusory. Randomization – just like econometrics – promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain:
We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed. But it’s a frustrating journey, since, no matter how far we travel, Asymptopia remains infinitely far away. Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began.
The pointlessness of much of our daily activity makes us receptive when the Priests of our tribe ring the bells and announce a shortened path to Asymptopia … We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid … Small seeds of doubt in each of us inevitably turn to despair and we abandon that direction and seek another …
Ignorance is a formidable foe, and to have hope of even modest victories, we economists need to use every resource and every weapon we can muster, including thought experiments (theory), and the analysis of data from nonexperiments, accidental experiments, and designed experiments. We should be celebrating the small genuine victories of the economists who use their tools most effectively, and we should dial back our adoration of those who can carry the biggest and brightest and least-understood weapons. We would benefit from some serious humility, and from burning our “Mission Accomplished” banners. It’s never gonna happen.
Part of the problem is that we data analysts want it all automated. We want an answer at the push of a button on a keyboard … Faced with the choice between thinking long and hard versus pushing the button, the single button is winning by a very large margin.
Let’s not add a “randomization” button to our intellectual keyboards, to be pushed without hard reflection and thought.
Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.
But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity etc) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!
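The bracketed point above, that randomization can recover an average causal effect while saying nothing about individual effects unless homogeneity is assumed, can be illustrated with a toy simulation. All numbers below are invented for illustration:

```python
import random

random.seed(2)

n = 100_000
# Heterogeneous individual treatment effects: B helps some units and
# harms others (the values +2.0 and -1.0 are purely illustrative).
effects = [random.choice([2.0, -1.0]) for _ in range(n)]

treated, control = [], []
for e in effects:
    baseline = random.gauss(0, 1)
    if random.random() < 0.5:          # randomized assignment
        treated.append(baseline + e)
    else:
        control.append(baseline)

ate_estimate = sum(treated) / len(treated) - sum(control) / len(control)
true_ate = sum(effects) / n

print(f"estimated average effect: {ate_estimate:.2f}")
print(f"true average effect:      {true_ate:.2f}")
# Randomization recovers the average, but the average describes no one:
# every unit's effect is either +2.0 or -1.0, never the roughly +0.5
# that the experiment reports.
```

Only by adding homogeneity (every unit has the same effect) would the average effect also be the individual effect, which is exactly the extra assumption the text warns about.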
Philosopher of science Nancy Cartwright has succinctly summarized the limits of randomization. In The Lancet 23/4 2011 she states:
But recall the logic of randomized control trials … [T]hey are ideal for supporting ‘it-works-somewhere’ claims. But they are in no way ideal for other purposes; in particular they provide no better bases for extrapolating or generalising than knowledge that the treatment caused the outcome in any other individuals in any other circumstances … And where no capacity claims obtain, there is seldom warrant for assuming that a treatment that works somewhere will work anywhere else. (The exception is where there is warrant to believe that the study population is a representative sample of the target population – and cases like this are hard to come by.)
And in BioSocieties 2/2007:
We experiment on a population of individuals each of whom we take to be described (or ‘governed’) by the same fixed causal structure (albeit unknown) and fixed probability measure (albeit unknown). Our deductive conclusions depend on that very causal structure and probability. How do we know what individuals beyond those in our experiment this applies to? … The [randomized experiment], with its vaunted rigor, takes us only a very small part of the way we need to go for practical knowledge. This is what disposes me to warn about the vanity of rigor in [randomized experiments].
Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.
Here I think Leamer’s “button” metaphor is appropriate. Many advocates of randomization want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in (ideally controlled) experiments. Conclusions can only be as certain as their premises – and that also goes for methods based on randomized experiments.
At a conference. Blogging will resume over the weekend.
And Henry Blodget tells us why:
Now, I’m not an economist, and I’m not born of a particular economic school that I’ve bet my life’s work on, so I have observed the global economic events of the past five years with a fairly open mind.
I’ve listened to Keynesians like Paul Krugman argue that the way to fix the mess is to open the government spending spigot and invest like crazy.
And I’ve listened to Austerians like Niall Ferguson argue that the way to fix the mess is to cut spending radically, balance government budgets, and unleash the private sector.
And I’ve also looked back at history—namely, Reinhart and Rogoff’s analysis of prior financial crises, the Great Depression, Japan, Germany after Weimar, and so forth.
And more and more it appears that Keynes was right.
In the aftermath of a massive debt binge like the one we went on from 1980 to 2007, when the private sector collapses and then retreats to lick its wounds and deleverage, the best way to help the economy work its way out of its hole is for the government to spend like crazy.
Or, rather, if not the “best way,” at least the least-worst way.
Because, obviously, piling up even bigger mountains of debt is not a happy side-effect of such spending.
But let’s face it: Austerity doesn’t work.
At least, austerity doesn’t work to quickly fix the problem.
The reason austerity doesn’t work to quickly fix the problem is that, when the economy is already struggling, and you cut government spending, you also further damage the economy. And when you further damage the economy, you further reduce tax revenue, which has already been clobbered by the stumbling economy. And when you further reduce tax revenue, you increase the deficit and create the need for more austerity. And that even further clobbers the economy and tax revenue. And so on.
But getting the budget under control by radically chopping spending or increasing taxes this minute, as many Austerians want to do, won’t help. In fact, it will likely make the problems vastly worse, because it will put that many more people out of work and reduce tax revenue that much further (just take a look at Europe).

Meanwhile, given that we’ve already racked up $15 trillion of debt, I certainly wouldn’t be opposed to our spending another couple of trillion upgrading our piss-poor infrastructure. Incurring debt to build things that help all Americans, from unemployed folks to business leaders to children, is a trade-off I’m willing to make. Especially if the jobs created by this “stimulus” spending help alleviate our massive unemployment and inequality problems.
And, by the way, I don’t think this “stimulus” necessarily needs to come from just the government. Our corporations are as profitable now as they have ever been. So I’d like to see a lot of them voluntarily decide to invest more and pay their low-wage employees more and hire more employees. They can afford it, and “cash flow” isn’t the sole objective or reward of running a business.
Anyway, based on the experience of the last five years, it seems to me that Keynes was right.
I still have an open mind, though, if any Austerians out there want to have another go.
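The feedback loop Blodget describes, spending cuts shrinking GDP and hence tax revenue, is simple arithmetic. Below is a deliberately stylized sketch in Python, using a hypothetical multiplier and tax rate rather than estimates of any real economy:

```python
# Stylized illustration of the austerity feedback loop: cutting
# spending shrinks GDP (via a multiplier), which shrinks tax revenue,
# so the deficit improves by far less than the cut. All numbers are
# hypothetical illustrations, not empirical estimates.

gdp = 1000.0
tax_rate = 0.30           # revenue = tax_rate * gdp
multiplier = 1.5          # each unit of cut spending lowers GDP by 1.5

spending_cut = 50.0
gdp_after = gdp - multiplier * spending_cut
revenue_loss = tax_rate * (gdp - gdp_after)

deficit_improvement = spending_cut - revenue_loss
print(f"GDP falls by {gdp - gdp_after:.0f}")           # 75
print(f"Tax revenue falls by {revenue_loss:.1f}")      # 22.5
print(f"Deficit improves by only {deficit_improvement:.1f} "
      f"of the {spending_cut:.0f} cut")                # 27.5 of 50
```

With a multiplier above 1/tax_rate the deficit would actually worsen, which is the self-defeating spiral the argument points at.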
This spring yours truly is giving a course on Financial Crises – Causes, Trajectories and Consequences at Malmö högskola. Interest in the course has been enormous, and unfortunately only a third of those who wanted to take it could be admitted (it will be given again in the spring term of 2013, so good things come to those who wait …).

Today’s lecture is about how financial crises might possibly be avoided. One proposal that will be discussed is the introduction of a “jubilee year” of debt write-offs:
From Noahpinion we get this nice piece on Real Business Cycle models:
It has often been said of the Holy Roman Empire that it was “neither Holy, nor Roman, nor an Empire.” However, that joke has gotten a bit stale since Voltaire wrote it in the 1700s, so I think it’s time for a new one. Real Business Cycle models, it turns out, are neither Real, nor about Business, nor about Cycles.
They are, however, the macro models that annoy me far more than any other (and I’m not alone). I’ll explain the joke in increasing order of the things that annoy me.
First, “Cycles”. The “business cycles” in RBC models are not periodic, like cycles in physics. But they are also not “cycles” in the sense that a bust must follow a boom. Booms and busts are just random shocks. The “business cycle” that we think we see, according to these models, is simply a statistical illusion. (Actually, RBC shares this property with New Keynesian and Old Keynesian models alike. Very few people dare to write down a model in which knowing you’re in a boom today allows you to predict a bust tomorrow!)
Next, “Business”. Businesses are called “firms” in economic models. But if you look at the firms in an RBC model, you will see that they bear very little resemblance to real-life firms. For one thing, they make no profits; their revenues equal their costs. For another thing, they produce only one good. (Also, like firms in many economic models, they are all identical, they live forever, they make all their decisions to serve the interests of households, and they make all decisions perfectly. Etc. etc.) In other words, they display very few of the characteristics that real businesses display. This means that the “business cycle” in an RBC model is not really the result of any interesting characteristics of businesses; everything is due to the individual decisions of consumers and workers, and to the outside force of technological progress.
Finally, “Real”. This is the one that really gets me. “Real” refers to the fact that the shocks in RBC models are “real” as opposed to “nominal” shocks (I’ve actually never liked this terminology, since it seems to subtly imply that money is neutral, which it isn’t). But one would have to be a fool not to see the subtext in the use of the term – it implies that business-cycle theories based on demand shocks are not, in fact, real; that recessions and booms are obviously caused by supply shocks. If RBC is “real”, then RBC’s competitors – Keynesian models and the like – must be fantasy business cycle models.
However, it turns out that RBC and reality are not exactly drinking buddies. I hereby outsource the beatdown of the substance of RBC models to one of the greatest beatdown specialists in the history of economics: the formidable Larry Summers. In a 1986 essay … Summers identified three main reasons why RBC models are not, in fact, real:
1. RBC models use parameter values that are almost certainly wrong,
2. RBC models make predictions about prices that are completely, utterly wrong, and
3. The “technology shocks” that RBC models assume drive the business cycle have never been found.
I encourage everyone to go read the whole thing. Pure and utter pulpification! Actually, this essay was assigned to me on the first day of my intro macro course, but at the time I wasn’t able to appreciate it.
So Real Business Cycle models are neither Real, nor about Business, nor about Cycles. Are they models? Well, sadly, yes they are…of a sort. You actually can put today’s data into an RBC model and get a prediction about future data. But see, here’s the thing: that prediction will be entirely driven by the most ad-hoc, hard-to-swallow part of the model!
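Noah’s point about “Cycles”, that the apparent cycle in such models is just persistence in random shocks, can be illustrated with a one-line stochastic process. In the sketch below (parameters purely illustrative), output follows an AR(1); conditional on a boom today, tomorrow is on average still a boom, so nothing in the process forces a bust to follow:

```python
import random

random.seed(3)

# A minimal "business cycle" of the RBC kind: output is nothing but a
# persistent series of random shocks (an AR(1) process), with no
# mechanism making a bust follow a boom. rho and T are illustrative.
rho, T = 0.9, 200_000
y, series = 0.0, []
for _ in range(T):
    y = rho * y + random.gauss(0, 1)
    series.append(y)

# Conditional on being in a "boom" (output above 1.0), what happens next?
booms = [series[t + 1] for t in range(T - 1) if series[t] > 1.0]
mean_after_boom = sum(booms) / len(booms)
print(f"average output the period after a boom: {mean_after_boom:.2f}")
# Positive: a boom predicts more boom, not a bust. The apparent
# "cycle" is only persistence in random shocks.
```

This is also why, as noted above, knowing you are in a boom today gives no warrant for predicting a bust tomorrow in these models.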
“New Keynesian” macroeconomic models are at heart based on the modeling strategy of DSGE – representative agents, rational expectations, equilibrium and all that. They do have some minor idiosyncrasies (like “menu costs” and “price rigidities”, preferably in a monopolistic competition setting), but the differences are not really that fundamental. The basic model assumptions are the same.
If macroeconomic models – of whatever ilk – assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, then the warrant for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged over to the real world is obviously lacking. Macroeconomic theorists – whether “New Monetarist”, “New Classical” or “New Keynesian” – ought to do some ontological reflection and heed Keynes’ warnings on using thought-models in economics:
The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error.
People calling themselves “New Keynesians” ought to be rather embarrassed by the fact that the kind of microfounded dynamic stochastic general equilibrium models they use, cannot incorporate such a basic fact of reality as involuntary unemployment!
Of course, working with representative agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility.
From a methodological and theoretical point of view, Paul Krugman’s comments in the debate on microfounded macro models are really interesting, because they shed light on a kind of inconsistency in his own way of arguing.
Over the past couple of years Krugman has in more than one article criticized mainstream economics for using too much (bad) mathematics and axiomatics in its model-building endeavours. But when it comes to defending his own position on various issues, he usually falls back on the same kind of models himself. This shows up in the microfoundations debate as well, where he refers to the work he has done with Gauti Eggertsson – work that, when it comes to methodology and assumptions, actually has a lot in common with the kind of model-building he otherwise criticizes.
In 1996 Krugman was invited to speak to the European Association for Evolutionary Political Economy. I think reading the speech gives more than one clue on the limits of Krugman’s critique of modern mainstream economics (italics added):
I like to think that I am more open-minded about alternative approaches to economics than most, but I am basically a maximization-and-equilibrium kind of guy. Indeed, I am quite fanatical about defending the relevance of standard economic models in many situations.
I won’t say that I am entirely happy with the state of economics. But let us be honest: I have done very well within the world of conventional economics. I have pushed the envelope, but not broken it, and have received very widespread acceptance for my ideas. What this means is that I may have more sympathy for standard economics than most of you. My criticisms are those of someone who loves the field and has seen that affection repaid. I don’t know if that makes me morally better or worse than someone who criticizes from outside, but anyway it makes me different.
To me, it seems that what we know as economics is the study of those phenomena that can be understood as emerging from the interactions among intelligent, self-interested individuals. Notice that there are really four parts to this definition. Let’s read from right to left.
1. Economics is about what individuals do: not classes, not “correlations of forces”, but individual actors. This is not to deny the relevance of higher levels of analysis, but they must be grounded in individual behavior. Methodological individualism is of the essence.
2. The individuals are self-interested. There is nothing in economics that inherently prevents us from allowing people to derive satisfaction from others’ consumption, but the predictive power of economic theory comes from the presumption that normally people care about themselves.
3. The individuals are intelligent: obvious opportunities for gain are not neglected. Hundred-dollar bills do not lie unattended in the street for very long.
4. We are concerned with the interaction of such individuals: Most interesting economic theory, from supply and demand on, is about “invisible hand” processes in which the collective outcome is not what individuals intended.
Gould is the John Kenneth Galbraith of his subject. That is, he is a wonderful writer who is beloved by literary intellectuals and lionized by the media because he does not use algebra or difficult jargon. Unfortunately, it appears that he avoids these sins not because he has transcended his colleagues but because he does not seem to understand what they have to say; and his own descriptions of what the field is about – not just the answers, but even the questions – are consistently misleading. His impressive literary and historical erudition makes his work seem profound to most readers, but informed readers eventually conclude that there’s no there there.
Personally, I consider myself a proud neoclassicist. By this I clearly don’t mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium. The reason I like that kind of model is not that I believe it to be literally true, but that I am intensely aware of the power of maximization-and-equilibrium to organize one’s thinking – and I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy.
In many macroeconomic policy discussions I find myself in agreement with Krugman. To me that just shows that Krugman is right in spite of, and not thanks to, those neoclassical models/methodology/theories he ultimately refers to. When he discusses austerity measures, Ricardian equivalence or problems with the euro, he is actually not using those neoclassical models/methodology/theories, but rather simpler, more adequate and more relevant thought-constructions in the vein of Keynes.
As all students of economics know, time is limited. Given that, there have to be better ways to use it than spending hours and hours working through, constructing or adding tweaks to irrelevant “New Keynesian” DSGE macroeconomic models. I would rather recommend that my students allocate their time to constructing better, real and relevant macroeconomic models – models that really help us explain and understand reality.
I’ve touched briefly before on how behavioural economics makes the central libertarian mantra of being ‘free to choose’ completely incoherent. Libertarians tend to have a difficult time grasping this, responding with things like ‘so people aren’t rational; they’re still the best judges of their own decisions’. My point here is not necessarily that people are not the best judges of their own decisions, but that the idea of freedom of choice – as interpreted by libertarians – is nonsensical once you start from a behavioural standpoint.
The problem is that neoclassical economics, by modelling people as rational utility maximisers, lends itself to a certain way of thinking about government intervention. For if you propose intervention on the grounds that people are not rational utility maximisers, you are told that you are treating people as if they are stupid. Of course, this isn’t the case – designing policy as if people are rational utility maximisers is no different ethically from designing it as if they rely on various heuristics and suffer cognitive biases.
This ‘treating people as if they are stupid’ mentality highlights a problem with neoclassical choice modelling: behaviour is generally considered either ‘rational’ or ‘irrational’. But this isn’t a particularly helpful way to think about human action – as Daniel Kuehn says, heuristics are not really ‘irrational’; they simply save time, and as this video emphasises, they often produce better results than homo economicus-esque calculation. So the line between rationality and irrationality becomes blurred.
For an example of how this flawed thinking pervades libertarian arguments, consider the case of excessive choice. It is well documented that people can be overwhelmed by too much choice, and will choose to put off the decision or just abandon trying altogether. So is somebody who is so inundated with choice that they don’t know what to do ‘free to choose’? Well, not really – their liberty to make their own decisions is hamstrung.
Another example is the case of Nudge. The central point of this book is that people’s decisions are always pushed in a certain direction – by advertising and packaging, by what the easiest or default choice is, by the way the choice is framed, or by any number of other things. This completely destroys the idea of ‘free to choose’: if people’s choices are rarely or never made neutrally, then pushing them in a ‘good’ direction (e.g. towards healthy food rather than junk) cannot be said to be ‘deciding for them’ any more than the choice was already ‘decided’ for them. Whether to nudge people is not really the decision – they are almost always nudged anyway. The question is which direction they are nudged in.
It must also be emphasised that choices do not come out of nowhere – they are generally presented with a flurry of bright colours and offers from profit-seeking companies. These things do influence us, as much as we hate to admit it, so to work from the premise that the state is the only actor that can exercise power and influence in this area is to miss the point.
The fact is that the way both neoclassical economists and libertarians think about choice is fundamentally flawed – in the case of neoclassicism, it cannot be remedied with ‘utility maximisation plus a couple of constraints’; in the case of libertarianism it cannot be remedied by saying ‘so what if people are irrational? They should be allowed to be irrational.’ Both are superficial remedies for a fundamentally flawed epistemological starting point for human action.
Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if there are no parts or aspects of the model that have relevant and important counterparts in the real-world target system? The burden of proof lies with the theoretical economists who think they have contributed something of scientific relevance without even hinting at any bridge enabling us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something about the target system. But purpose-built assumptions, like invariance, made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.
All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.
Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.
There are economic methodologists and philosophers who argue for a less demanding view of modeling and theorizing in economics. To some theoretical economists it is deemed quite enough to consider economics a mere “conceptual activity” in which the model is not so much an abstraction from reality as a kind of “parallel reality”. By considering models as such constructions, the economist distances the model from the intended target, demanding only that the models be credible, thereby enabling him to make inductive inferences to the target systems.
But what gives license to this leap of faith, this “inductive inference”? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models in a strict sense are false (necessarily building in part on false assumptions) deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of “surrogate models”.
Models do not only face theory. They also have to look to the world. But being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of “appropriate similarity and plausibility” (Pålsson Syll 2001:60). One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be far from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.
Questions of external validity are especially important when it comes to microfounded macromodels. It can never be enough that these models are somehow regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.
“New Keynesian” macroeconomist Simon Wren-Lewis has an interesting post on his blog on these topics and how they may be related to ideology:
I want to raise [the] problem that some researchers might select facts on the basis of ideology. The example that I find most telling here is unemployment and Real Business Cycle models.
Why is a large part of macroeconomics all about understanding the booms and busts of the business cycle? The answer is obvious: the consequences of booms – rising inflation – and busts – rising unemployment – are large macroeconomic ‘bads’. No one disagrees about rising inflation being a serious problem. Almost no one disagrees about rising unemployment. Except, it would appear, the large number of macroeconomists who use Real Business Cycle (RBC) models to study the business cycle.
In RBC models, all changes in unemployment are voluntary. If unemployment is rising, it is because more workers are choosing leisure rather than work. As a result, high unemployment in a recession is not a problem at all. It just so happens that (because of a temporary absence of new discoveries) real wages are relatively low, so workers choose to work less and enjoy more free time. As RBC models do not say much about inflation, then according to this theory the business cycle is not a problem at all …
Now the RBC literature is very empirically orientated. It is all about trying to get closer to the observed patterns of cyclical variation in key macro variables. Yet what seems like a rather important fact about business cycles, which is that changes in unemployment are involuntary, is largely ignored. (By involuntary I mean the unemployed are looking for work at the current real wage, which they would not be under RBC theory.) There would seem to be only one defence of this approach (apart from denying the fact), and that is that these models could be easily adapted to explain involuntary unemployment, without the rest of the model changing in any important way. If this was the case, you might expect papers that present RBC theory to say so, but they generally do not …
What could account for this particular selective use of evidence? One explanation is ideological. The commonsense view of the business cycle, and the need to in some sense smooth this cycle, is that it involves a market failure that requires the intervention of a state institution in some form. If your ideological view is to deny market failure where possible, and therefore minimise a role for the state, then it is natural enough (although hardly scientific) to ignore inconvenient facts …
Do these biases matter? I think they do for two reasons. First, from a purely academic point of view they distort the development of the discipline. As I keep stressing, I do think the microfoundations project is important and useful, but that means anything that distorts its energies is a problem. Second, policy does rely on academic macroeconomics, and both the examples of bias that I use in this post … could have been the source of important policy errors.