## Real Business Cycle models – scientific joke of the century (II)

30 April, 2012 at 15:12 | Posted in Economics | Leave a comment

The increasing ascendancy of real business cycle theories of various stripes, with their common view that the economy is best modeled as a floating Walrasian equilibrium, buffeted by productivity shocks, is indicative of the depths of the divisions separating academic macroeconomists …

If these theories are correct, they imply that the macroeconomics developed in the wake of the Keynesian Revolution is well confined to the ashbin of history. And they suggest that most of the work of contemporary macroeconomists is worth little more than that of those pursuing astrological science …

The appearance of Ed Prescott’s stimulating paper, “Theory Ahead of Business Cycle Measurement,” affords an opportunity to assess the current state of real business cycle theory and to consider its prospects as a foundation for macroeconomic analysis …

My view is that business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies …

Prescott’s growth model is not an inconceivable representation of reality. But to claim that its parameters are securely tied down by growth and micro observations seems to me a gross overstatement. The image of a big loose tent flapping in the wind comes to mind …

In Prescott’s model, the central driving force behind cyclical fluctuations is technological shocks. The propagation mechanism is intertemporal substitution in employment. As I have argued so far, there is no independent evidence from any source for either of these phenomena …

Imagine an analyst confronting the market for ketchup. Suppose she or he decided to ignore data on the price of ketchup. This would considerably increase the analyst’s freedom in accounting for fluctuations in the quantity of ketchup purchased … It is difficult to believe that any explanation of fluctuations in ketchup sales that did not confront price data would be taken seriously, at least by hard-headed economists.

Yet Prescott offers an exercise in price-free economics … Others have confronted models like Prescott’s to data on prices with what I think can fairly be labeled dismal results. There is simply no evidence to support any of the price effects predicted by the model …

Improvement in the track record of macroeconomics will require the development of theories that can explain why exchange sometimes works and other times breaks down. Nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes.

Lawrence Summers: Some Skeptical Observations on Real Business Cycle Theory

## Dumb and Dumber – the “New Keynesian” version

29 April, 2012 at 16:52 | Posted in Economics | Leave a comment

So how did macroeconomics arrive at its current state?

The original impulse to look for better or more explicit micro foundations was probably reasonable. What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.

How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up? My impression is that this approach (which seems now to be the mainstream, and certainly dominates the journals, if not the workaday world of macroeconomics) has had no empirical success; but that is not the point here. I start from the presumption that we want macroeconomics to account for the occasional aggregative pathologies that beset modern capitalist economies, like recessions, intervals of stagnation, inflation, “stagflation,” not to mention negative pathologies like unusually good times. A model that rules out pathologies by definition is unlikely to help. It is always possible to claim that those “pathologies” are delusions, and the economy is merely adjusting optimally to some exogenous shock. But why should reasonable people accept this? …

What is needed for a better macroeconomics? [S]ome of the gross implausibilities … need to be eliminated. The clearest candidate is the representative agent. Heterogeneity is the essence of a modern economy. In real life we worry about the relations between managers and shareowners, between banks and their borrowers, between workers and employers, between venture capitalists and entrepreneurs, you name it. We worry about those interfaces because they can and do go wrong, with likely macroeconomic consequences. We know for a fact that heterogeneous agents have different and sometimes conflicting goals, different information, different capacities to process it, different expectations, different beliefs about how the economy works. Representative-agent models exclude all this landscape, though it needs to be abstracted and included in macro-models.

I also doubt that universal rational expectations provide a useful framework for macroeconomics …

Now here is a peculiar thing. When I was in advanced middle age, I suddenly woke up to the fact that my colleagues in macroeconomics, the ones I most admired, thought that the fundamental problem of macro theory was to understand how nominal events could have real consequences. This is just a way of stating some puzzle or puzzles about the sources for sticky wages and prices. This struck me as peculiar in two ways.

First of all, when I was even younger, nobody thought this was a puzzle. You only had to look around you to stumble on a hundred different reasons why various prices and factor prices should be much less than perfectly flexible. I once wrote, archly I admit, that the world has its reasons for not being Walrasian. Of course I soon realized that what macroeconomists wanted was a formal account of price stickiness that would fit comfortably into rational, optimizing models. OK, that is a harmless enough activity, especially if it is not taken too seriously. But price and wage stickiness themselves are not a major intellectual puzzle unless you insist on making them one.

Robert Solow: Dumb and dumber in macroeconomics

## Endogeneity and causal claims

29 April, 2012 at 11:01 | Posted in Statistics & Econometrics, Theory of Science & Methodology | Leave a comment

## Randomization, experiments and claims of causality in economics

28 April, 2012 at 13:37 | Posted in Theory of Science & Methodology | Leave a comment

A few years ago Armin Falk and James Heckman published an acclaimed article titled “Lab Experiments Are a Major Source of Knowledge in the Social Sciences” in the journal Science. The authors – both renowned economists – argued that both field experiments and laboratory experiments are basically facing the same problems in terms of generalizability and external validity – and that a fortiori it is impossible to say that one would be better than the other.

What strikes me when reading both Falk & Heckman and advocates of field experiments – such as John List and Steven Levitt – is that field studies and experiments are both very similar to theoretical models. They all have the same basic problem – they are built on rather artificial conditions and have difficulties with the “trade-off” between internal and external validity. The more artificial the conditions, the more internal validity, but also the less external validity. The more we rig experiments/field studies/models to avoid “confounding factors”, the less the conditions are reminiscent of the real “target system”. To that extent, I also believe that Falk & Heckman are right in their comments on the discussion of field vs. lab experiments in terms of realism – the nodal issue is not about that, but basically about how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. In contrast to Falk & Heckman and advocates of field experiments, such as List and Levitt, I doubt the generalizability of both research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the “real” societies or economies.

If you mainly conceive of experiments or field studies as heuristic tools, the dividing line between, say, Falk & Heckman and List or Levitt is probably difficult to perceive.

But if we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (“treatment”). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P’(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P’(A|B) applies. Sure, if one can convincingly show that P and P’ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see in neoclassical economists’ models/experiments/field studies.
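To make the worry about P(A|B) versus P’(A|B) concrete, here is a minimal, purely hypothetical Python simulation (every number is invented for illustration, not taken from any study): the treatment effect depends on a background characteristic z, and two populations with different distributions of z yield very different “average effects” from identically designed randomized experiments.

```python
import random

random.seed(1)

def outcome(treated, z):
    # Hypothetical data-generating process: the causal effect of the
    # treatment depends on a background characteristic z, i.e. the
    # causal mechanism is context-dependent.
    effect = 2.0 if z else -0.5
    return effect * treated + random.gauss(0.0, 1.0)

def estimated_effect(p_z, n=100_000):
    # A randomized experiment inside a population where P(z = 1) = p_z:
    # the difference in mean outcomes between treated and controls.
    treated = [outcome(1, random.random() < p_z) for _ in range(n)]
    control = [outcome(0, random.random() < p_z) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

# Same experimental design, two different populations:
effect_original = estimated_effect(p_z=0.8)  # original study population
effect_target = estimated_effect(p_z=0.2)    # hypothetical target population

print(effect_original)  # close to 1.5
print(effect_target)    # close to 0.0
```

The experiment is internally valid in both populations, yet exporting the estimate from one to the other would be off by roughly the whole effect: nothing in the experimental design itself warrants the extrapolation.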

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore, a fortiori, easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in China but not in the US? Or that B “works” in a backward agrarian society, but not in a post-modern service society? That B “worked” in the field study conducted in 2008 but not in 2012? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Everyone – both lab and field experimentalists – should consider the following lines from David Salsburg’s The Lady Tasting Tea (Henry Holt 2001:146):

In Kolmogorov’s axiomatization of probability theory, we assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify Kolmogorov’s abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control best for bias from unknown confounders. The received opinion is that evidence based on randomized experiments is therefore the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

Renowned econometrician Ed Leamer has responded to these claims, maintaining that randomization is not sufficient, and that the hopes of a better empirical and quantitative macroeconomics are to a large extent illusory. Randomization – just like econometrics – promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain:

We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed. But it’s a frustrating journey, since, no matter how far we travel, Asymptopia remains infinitely far away. Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began.

The pointlessness of much of our daily activity makes us receptive when the Priests of our tribe ring the bells and announce a shortened path to Asymptopia … We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid … Small seeds of doubt in each of us inevitably turn to despair and we abandon that direction and seek another …

Ignorance is a formidable foe, and to have hope of even modest victories, we economists need to use every resource and every weapon we can muster, including thought experiments (theory), and the analysis of data from nonexperiments, accidental experiments, and designed experiments. We should be celebrating the small genuine victories of the economists who use their tools most effectively, and we should dial back our adoration of those who can carry the biggest and brightest and least-understood weapons. We would benefit from some serious humility, and from burning our “Mission Accomplished” banners. It’s never gonna happen.

Part of the problem is that we data analysts want it all automated. We want an answer at the push of a button on a keyboard … Faced with the choice between thinking long and hard versus pushing the button, the single button is winning by a very large margin.

Let’s not add a “randomization” button to our intellectual keyboards, to be pushed without hard reflection and thought.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
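The bracketed point about average versus individual effects can be put into a small, entirely hypothetical Python sketch: a randomized experiment recovers an average treatment effect near zero even though every single unit has a large (positive or negative) individual effect.

```python
import random

random.seed(7)

# Hypothetical setup: half the units gain 2.0 from treatment, half
# lose 2.0, so the true average effect is exactly zero.
n = 100_000
individual_effects = [2.0 if i % 2 == 0 else -2.0 for i in range(n)]

# Outcomes in a randomized experiment: treated units realize their
# individual effect plus noise; controls get noise only.
treated_mean = sum(e + random.gauss(0.0, 1.0) for e in individual_effects) / n
control_mean = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n

average_effect = treated_mean - control_mean
print(average_effect)  # close to 0.0, although no individual effect is zero
```

Without a homogeneity assumption, the average is all randomization delivers; it is silent about the large effects experienced by every individual in this example.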

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Science philosopher Nancy Cartwright has succinctly summarized the value of randomization. In The Lancet 23/4 2011 she states:

But recall the logic of randomized control trials … [T]hey are ideal for supporting ‘it-works-somewhere’ claims. But they are in no way ideal for other purposes; in particular they provide no better bases for extrapolating or generalising than knowledge that the treatment caused the outcome in any other individuals in any other circumstances … And where no capacity claims obtain, there is seldom warrant for assuming that a treatment that works somewhere will work anywhere else. (The exception is where there is warrant to believe that the study population is a representative sample of the target population – and cases like this are hard to come by.)

And in BioSocieties 2/2007:

We experiment on a population of individuals each of whom we take to be described (or ‘governed’) by the same fixed causal structure (albeit unknown) and fixed probability measure (albeit unknown). Our deductive conclusions depend on that very causal structure and probability. How do we know what individuals beyond those in our experiment this applies to? … The [randomized experiment], with its vaunted rigor, takes us only a very small part of the way we need to go for practical knowledge. This is what disposes me to warn about the vanity of rigor in [randomized experiments].

Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Here I think Leamer’s “button” metaphor is appropriate. Many advocates of randomization want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in (ideally controlled) experiments. Conclusions can only be as certain as their premises – and that also goes for methods based on randomized experiments.

26 April, 2012 at 14:13 | Posted in Varia | 2 Comments

Away at a conference. Blogging will resume at the weekend.

## Keynes was right!

25 April, 2012 at 15:20 | Posted in Economics | Leave a comment

And Henry Blodget tells us why:

Now, I’m not an economist, and I’m not born of a particular economic school that I’ve bet my life’s work on, so I have observed the global economic events of the past five years with a fairly open mind.

I’ve listened to Keynesians like Paul Krugman argue that the way to fix the mess is to open the government spending spigot and invest like crazy.

And I’ve listened to Austerians like Niall Ferguson argue that the way to fix the mess is to cut spending radically, balance government budgets, and unleash the private sector.

And I’ve also looked back at history—namely, Reinhart and Rogoff’s analysis of prior financial crises, the Great Depression, Japan, Germany after Weimar, and so forth.

And more and more it appears that Keynes was right.

In the aftermath of a massive debt binge like the one we went on from 1980-2007, when the private sector collapses and then retreats to lick its wounds and deleverage, the best way to help the economy work its way out of its hole is for the government to spend like crazy.

Or, rather, if not the “best way,” at least the least-worst way.

Because, obviously, piling up even bigger mountains of debt is not a happy side-effect of such spending.

But let’s face it: Austerity doesn’t work.

At least, austerity doesn’t work to quickly fix the problem.

The reason austerity doesn’t work to quickly fix the problem is that, when the economy is already struggling, and you cut government spending, you also further damage the economy. And when you further damage the economy, you further reduce tax revenue, which has already been clobbered by the stumbling economy. And when you further reduce tax revenue, you increase the deficit and create the need for more austerity. And that even further clobbers the economy and tax revenue. And so on.

But getting the budget under control by radically chopping spending or increasing taxes this minute, as many Austerians want to do, won’t help. In fact, it will likely make the problems vastly worse, because it will put that many more people out of work and reduce tax revenue that much further (just take a look at Europe).

Meanwhile, given that we’ve already racked up $15 trillion of debt, I certainly wouldn’t be opposed to our spending another couple of trillion upgrading our piss-poor infrastructure. Incurring debt to build things that help all Americans, from unemployed folks to business leaders to children, is a trade-off I’m willing to make. Especially if the jobs created by this “stimulus” spending help alleviate our massive unemployment and inequality problems.

And, by the way, I don’t think this “stimulus” necessarily needs to come from just the government. Our corporations are as profitable now as they have ever been. So I’d like to see a lot of them voluntarily decide to invest more and pay their low-wage employees more and hire more employees. They can afford it, and “cash flow” isn’t the sole objective or reward of running a business.

Anyway, based on the experience of the last five years, it seems to me that Keynes was right.

I still have an open mind, though, if any Austerians out there want to have another go.
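The feedback loop Blodget describes (cut spending, weaken the economy, lose tax revenue, widen the deficit, cut again) lends itself to toy arithmetic. The sketch below is purely illustrative: the multiplier and tax share are invented round numbers, not estimates from any source. It shows why a spending cut can deliver far less deficit reduction than planned once the induced fall in GDP and tax revenue is counted.

```python
# Toy austerity arithmetic (all parameter values are hypothetical):
multiplier = 1.5  # assumed fiscal multiplier in a slump
tax_share = 0.25  # assumed average tax take as a share of GDP

spending_cut = 100.0                  # planned austerity package
gdp_loss = multiplier * spending_cut  # economy shrinks by 150.0
revenue_loss = tax_share * gdp_loss   # tax revenue falls by 37.5

deficit_reduction = spending_cut - revenue_loss
print(deficit_reduction)  # 62.5, well short of the planned 100.0
```

And if the revenue shortfall triggers a further round of cuts, the loop repeats, each round buying less deficit reduction per unit of pain.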

## Financial crises – causes, course and consequences

25 April, 2012 at 14:27 | Posted in Economics | Leave a comment

This spring yours truly is giving a course on financial crises – their causes, course and consequences – at Malmö högskola. Interest in the course has been enormous, and unfortunately only a third of all those who wanted to take it could be admitted (it will be given again in the spring term of 2013, so good things come to those who wait …).
Today’s lecture is about how financial crises might possibly be avoided. One proposal that will be discussed is the introduction of a “jubilee year” with debt write-offs:

## Real Business Cycle models – scientific joke of the century?

25 April, 2012 at 08:54 | Posted in Economics | 1 Comment

From Noahpinion we get this nice piece on Real Business Cycle models:

It has often been said of the Holy Roman Empire that it was “neither Holy, nor Roman, nor an Empire.” However, that joke has gotten a bit stale since Voltaire wrote it in the 1700s, so I think it’s time for a new one. Real Business Cycle models, it turns out, are neither Real, nor about Business, nor about Cycles.

They are, however, the macro models that annoy me far more than any other (and I’m not alone). I’ll explain the joke in increasing order of the things that annoy me.

First, “Cycles”. The “business cycles” in RBC models are not periodic, like cycles in physics. But they are also not “cycles” in the sense that a bust must follow a boom. Booms and busts are just random shocks. The “business cycle” that we think we see, according to these models, is simply a statistical illusion. (Actually, RBC shares this property with New Keynesian and Old Keynesian models alike. Very few people dare to write down a model in which knowing you’re in a boom today allows you to predict a bust tomorrow!)

Next, “Business”. Businesses are called “firms” in economic models. But if you look at the firms in an RBC model, you will see that they bear very little resemblance to real-life firms. For one thing, they make no profits; their revenues equal their costs. For another thing, they produce only one good. (Also, like firms in many economic models, they are all identical, they live forever, they make all their decisions to serve the interests of households, and they make all decisions perfectly. Etc. etc.) In other words, they display very few of the characteristics that real businesses display. This means that the “business cycle” in an RBC model is not really the result of any interesting characteristics of businesses; everything is due to the individual decisions of consumers and workers, and to the outside force of technological progress.

Finally, “Real”. This is the one that really gets me. “Real” refers to the fact that the shocks in RBC models are “real” as opposed to “nominal” shocks (I’ve actually never liked this terminology, since it seems to subtly imply that money is neutral, which it isn’t). But one would have to be a fool not to see the subtext in the use of the term – it implies that business-cycle theories based on demand shocks are not, in fact, real; that recessions and booms are obviously caused by supply shocks. If RBC is “real”, then RBC’s competitors – Keynesian models and the like – must be fantasy business cycle models.

However, it turns out that RBC and reality are not exactly drinking buddies. I hereby outsource the beatdown of the substance of RBC models to one of the greatest beatdown specialists in the history of economics: the formidable Larry Summers. In a 1986 essay … Summers identified three main reasons why RBC models are not, in fact, real:

1. RBC models use parameter values that are almost certainly wrong,

2. RBC models make predictions about prices that are completely, utterly wrong, and

3. The “technology shocks” that RBC models assume drive the business cycle have never been found.

I encourage everyone to go read the whole thing. Pure and utter pulpification! Actually, this essay was assigned to me on the first day of my intro macro course, but at the time I wasn’t able to appreciate it.

So Real Business Cycle models are neither Real, nor about Business, nor about Cycles. Are they models? Well, sadly, yes they are…of a sort. You actually can put today’s data into an RBC model and get a prediction about future data. But see, here’s the thing: that prediction will be entirely driven by the most ad-hoc, hard-to-swallow part of the model!

## Krugman on methodology

24 April, 2012 at 20:05 | Posted in Economics, Theory of Science & Methodology | 1 Comment

“New Keynesian” macroeconomic models are at heart based on the modeling strategy of DSGE – representative agents, rational expectations, equilibrium and all that. They do have some minor idiosyncrasies (like “menu costs” and “price rigidities”, preferably in a monopolistic competition setting), but the differences are not really that fundamental. The basic model assumptions are the same.

If macroeconomic models – of whatever ilk – assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged over to the real world is obviously lacking. Macroeconomic theorists – whether “New Monetarist”, “New Classical” or ”New Keynesian” – ought to do some ontological reflection and heed Keynes’ warnings on using thought-models in economics:

The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error.

People calling themselves “New Keynesians” ought to be rather embarrassed by the fact that the kind of microfounded dynamic stochastic general equilibrium models they use, cannot incorporate such a basic fact of reality as involuntary unemployment!

Of course, working with representative agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility.

From a methodological and theoretical point of view, Paul Krugman’s comments in the debate on microfounded macromodels are really interesting, because they shed light on a kind of inconsistency in his own line of argument.

For a couple of years Krugman has in more than one article criticized mainstream economics for using too much (bad) mathematics and axiomatics in its model-building endeavours. But when it comes to defending his own position on various issues, he usually ends up falling back on the same kind of models. This shows up also in the microfoundations debate, where he refers to the work he has done with Gauti Eggertsson – work that, when it comes to methodology and assumptions, actually has a lot in common with the kind of model-building he otherwise criticizes.

In 1996 Krugman was invited to speak to the European Association for Evolutionary Political Economy. I think reading the speech gives more than one clue on the limits of Krugman’s critique of modern mainstream economics (italics added):

I like to think that I am more open-minded about alternative approaches to economics than most, but I am basically a maximization-and-equilibrium kind of guy. Indeed, I am quite fanatical about defending the relevance of standard economic models in many situations.

I won’t say that I am entirely happy with the state of economics. But let us be honest: I have done very well within the world of conventional economics. I have pushed the envelope, but not broken it, and have received very widespread acceptance for my ideas. What this means is that I may have more sympathy for standard economics than most of you. My criticisms are those of someone who loves the field and has seen that affection repaid. I don’t know if that makes me morally better or worse than someone who criticizes from outside, but anyway it makes me different.

To me, it seems that what we know as economics is the study of those phenomena that can be understood as emerging from the interactions among intelligent, self-interested individuals. Notice that there are really four parts to this definition. Let’s read from right to left.
1. Economics is about what individuals do: not classes, not “correlations of forces”, but individual actors. This is not to deny the relevance of higher levels of analysis, but they must be grounded in individual behavior. Methodological individualism is of the essence.
2. The individuals are self-interested. There is nothing in economics that inherently prevents us from allowing people to derive satisfaction from others’ consumption, but the predictive power of economic theory comes from the presumption that normally people care about themselves.
3. The individuals are intelligent: obvious opportunities for gain are not neglected. Hundred-dollar bills do not lie unattended in the street for very long.
4. We are concerned with the interaction of such individuals: Most interesting economic theory, from supply and demand on, is about “invisible hand” processes in which the collective outcome is not what individuals intended.

Gould is the John Kenneth Galbraith of his subject. That is, he is a wonderful writer who is beloved by literary intellectuals and lionized by the media because he does not use algebra or difficult jargon. Unfortunately, it appears that he avoids these sins not because he has transcended his colleagues but because he does not seem to understand what they have to say; and his own descriptions of what the field is about – not just the answers, but even the questions – are consistently misleading. His impressive literary and historical erudition makes his work seem profound to most readers, but informed readers eventually conclude that there’s no there there.

Personally, I consider myself a proud neoclassicist. By this I clearly don’t mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium. The reason I like that kind of model is not that I believe it to be literally true, but that I am intensely aware of the power of maximization-and-equilibrium to organize one’s thinking – and I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy.

In many macroeconomic policy discussions I find myself in agreement with Krugman. To me that just shows that Krugman is right in spite of, and not thanks to, those neoclassical models/methodologies/theories he ultimately refers to. When he discusses austerity measures, Ricardian equivalence or problems with the euro, he is actually not using those neoclassical models/methodologies/theories, but rather simpler, more adequate and relevant thought-constructions in the vein of Keynes.

As all students of economics know, time is limited. Given that, there have to be better ways to use it than spending hours and hours working through, constructing, or adding tweaks to irrelevant “New Keynesian” DSGE macroeconomic models. I would rather recommend that my students allocate their time to constructing better, real and relevant macroeconomic models – models that really help us explain and understand reality.

## Mainstream economics and neoliberalism

24 April, 2012 at 13:11 | Posted in Economics, Theory of Science & Methodology | Leave a comment

Unlearning Economics has an interesting post on some important shortcomings of neoclassical economics and libertarianism (on which I have written e.g. here, here, and here):

I’ve touched briefly before on how behavioural economics makes the central libertarian mantra of being ‘free to choose’ completely incoherent. Libertarians tend to have a difficult time grasping this, responding with things like ‘so people aren’t rational; they’re still the best judges of their own decisions’. My point here is not necessarily that people are not the best judges of their own decisions, but that the idea of freedom of choice – as interpreted by libertarians – is nonsensical once you start from a behavioural standpoint.

The problem is that neoclassical economics, by modelling people as rational utility maximisers, lends itself to a certain way of thinking about government intervention. For if you propose intervention on the grounds that people are not rational utility maximisers, you are told that you are treating people as if they are stupid. Of course, this isn’t the case – designing policy as if people are rational utility maximisers is no different ethically to designing it as if they rely on various heuristics and suffer cognitive biases.

This ‘treating people as if they are stupid’ mentality highlights a problem with neoclassical choice modelling: behaviour is generally considered either ‘rational’ or ‘irrational’. But this isn’t a particularly helpful way to think about human action – as Daniel Kuehn says, heuristics are not really ‘irrational’; they simply save time, and as this video emphasises, they often produce better results than homo economicus-esque calculation. So the line between rationality and irrationality becomes blurred.

For an example of how this flawed thinking pervades libertarian arguments, consider the case of excessive choice. It is well documented that people can be overwhelmed by too much choice, and will choose to put off the decision or just abandon trying altogether. So is somebody who is so inundated with choice that they don’t know what to do ‘free to choose’? Well, not really – their liberty to make their own decisions is hamstrung.

Another example is the case of Nudge. The central point of this book is that people’s decisions are always pushed in a certain direction, either by advertising and packaging, by what the easiest or default choice is, by the way the choice is framed, or any number of other things. This completely destroys the idea of ‘free to choose’ – if people’s choices are rarely or never made neutrally, then one cannot be said to be ‘deciding for them’ any more than the choice was already ‘decided’ for them. The best conclusion is to push their choices in a ‘good’ direction (e.g. towards healthy food rather than junk). Nudging people isn’t a decision – they are almost always nudged. The question is the direction they are nudged in.

It must also be emphasised that choices do not come out of nowhere – they are generally presented with a flurry of bright colours and offers from profit seeking companies. These things do influence us, as much as we hate to admit it, so to work from the premise that the state is the only one that can exercise power and influence in this area is to miss the point.

The fact is that the way both neoclassical economists and libertarians think about choice is fundamentally flawed – in the case of neoclassicism, it cannot be remedied with ‘utility maximisation plus a couple of constraints’; in the case of libertarianism it cannot be remedied by saying ‘so what if people are irrational? They should be allowed to be irrational.’ Both are superficial remedies for a fundamentally flawed epistemological starting point for human action.

## Mistaking beauty for truth – Real Business Cycle models and the quest for external validity

23 April, 2012 at 15:30 | Posted in Economics, Theory of Science & Methodology | 2 Comments

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if no parts or aspects of the model have relevant and important counterparts in the real-world target system? The burden of proof lies with theoretical economists who think they have contributed something of scientific relevance without even hinting at a bridge enabling us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something about the target system. But purpose-built assumptions, like invariance, made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

There are economic methodologists and philosophers that argue for a less demanding view on modeling and theorizing in economics. And to some theoretical economists it is deemed quite enough to consider economics as a mere “conceptual activity” where the model is not so much seen as an abstraction from reality, but rather a kind of “parallel reality”. By considering models as such constructions, the economist distances the model from the intended target, only demanding the models to be credible, thereby enabling him to make inductive inferences to the target systems.

But what gives license to this leap of faith, this “inductive inference”? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models in a strict sense are false (necessarily building in part on false assumptions), deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of “surrogate models”.

Models do not only face theory. They also have to look to the world. But being able to model a credible world – a world that somehow could be considered real or similar to the real world – is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance etc). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of “appropriate similarity and plausibility” (Pålsson Syll 2001:60). One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be far from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity are especially important when it comes to microfounded macromodels. It can never be enough that these models are internally consistent. One always also has to ask whether they are consistent with the data. Internal consistency without external validity is worth nothing.

“New Keynesian” macroeconomist Simon Wren-Lewis has an interesting post on his blog on these topics and how they may be related to ideology:

I want to raise [the] problem that some researchers might select facts on the basis of ideology. The example that I find most telling here is unemployment and Real Business Cycle models.

Why is a large part of macroeconomics all about understanding the booms and busts of the business cycle? The answer is obvious: the consequences of booms – rising inflation – and busts – rising unemployment – are large macroeconomic ‘bads’. No one disagrees about rising inflation being a serious problem. Almost no one disagrees about rising unemployment. Except, it would appear, the large number of macroeconomists who use Real Business Cycle (RBC) models to study the business cycle.

In RBC models, all changes in unemployment are voluntary. If unemployment is rising, it is because more workers are choosing leisure rather than work. As a result, high unemployment in a recession is not a problem at all. It just so happens that (because of a temporary absence of new discoveries) real wages are relatively low, so workers choose to work less and enjoy more free time. As RBC models do not say much about inflation, then according to this theory the business cycle is not a problem at all …

Now the RBC literature is very empirically orientated. It is all about trying to get closer to the observed patterns of cyclical variation in key macro variables. Yet what seems like a rather important fact about business cycles, which is that changes in unemployment are involuntary, is largely ignored. (By involuntary I mean the unemployed are looking for work at the current real wage, which they would not be under RBC theory.) There would seem to be only one defence of this approach (apart from denying the fact), and that is that these models could be easily adapted to explain involuntary unemployment, without the rest of the model changing in any important way. If this was the case, you might expect papers that present RBC theory to say so, but they generally do not …

What could account for this particular selective use of evidence? One explanation is ideological. The commonsense view of the business cycle, and the need to in some sense smooth this cycle, is that it involves a market failure that requires the intervention of a state institution in some form. If your ideological view is to deny market failure where possible, and therefore minimise a role for the state, then it is natural enough (although hardly scientific) to ignore inconvenient facts …

Do these biases matter? I think they do for two reasons. First, from a purely academic point of view they distort the development of the discipline. As I keep stressing, I do think the microfoundations project is important and useful, but that means anything that distorts its energies is a problem. Second, policy does rely on academic macroeconomics, and both the examples of bias that I use in this post … could have been the source of important policy errors.

## Normative multiculturalism, the “rule of law” and tolerance

23 April, 2012 at 13:16 | Posted in Politics & Society | 1 Comment

Should we, as citizens of a modern democratic society, tolerate the intolerant? Both yes and no.

People in our country who come from other countries or belong to groups of various kinds – whose kinsmen and co-religionists may be in power and rule with brutal intolerance – must of course be covered by our tolerance. But it is equally obvious that this tolerance only applies as long as the intolerance is not practised in our society.

Culture, identity, ethnicity, gender and religiosity must never be accepted as grounds for intolerance in political and civic matters. In a modern democratic society, people belonging to these various groups must be able to count on society also protecting them against the abuses of intolerance. All citizens must have the freedom and the right to question and to leave their own group. Towards those who do not accept that tolerance, we must be intolerant.

In Sweden we have long uncritically embraced an unspecified and undefined multiculturalism. If by multiculturalism we mean that there are several different cultures in our society, this poses no problem. Then we are all multiculturalists. But if by multiculturalism we mean that cultural belonging and identity also carry specific moral, ethical and political rights and obligations, we are talking about something entirely different. Then we are talking about normative multiculturalism. And accepting normative multiculturalism also means tolerating unacceptable intolerance, since normative multiculturalism implies that the rights of specific cultural groups may be given higher standing than the universal human rights of the individual citizen – and thereby indirectly become a defence of those groups’ (possible) intolerance. In a normatively multiculturalist society, institutions and rules can be used to restrict people’s freedom on the basis of unacceptable and intolerant cultural values.

Normative multiculturalism, just like xenophobia and racism, unacceptably reduces individuals to passive members of culture- or identity-bearing groups. But tolerance does not mean that we must take a value-relativist attitude towards identity and culture. Those who by their actions show that they do not respect other people’s rights cannot count on us being tolerant towards them. Those who want to use violence to force other people to submit to a particular group’s religion, ideology or “culture” are themselves responsible for the intolerance with which they must be met.

If we want to safeguard the achievements of modern democratic society, society must be intolerant towards intolerant normative multiculturalism. And then society cannot itself embrace a normative multiculturalism. In a modern democratic society the rule of law must apply – and apply to everyone!

Towards those in our society who want to force others to live according to their own religious, cultural or ideological beliefs and taboos, society must be intolerant. Towards those who want to force society to adapt its laws and rules to their own religion’s, culture’s or group’s interpretations, society must be intolerant. Towards those who are intolerant in action, we should not be tolerant.

Today my colleague at Malmö högskola, Pernilla Ouis, has an interesting opinion piece in Sydsvenska Dagbladet on “Islamophobia” and “structural discrimination” that touches on this problem. Ouis writes:

A discrimination case will be tried in court this week, a case concerning a young veiled woman who was denied a work placement at a hairdressing salon in Malmö with the argument “here we sell hair”. Many would no doubt point out the salon’s owner as an Islamophobe, but I want to direct attention to those who like to point, those who call themselves anti-racists. They are the ones who think it is right that a man is allowed to discriminate against a woman because he chooses to practise his religion in a certain way (the “handshake verdict”). These people think it is okay for a nursery-school teacher to wear a burqa when the fathers are present. They consider Muslims to be discriminated against in every context where the veil is questioned, whether it concerns rules of order or a company’s image.

To accept that a Muslim man cannot shake a woman’s hand is to equate this Muslim man’s choice with a disability. The discussion should have been about whether conventions must be binding, and who is to bear the consequences of individual choices. The principle of equal treatment has in its application become the principle of unequal treatment: everyone shall have the right to be treated differently, as an expression of equality.

Respect means that we see each other as equals and engage in a common conversation; not that a self-appointed group cherishes and protects The Others, silencing them in its benevolence.

A kind of “reverse” Islamophobia is to feel such respect for Muslims that they are not even regarded as worthy opponents in an active debate. Religious Islamists are thus proved right in everything, since no one bothers to discuss and argue with them. To me, this is a position marked by both fear of and contempt for Muslims – pure and simple Islamophobia, in other words.

## Uncertainty and Decision Making

21 April, 2012 at 23:02 | Posted in Economics, Statistics & Econometrics, Theory of Science & Methodology | Leave a comment

## Conditional Probability and Bayes’ Theorem

21 April, 2012 at 18:42 | Posted in Statistics & Econometrics | Leave a comment

## Non-ergodic economics, expected utility and the Kelly criterion

21 April, 2012 at 15:18 | Posted in Economics, Statistics & Econometrics | 3 Comments

Suppose I want to play a game. Let’s say we are tossing a coin. If heads comes up I win a dollar, and if tails comes up I lose a dollar. Suppose further that I believe I know that the coin is asymmetric and that the probability of getting heads (p) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is perfectly symmetric. How much of my bankroll (T) should I optimally invest in this game?

A strict neoclassical utility-maximizing economist would suggest that my goal should be to maximize the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer no. The risk of losing is so high that after only a few games I would, with high likelihood, have lost once and thereby gone bankrupt – the expected number of games until my first loss is 1/(1 – p), which in this case equals 1/0.4 = 2.5. The expected-value-maximizing economist does not seem to have a particularly attractive approach.
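This is easy to check numerically. Below is a minimal Python sketch (the function names and trial count are my own illustrative choices) that estimates how many games an all-in bettor survives on average:

```python
import random

def games_until_ruin(p, rng):
    """Bet the whole bankroll every round: the first loss is ruin.
    Returns the number of games played, including the losing one."""
    games = 1
    while rng.random() < p:  # win with probability p, keep playing
        games += 1
    return games

def average_ruin_time(p=0.6, trials=200_000, seed=42):
    rng = random.Random(seed)
    return sum(games_until_ruin(p, rng) for _ in range(trials)) / trials

print(average_ruin_time())  # close to the theoretical 1/(1 - 0.6) = 2.5
```

With p = 0.6 the simulated average comes out very close to the theoretical 2.5 games before ruin.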

So what’s the alternative? One possibility is to apply the so-called Kelly strategy – named after the American physicist and information theorist John L. Kelly, who in the article “A New Interpretation of Information Rate” (1956) suggested this criterion for optimizing the size of the bet – under which the optimum is to invest a specific fraction (x) of wealth (T) in each game. How do we arrive at this fraction?

When I win, I have (1 + x) times as much as before, and when I lose, (1 – x) times as much. After n rounds, having won v times and lost n – v times, my new bankroll (W) is

(1) W = (1 + x)^v (1 – x)^(n – v) T

The bankroll thus grows multiplicatively – “compound interest” – and the long-term average growth rate of my wealth can easily be calculated by taking the logarithm of (1), which gives

(2) log (W/ T) = v log (1 + x) + (n – v) log (1 – x).

If we divide both sides by n we get

(3) [log (W / T)] / n = [v log (1 + x) + (n – v) log (1 – x)] / n

The left hand side now represents the average growth rate (g) in each game. On the right hand side the ratio v/n is equal to the percentage of bets that I won, and when n is large, this fraction will be close to p. Similarly, (n – v)/n is close to (1 – p). When the number of bets is large, the average growth rate is

(4) g = p log (1 + x) + (1 – p) log (1 – x).

Now we can easily determine the value of x that maximizes g:

(5) d [p log (1 + x) + (1 – p) log (1 – x)]/dx = p/(1 + x) – (1 – p)/(1 – x) = 0 =>

(6) x = p – (1 – p)

Since p is the probability that I will win and (1 – p) the probability that I will lose, the Kelly strategy says that to optimize the growth rate of your bankroll (wealth) you should invest a fraction of it equal to the difference between the likelihoods of winning and losing. In our example this means that in each game I should bet the fraction x = 0.6 – (1 – 0.6) = 0.2 – that is, 20% of my bankroll. The optimal average growth rate becomes

(7) 0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02.

If I bet 20% of my wealth on each toss of the coin, my bankroll will after 10 games on average have grown to about 1.02^10 ≈ 1.22 times its starting value.
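The whole calculation fits in a few lines of Python (a sketch; the function names are my own, and natural logarithms are used, as in equations (4) and (7)):

```python
import math

def kelly_fraction(p):
    """Equation (6): optimal fraction x = p - (1 - p) for an even-money bet."""
    return p - (1 - p)

def growth_rate(p, x):
    """Equation (4): long-run average log-growth per game."""
    return p * math.log(1 + x) + (1 - p) * math.log(1 - x)

p = 0.6
x = kelly_fraction(p)         # 0.2: bet 20% of the bankroll
g = growth_rate(p, x)         # about 0.02 per game
after_ten = math.exp(10 * g)  # about 1.22 times the starting bankroll
print(x, g, after_ten)
```

Note that betting more than the Kelly fraction lowers g again: growth_rate(0.6, 0.4) is already negative territory compared with the optimum, which is why “overbetting” is ruinous in the long run.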

This strategy will in the long run give a better outcome than one built on the neoclassical theory of choice under uncertainty (risk) – expected-value maximization. If we bet all our wealth in each game we will almost surely lose our fortune, but because with low probability we end up with a very large fortune, the expected value is still high. For a real-life player – who has very little to gain from this kind of ensemble average – it is more relevant to look at the time average of what he may expect to win (in our game the two averages coincide only if we assume the player has a logarithmic utility function). What good does it do me that my coin-tossing maximizes an expected value if I may have gone bankrupt after four games? If I try to maximize the expected value, the probability of bankruptcy soon approaches one. Better, then, to invest 20% of my wealth in each game and maximize my long-term average wealth growth!
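The gap between the ensemble average and the typical player’s experience can be made concrete with a small simulation (again a sketch; the number of games, the number of players, and the seed are illustrative assumptions):

```python
import random

def simulate(fraction, p=0.6, games=5, players=10_000, seed=1):
    """Many players each bet a fixed fraction of wealth for `games` rounds.
    Returns (ensemble-average final wealth, median final wealth)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(players):
        w = 1.0
        for _ in range(games):
            w *= (1 + fraction) if rng.random() < p else (1 - fraction)
        finals.append(w)
    finals.sort()
    return sum(finals) / players, finals[players // 2]

# All-in: the ensemble average is high (near 1.2^5 = 2.49),
# but the median player is ruined (wealth 0).
print(simulate(1.0))
# Kelly (20%): a modest ensemble average, but the median player's wealth grows.
print(simulate(0.2))
```

The all-in strategy maximizes the expected value while sending the typical (median) player to zero; the Kelly fraction sacrifices expected value for growth that a single player actually experiences over time.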

On a more economic-theoretical level, the Kelly strategy highlights the problems with the neoclassical theory of expected utility that I have raised before (e.g. in Why expected utility theory is wrong).

In the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over those parallel universes. In our coin-toss example, it is as if various “I”s are tossing the coin, and the losses of many of them are offset by the huge profits that one of them makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we actually live.

The Kelly strategy gives a more realistic answer: it thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the “arrow of time” make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible), we have nothing to gain from thinking in terms of ensembles.

Actual events unfold along a fixed arrow of time, and are often linked in a multiplicative process (as e.g. investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – the Kelly criterion shows that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by simply assuming that time is irreversible. When the bankroll is gone, it’s gone. The fact that in a parallel universe it could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin-toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction (x) of his assets (T) should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater x is, the greater the leverage – but also the greater the risk. Since p is the probability that his investment valuation is correct and (1 – p) the probability that the market’s valuation is correct, the Kelly strategy says he optimizes the growth rate of his investments by investing a fraction of his assets equal to the difference between the probabilities that he will “win” or “lose”. In our example this means that at each investment opportunity he should invest the fraction x = 0.6 – (1 – 0.6) = 0.2, i.e. 20% of his assets. The optimal average growth rate of his investments is then, as before, about 2% (0.6 log (1.2) + 0.4 log (0.8)).

Kelly’s criterion shows that because we cannot go back in time, we should not take excessive risks: high leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risk – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in any policy that aims to curb excessive risk-taking.
