The euro — gold cage of our time

19 Jun, 2016 at 09:17 | Posted in Economics | 2 Comments


The euro has taken away the possibility for national governments to manage their economies in a meaningful way — and in Greece the people have had to pay the true costs of the misguided austerity policies that have accompanied it.


The unfolding of the Greek tragedy during the last couple of years has shown beyond any doubt that the euro is not only an economic project, but just as much a political one. What the neoliberal revolution during the 1980s and 1990s didn’t manage to accomplish, the euro shall now force on us.

But do the peoples of Europe really want to deprive themselves of economic autonomy, enforce lower wages and slash social welfare at the slightest sign of economic distress? Are increasing income inequality and a federal überstate really the stuff that our dreams are made of? I doubt it.

History ought to act as a deterrent. During the 1930s our economies didn’t come out of the depression until the folly of that time — the gold standard — was thrown into the dustbin of history. The euro will hopefully soon join it.

Mainstream economists have a tendency to get enthralled by their theories and models, and forget that behind the figures and abstractions there is a real world with real people. Real people that have to pay dearly for fundamentally flawed doctrines and recommendations.

Let’s make sure the consequences will rest on the conscience of those economists.


The confidence fairy history

18 Jun, 2016 at 17:01 | Posted in Economics | 1 Comment

Brad DeLong has an excellent presentation on the history of the confidence fairy up on his blog.

What I especially like about Brad’s history is that it makes crystal clear how hard it has been for mainstream economists to grasp a simple fact: no matter how much confidence you have in the policies pursued by the authorities, it cannot turn bad austerity policies into good job-creating policies. Austerity measures and an overzealous, simple-minded fixation on monetary measures and inflation are — almost without exception — not what it takes to get limping economies out of their limbo. They simply do not get us out of the ‘magneto trouble’ — and neither do budget-deficit discussions in which economists and politicians seem to think that cutting government budgets would help us out of recessions and slumps.

Although the ‘credibility’ that mainstream economists talk about arguably has some impact on the economy, the confidence fairy does not create recovery or offset the negative effects of Alesina-style ‘expansionary fiscal austerity.’ In a situation where monetary policy has become more and more decrepit, the solution is not fiscal austerity, but fiscal expansion!

David Andolfatto’s DSGE flimflam

15 Jun, 2016 at 17:03 | Posted in Economics | 1 Comment

David Andolfatto — vice president at the Federal Reserve Bank of St. Louis — has a post up on his blog trying to defend the much criticised use of DSGE models. According to Andolfatto

‘to help organize our thinking’, it is often useful to construct mathematical representations of our theories — not as a substitute, but as a complement to the other tools in our tool kit (like basic intuition).
This is a useful exercise if for no other reason than it forces us to make our assumptions explicit, at least, for a particular thought experiment. We want to make the theory transparent (at least, for those who speak the trade language) and therefore easy to criticize.

We’ve heard this line of ‘defence’ before, and it’s as unconvincing as ever. But as extra ammunition in defending DSGE for policy issues, Andolfatto refers us to an interview with Nobel laureate Tom Sargent.

So let’s see if there’s anything in that interview that would make us believe in this ‘help us organise our thinking’ fairytale. Sargent gives the following defense of ‘modern macro’ (my emphasis):

Sargent: I know that I’m the one who is supposed to be answering questions, but perhaps you can tell me what popular criticisms of modern macro you have in mind.

Rolnick: OK, here goes. Examples of such criticisms are that modern macroeconomics makes too much use of sophisticated mathematics to model people and markets; that it incorrectly relies on the assumption that asset markets are efficient in the sense that asset prices aggregate information of all individuals; that the faith in good outcomes always emerging from competitive markets is misplaced; that the assumption of “rational expectations” is wrongheaded because it attributes too much knowledge and forecasting ability to people; that the modern macro mainstay “real business cycle model” is deficient because it ignores so many frictions and imperfections and is useless as a guide to policy for dealing with financial crises; that modern macroeconomics has either assumed away or shortchanged the analysis of unemployment; that the recent financial crisis took modern macro by surprise; and that macroeconomics should be based less on formal decision theory and more on the findings of “behavioral economics.” Shouldn’t these be taken seriously?

Sargent: Sorry, Art, but aside from the foolish and intellectually lazy remark about mathematics, all of the criticisms that you have listed reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished. That said, it is true that modern macroeconomics uses mathematics and statistics to understand behavior in situations where there is uncertainty about how the future will unfold from the past. But a rule of thumb is that the more dynamic, uncertain and ambiguous is the economic environment that you seek to model, the more you are going to have to roll up your sleeves, and learn and use some math. That’s life.

Are these the words of an ’empirical’ and ‘transparent’ macroeconomist? I’ll be dipped! To me it sounds like the same old axiomatic-deductivist mumbo-jumbo that parades as economic science today.

Mainstream economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the real economic system. This modeling activity is — as both Andolfatto and Sargent give ample evidence of — considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of the other part of the model-target dyad.

Mainstream economics — and especially that of the Chicago ilk — has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence plays only a minor role in economic theory, where models largely function as a substitute for empirical evidence. But, hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuit in economics will give way to methodological pluralism based on relevance and realism rather than formalistic tractability.

I remember attending the first lecture in Tom Sargent’s evening macroeconomics class back when I was an undergraduate: a very smart man from whom I have learned an enormous amount, and well deserving of his Nobel Prize. But…

He said … we were going to build a rigorous, micro-founded model of the demand for money: We would assume that everyone lived for two periods, worked in the first period when they were young and sold what they produced to the old, held money as they aged, and then when they were old used their money to buy the goods newly produced by the new generation of young. Tom called this “microfoundations” and thought it gave powerful insights into the demand for money that you could not get from money-in-the-utility-function models.

I thought that it was a just-so story, and that whatever insights it purchased for you were probably not things you really wanted to buy. I thought it was dangerous to presume that you understood something because you had “microfoundations” when those microfoundations were wrong. After all, Ptolemaic astronomy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…

Brad DeLong

Andolfatto — not to mention Sargent — seems to be impressed by the ‘rigour’ brought to macroeconomics by new classical DSGE models, with their rational expectations, microfoundations and ‘Lucas critique’.

It is difficult to see why.

Take the rational expectations assumption, for example. Rational expectations in the mainstream economists’ world implies that relevant distributions have to be time-independent. This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality, it strains one’s beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time. It is — to say the least — very difficult to see any similarity between these modelling assumptions and the expectations of real persons.

In the world of the rational expectations hypothesis we are never disappointed in any other way than when we lose at the roulette wheel. But real life is not an urn or a roulette wheel. And that is also the reason why allowing for cases where agents make ‘predictable errors’ in DSGE models does not take us any closer to a relevant and realistic depiction of actual economic decisions and behaviours. If we really want to say anything of interest about real economies, financial crises, and the decisions and choices real people make, we have to replace the rational expectations hypothesis with more relevant and realistic assumptions about economic agents and their expectations than childish roulette and urn analogies.
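The ensemble-versus-single-realization point lends itself to a small simulation. The sketch below is a toy multiplicative-growth process of my own (all numbers invented for illustration): the average over a large ensemble of parallel ‘economy-worlds’ looks healthy, while the typical single realization withers away.

```python
import random
import statistics

rng = random.Random(42)

def one_path(steps):
    """One 'economy-world': output is multiplied by 1.5 or 0.6 (equal odds)
    each period. The expected factor is 1.05 > 1, but the typical
    (geometric-mean) factor is sqrt(1.5 * 0.6) ~ 0.95 < 1: non-ergodic."""
    w = 1.0
    for _ in range(steps):
        w *= 1.5 if rng.random() < 0.5 else 0.6
    return w

# An ensemble of 100,000 parallel worlds, each observed after 100 periods.
ensemble = [one_path(100) for _ in range(100_000)]

# The arithmetic ensemble average is propped up by a few lucky worlds,
# while the median world has all but vanished.
print("ensemble mean:", statistics.mean(ensemble))
print("median world: ", statistics.median(ensemble))
```

The ensemble average and the typical trajectory answer different questions, and only controlled replication, which no actual economy offers, licenses sliding between them.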

‘Rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence has ever been presented.


No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward one single millimeter if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about real-world economies.

Proving things ‘rigorously’ in DSGE models is at most a starting-point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

Mainstream broken pieces models

15 Jun, 2016 at 15:06 | Posted in Economics | 3 Comments

In economic modeling, people often just assume an objective function for one agent or another, throw that into a larger model, and then look only at some subset of the model’s overall implications. But that’s throwing away data …

And in doing so, it dramatically lowers the empirical bar that a model has to clear. You’re essentially tossing a ton of broken, wrong structural assumptions into a model and then calibrating (or estimating) the parameters to match a fairly small set of things, then declaring victory. But because you’ve got the structure wrong, the model will fail and fail and fail as soon as you take it out of sample, or as soon as you apply it to any data other than the few things it was calibrated to match.

Use broken pieces, and you get a broken machine …

Dani Rodrik, when he talks about these issues, says that unrealistic assumptions are only bad if they’re ‘critical’ assumptions – that is, if changing them would change the model substantially. It’s OK to have non-critical assumptions that are unrealistic, just like a car will still run fine even if the cup-holder is cracked. That sounds good. In principle I agree. But in practice, how the heck do you know in advance which assumptions are critical? You’d have to go check them all, by introducing alternatives for each and every one (actually every combination of assumptions, since model features tend to interact). No one is actually going to do that. It’s a non-starter.

The real solution, as I see it, is not to put any confidence in models with broken pieces.

Noah Smith

No indeed, there’s no reason whatsoever to trust those models and their defenders.
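Smith’s point about calibrating a structurally wrong model can be made concrete with a deliberately trivial sketch: below, a linear model is ‘calibrated’ to data generated by a quadratic process (unknown to the modeller) on a narrow in-sample range, and then taken out of sample. Everything here is invented for illustration.

```python
def true_process(x):
    """The actual data-generating structure, unknown to the modeller."""
    return x * x

def calibrate_linear(xs, ys):
    """Ordinary least squares for the (wrong) structure y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

in_sample = [x / 10 for x in range(11)]            # x in [0, 1]
a, b = calibrate_linear(in_sample, [true_process(x) for x in in_sample])

def rmse(xs):
    """Root-mean-square error of the calibrated model on a set of points."""
    return (sum((a + b * x - true_process(x)) ** 2 for x in xs) / len(xs)) ** 0.5

out_of_sample = [1 + x / 10 for x in range(11)]    # x in [1, 2]
print("in-sample RMSE:    ", rmse(in_sample))      # looks fine
print("out-of-sample RMSE:", rmse(out_of_sample))  # falls apart
```

Matching a handful of in-sample moments says nothing about the model once it leaves the range it was calibrated to, which is Smith’s ‘fail and fail and fail’ out of sample.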

Dani Rodrik’s Economics Rules describes economics as a more or less problem-free smorgasbord of models. Economics is portrayed as advancing through a judicious selection from a continually expanding library of models, models that are presented as ‘partial maps’ or ‘simplifications designed to show how specific mechanisms work.’

But one of the things that’s missing in Rodrik’s view of economic models is the all-important distinction between core and auxiliary assumptions. Although Rodrik repeatedly speaks of ‘unrealistic’ or ‘critical’ assumptions, he basically — as Noah Smith rightly remarks — just lumps them all together without differentiating between different types of assumptions, axioms or theorems.

Modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what I will call the ur-model (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of:

CA1 Completeness — a rational actor is able to compare different alternatives and decide which one(s) he prefers

CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.

CA3 Non-satiation — more is preferred to less.

CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.

CA5 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.

When the actors in these models are described as rational, the concept of rationality used is instrumental rationality: consistently choosing the preferred alternative, the one judged to have the best consequences for the actor given the wishes/interests/goals that are exogenously given in the model. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not part of economics proper.

The picture given by this set of core assumptions (rational choice) is of a rational agent with strong cognitive capacity who knows what alternatives he is facing, evaluates them carefully, calculates the consequences, and chooses the one that he believes — given his preferences — has the best consequences.

Weighing the different alternatives against each other, the actor makes a consistent optimizing (typically described as maximizing some kind of utility function) choice, and acts accordingly.
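For readers who want the core assumptions in executable form, here is a minimal sketch (my own toy example; the lotteries and utility functions are made up) of the CA1–CA4 actor: preferences are represented by a utility function, and choice is expected-utility maximization. Completeness and transitivity come for free, since real-valued expected utilities can always be compared.

```python
import math

def expected_utility(lottery, utility):
    """A lottery is a list of (probability, outcome) pairs (CA4)."""
    return sum(p * utility(x) for p, x in lottery)

def choose(alternatives, utility):
    """Pick the alternative with the highest expected utility. Comparing
    real numbers is complete (CA1) and transitive (CA2) by construction."""
    return max(alternatives,
               key=lambda name: expected_utility(alternatives[name], utility))

def log_utility(x):
    """Concave and increasing: more is preferred to less (CA3), with risk aversion."""
    return math.log(1 + x)

alternatives = {
    "safe":   [(1.0, 50)],             # 50 for sure
    "gamble": [(0.5, 0), (0.5, 100)],  # same expected value, more risk
}

print(choose(alternatives, log_utility))      # the risk-averse actor picks "safe"
print(choose(alternatives, lambda x: x * x))  # a risk-loving actor picks "gamble"
```

Making the axioms this explicit shows that the entire ‘rationality’ of the actor sits in the utility function and the lottery description, both of which are handed to him from outside the model.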

Beside the core assumptions (CA) the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that take place in the model. These assumptions can be seen as giving answers to questions such as

AA1 who are the actors and where and when do they act

AA2 which specific goals do they have

AA3 what are their interests

AA4 what kind of expectations do they have

AA5 what are their feasible actions

AA6 what kind of agreements (contracts) can they enter into

AA7 how much and what kind of information do they possess

AA8 how do the actions of the different individuals/agents interact with each other.

So the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (AA thus serving as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often silent) omissions (like closure, transaction costs, etc., regularly based on negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions will be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

But in Rodrik’s model depiction we are essentially given the following structure,

A1, A2, … An → Theorem,

where a set of undifferentiated assumptions is used to infer a theorem.

This is, however, too vague and imprecise to be helpful, and does not give a true picture of the usual mainstream modeling strategy, where there’s a differentiation between a set of law-like hypotheses (CA) and a set of auxiliary assumptions (AA), giving the more adequate structure

CA1, CA2, … CAn & AA1, AA2, … AAn → Theorem,

or, more precisely,

CA1, CA2, … CAn → [(AA1, AA2, … AAn) → Theorem],

more clearly underlining the function of AA as a set of (empirical, spatio-temporal) restrictions on the applicability of the deduced theorems.

This underlines the fact that specification of AA restricts the range of applicability of the deduced theorem. In the extreme cases we get

CA1, CA2, … CAn → Theorem,

where the deduced theorems are analytical entities with universal and totally unrestricted applicability, or

AA1, AA2, … AAn → Theorem,

where the deduced theorem is transformed into an untestable tautological thought-experiment without any empirical commitment whatsoever beyond telling a coherent fictitious as-if story.

Not clearly differentiating between CA and AA means that Rodrik can’t make this all-important interpretative distinction, which opens the door to unwarrantedly ‘saving’ or ‘immunizing’ models from almost any kind of critique by simply equivocating between interpreting models as empirically empty, purely deductive-axiomatic analytical systems and interpreting them as models with explicit empirical aspirations. Flexibility is usually something people deem positive, but in this methodological context it is more a sign of trouble than of real strength. Models that are compatible with everything, or come with unspecified domains of application, are worthless from a scientific point of view.

Mainstream macro models are nothing but broken pieces models — and as Noah Smith puts it — there is no reason for us ‘to put any confidence in models with broken pieces.’

Economics — spending time doing silly things

14 Jun, 2016 at 19:26 | Posted in Economics | 3 Comments

Mainstream macro theory has not really helped out the finance industry, the Fed, or coffee house discussions. The reason … is basically that DSGE models don’t work …

But [if] DSGE models really don’t work, why do so many macroeconomists spend so much time on them? …

What if it’s signaling? …

That suspicion was probably planted in 2005 … by a Japanese economist I knew … He gave me his advice on how to have an econ career: “First, do some hard math thing, like functional analysis. Then everyone will know you’re smart, and you can do easy stuff” …

I then watched a number of my grad school classmates go into macroeconomics. Their job market papers all were mainly theory papers, though – in keeping with typical macro practice – they had an empirical section that was usually closely related to the theory. The models all struck me as hopelessly unrealistic and silly, of course, and in private my classmates – the ones I talked to – agreed that this was the case, and said lots of mean things about DSGE modeling in general, basically saying “This is the game we have to play.” …

If macroeconomics research is a coordination game, and if the prevailing research paradigm is not really better than alternatives, then you probably want macroeconomists who are willing to “play the game”, as it were. So DSGE might be an expensive way of proving that you’re willing to spend a lot of time and effort doing silly stuff that the profession tells you to do.

Noah Smith

NAIRU — a long-run equilibrium slippery eel

14 Jun, 2016 at 17:09 | Posted in Economics | 1 Comment

The concept of equilibrium, of course, is an indispensable tool of analysis … But to use the equilibrium concept one has to keep it in place, and its place is strictly in the preliminary stages of an analytical argument, not in the framing of hypotheses to be tested against the facts, for we know perfectly well that we shall not find facts in a state of equilibrium. Yet many writers seem to conceive of the long period as a date somewhere in the future that we shall get to some day …

Long-run equilibrium is a slippery eel. Marshall evidently intended to mean by the long period a horizon which is always at a certain distance in the future, and this is a useful metaphor; but he slips into discussing a position of equilibrium which is shifted by the very process of approaching it and he got himself into a thorough tangle by drawing three-dimensional positions on a plane diagram.

No one would deny that to speak of a tendency towards equilibrium that itself shifts the position towards which it is tending is a contradiction in terms. And yet it still persists. It is for this reason that we must attribute its survival to some kind of psychological appeal that transcends reason.


The existence of long-run equilibrium is a very handy modeling assumption to use. But that does not make it easily applicable to real-world economies. Why? Because it is basically a timeless concept utterly incompatible with real historical events. In the real world it is the second law of thermodynamics and historical — not logical — time that rules.

Is long-run equilibrium really a good guide to macroeconomic policy? Friedman’s NAIRUvian long run and the more strictly classical natural rate, based on rational expectations, are certainly beguiling. But are they relevant? Information may be asymmetric. Competition may be monopolistic. Nonlinearities and even chaos are possible. Equilibria may be multiple or continuous. In such cases, the long-run equilibrium may be undetermined or incalculable or beyond achievement. To put it another way, the future may be inherently unpredictable. Here, the political scientists with their concept of “rational ignorance” may have something to teach economists.

James K. Galbraith

Paul Krugman — mistaking the map for the territory

14 Jun, 2016 at 09:56 | Posted in Economics | 1 Comment

Paul Krugman has — together with Robin Wells — written an economics textbook that is used all over the world. Like all the rest of the mainstream economics textbooks, it stresses from the very first pages the importance of supplying the student with a systematic way of thinking through economic problems with the help of simple models.

Modeling is all about simplification …

A model is a simplified representation of reality that is used to better understand real-life situations …

The importance of models is that they allow economists to focus on the effects of only one change at a time …

For many purposes, the most effective form of economic modeling is the construction of ‘thought experiments’: simplified, hypothetical versions of real-life situations …

And these kinds of rather vacuous ‘simplicity’ and ‘understanding’ statements are repeated — almost ad nauseam — over and over again in the book.

For someone genuinely interested in economic methodology and science theory it is definitely difficult to swallow Krugman’s methodological stance, and especially his non-problematized acceptance of the need for simple models.

To Krugman modeling is a logical way to analytically isolate different variables/causes/mechanisms operating in an economic system. Simplifying a complex world makes it possible for him to ‘tell a story’ about the economy.

Is not the use of abstractions a legitimate tool of economics? No doubt — it is only that all abstractions are not equally correct. An abstraction consists of isolating a part of reality, not in making it disappear.

Emile Durkheim

What is missing from Krugman’s model picture is an explanation of how and in what way his simplifications increase our understanding — and of what. Whether a model is good or bad is mostly not a question of simplicity, but of whether the assumptions on which it builds are valid and sound, or just something we choose in order to make the model (mathematically) tractable.

Assumptions may make the model rigorous and consistent from a logical point of view, but that is of little avail if the consistency is bought at the price of not giving a truthful representation of the real economic system.

The model may be not only simple but oversimplified, making it quite useless for explanations and predictions.

The theories economists typically put forth about how the whole economy works are too simplistic.

George Akerlof & Robert Shiller

Throughout his discussion of models, Krugman assumes that they ‘allow economists to focus on the effects of only one change at a time.’ This assumption is of paramount importance and really ought to be much more argued for — on both epistemological and ontological grounds — if at all being used.

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under ceteris paribus conditions, since assumptions that hold only ceteris paribus are a fortiori of limited value to our understanding, explanation, and prediction of real economic systems.

The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that neither Krugman nor the legions of other mainstream textbook authors give supportive evidence for considering it fruitful to analyze complex and interrelated economic systems ‘one part at a time.’ For although this atomistic hypothesis may have been useful in the natural sciences, it usually breaks down completely when applied to the social sciences. Dubious simplifying approximations do not take us one single iota closer to understanding or explaining open social and economic systems.

The kind of relations that Krugman and other mainstream economists establish with their ‘thought experimental’ modeling strategy are only relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of the parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent. Unfortunately, that also makes most of the mainstream modeling achievements rather useless.

All empirical sciences use simplifying or ‘unrealistic’ assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to confront directly with reality. Economists therefore build models of their theories — representations that are directly examined and manipulated in order to say something, indirectly, about the target systems. But models do not only face theory; they also have to look to the world. Being able to model a ‘credible world’ — Krugman’s ‘thought experiment’ — a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still serve our pursuit of truth. But then they cannot be unrealistic or false in just any way: the falsehood or unrealism has to be qualified.

Some of the standard assumptions made in mainstream economic theory — on rationality, information handling and types of uncertainty — cannot be made more realistic by ‘de-idealization’ or ‘successive approximations’ without altering the theory and its models fundamentally. And still there is not a single mention of this limitation in Krugman’s textbook!

From a methodological perspective, Krugman’s economics textbook — like those of Mankiw et consortes — is a rather unimpressive attempt at legitimizing the use of fictitious idealizations, for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.

Krugman’s textbook and its simplicity preaching shows that mainstream economics has become increasingly irrelevant to the understanding of the real world. The main reason for this irrelevance is the failure of mainstream economists to match their deductive-axiomatic methods with their subject.

It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models — as long as mainstream economists do not come up with any export licenses for them to the real world in which we live — is beyond my imagination. Sure, the simplicity that axiomatics and analytical arguments bring to economics is attractive to most economists, but simplicity obviously has its perils. Simplicity is great when solving models; it is quite another thing to assume that reality conforms to that tractability prerequisite.

Krugman’s and other mainstream economists’ textbooks are sad reading. Both theoretically and methodologically they are exponents of an ideology that seems to say that as long as theories and hypotheses can be transformed into simple mathematical models, everything is just fine. As yours truly has tried to argue, there is actually no reason — other than pure hope — for believing this. The lack of methodological reflection in these books not only gets things wrong but, even worse, makes economics absolutely irrelevant when it comes to explaining and understanding real economies.

The anatomy of stock market bubbles

14 Jun, 2016 at 09:28 | Posted in Economics | 1 Comment


NAIRU religion

13 Jun, 2016 at 09:45 | Posted in Economics | 1 Comment

Having concluded seven years as chief economist at the IMF, Olivier Blanchard is now considering rewriting his undergraduate macroeconomics textbook:

How should we teach macroeconomics to undergraduates after the crisis? Here are some of my conclusions …

Turning to the supply side, the contraption known as the aggregate demand–aggregate supply model should be eliminated. It is clunky and, for good reasons, undergraduates find it difficult to understand … These difficulties are avoided if one simply uses a Phillips Curve (PC) relation to characterize the supply side. Potential output, or equivalently, the natural rate of unemployment, is determined by the interaction between wage setting and price setting. Output above potential, or unemployment below the natural rate, puts upward pressure on inflation. The nature of the pressure depends on the formation of expectations, an issue central to current developments. If people expect inflation to be the same as in the recent past, pressure takes the form of an increase in the inflation rate. If people expect inflation to be roughly constant as seems to be the case today, then pressure takes the form of higher—rather than increasing—inflation. What happens to the economy, whether it returns to its historical trend, then depends on how the central bank adjusts the policy rate in response to this inflation pressure.

Hmm …

One of the main problems with NAIRU — what Blanchard refers to as ‘the natural rate of unemployment’ — is that it is essentially seen as a timeless long-run equilibrium attractor to which actual unemployment (allegedly) has to adjust. But if that equilibrium is itself changing — and in ways that depend on the process of getting to the equilibrium — then we can’t really know what that equilibrium will be without contextualizing unemployment in real historical time. And when we do, we will see how seriously wrong we go if we omit demand from the analysis. Demand policy has long-run effects and matters also for structural unemployment — and governments and central banks can’t just look the other way and legitimize their passivity re unemployment by referring to NAIRU.

NAIRU does not hold water simply because it does not exist, and to base economic policy — or textbook models — on such a weak theoretical and empirical construct is nothing short of writing out a prescription for self-inflicted economic havoc.

According to the [NAIRU theory], unemployment differs from its natural rate only if expected inflation differs from actual inflation. If expectations are rational, we should see as many quarters when inflation is above expected inflation as quarters when it is below expected inflation. That suggests the following test of the [NAIRU theory] …

Because a decade contains 40 quarters, the probability that average expected inflation over a decade will be different from average actual inflation should be small. If the [NAIRU theory] and rational expectations are both true simultaneously, a plot of decade averages of inflation against unemployment should reveal a vertical line at the natural rate of unemployment … This prediction fails dramatically.

There is no tendency for the points to lie around a vertical line and, if anything, the long-run Phillips curve is upward sloping, and closer to being horizontal than vertical. Since it is unlikely that expectations are systematically biased over decades, I conclude that the [NAIRU theory] is false.

Defenders of the [NAIRU theory] might choose to respond to these empirical findings by arguing that the natural rate of unemployment is time varying. But I am unaware of any theory which provides us, in advance, with an explanation of how the natural rate of unemployment varies over time. In the absence of such a theory the [NAIRU theory] has no predictive content. A theory like this, which cannot be falsified by any set of observations, is closer to religion than science.

Roger Farmer

Is ‘Cauchy logic’ applicable to economics?

12 Jun, 2016 at 11:26 | Posted in Statistics & Econometrics | Comments Off on Is ‘Cauchy logic’ applicable to economics?

What is 0.999 …, really?

It appears to refer to a kind of sum:

0.9 + 0.09 + 0.009 + 0.0009 + …

But what does that mean? That pesky ellipsis is the real problem. There can be no controversy about what it means to add up two, or three, or a hundred numbers. But infinitely many? That’s a different story. In the real world, you can never have infinitely many heaps. What’s the numerical value of an infinite sum? It doesn’t have one — until we give it one. That was the great innovation of Augustin-Louis Cauchy, who introduced the notion of limit into calculus in the 1820s.

The British number theorist G. H. Hardy … explains it best: “It is broadly true to say that mathematicians before Cauchy asked not, ‘How shall we define 1 – 1 + 1 – …?’ but ‘What is 1 – 1 + 1 – …?’”

No matter how tight a cordon we draw around the number 1, the sum will eventually, after some finite number of steps, penetrate it, and never leave. Under those circumstances, Cauchy said, we should simply define the value of the infinite sum to be 1.
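Cauchy’s move is easy to make concrete. A few lines of Python (my illustration, not Ellenberg’s) show the partial sums closing in on 1, the gap shrinking by a factor of ten at every step:

```python
def partial_sum(n):
    """Sum of the first n terms of 0.9 + 0.09 + 0.009 + ..."""
    return sum(9 * 10 ** (-k) for k in range(1, n + 1))

# No matter how tight a cordon we draw around 1, some finite partial
# sum penetrates it and never leaves: the gap after n terms is 10**(-n).
for n in (1, 2, 5, 15):
    print(n, partial_sum(n))
```

Every partial sum is strictly below 1; only the *defined* limit of the sequence equals 1.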

I have no problem with solving problems in mathematics by defining them away. But how about the real world? Maybe that ought to be a question to consider even for economists all too fond of uncritically following the mathematical way when applying their models to the real world, where indeed ‘you can never have infinitely many heaps’ …

In econometrics we often run into the ‘Cauchy logic’ — the data is treated as if it came from a larger population, a ‘superpopulation’ where repeated realizations of the data are imagined. Just imagine there could be more worlds than the one we live in and the problem is fixed …

Accepting Haavelmo’s domain of probability theory and sample space of infinite populations — just like Fisher’s ‘hypothetical infinite population,’ of which the actual data are regarded as constituting a random sample, von Mises’s ‘collective’ or Gibbs’s ‘ensemble’ — also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world, so that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. Just like the Cauchy mathematical logic of defining problems away, it is simply not tenable.

In economics it’s always wise to remember C. S. Peirce’s remark that universes are not as common as peanuts …

Via con me

12 Jun, 2016 at 10:37 | Posted in Varia | Comments Off on Via con me


Ergodicity and the wrong way to calculate expectations (wonkish)

11 Jun, 2016 at 15:54 | Posted in Economics | Comments Off on Ergodicity and the wrong way to calculate expectations (wonkish)

If there was one thing I believed was a reasonable implicit assumption of economics, it was determining the expectation value upon which agents base their decisions as the “ensemble mean” of a large number of draws from a distribution …

But now I’m not so sure …

Rolling a dice is a good example. The expected distribution of outcomes from rolling a single dice in a 10,000 roll sequence is the same as the expected distribution of rolling 10,000 dice once each. That process is ergodic.

But many processes are not like this. You cannot just keep playing over time and expect to converge to the mean …

You start with a $100 balance. You flip a coin. Heads means you win 50% of your current balance. Tails means you lose 40%. Then repeat.

Taking the ensemble mean entails reasoning by way of imagining a large number of coin flips at each time period and taking the mean of these fictitious flips. That means the expectation value based on the ensemble mean of the first coin toss is (0.5 x $50 + 0.5 x −$40) = $5, or a 5% gain. Using this reasoning, the expectation for the second sequential coin toss is (0.5 x $52.50 + 0.5 x −$42) = $5.25 (another 5% gain).

The ensemble expectation is that this process will generate a 5% compound growth rate over time.

But if I start this process and keep playing long enough over time, I will never converge to that 5% expectation. The process is non-ergodic …

In fact, out of the 20,000 runs in my simulation, 17,000 lost money over the 100 time periods, having a final balance less than their $100 starting balance. Even more starkly, more than half the runs had less than $1 after 100 time periods …

So if almost everybody loses from this process, how can the ensemble mean of 5% compound growth be a reasonable expectation value? It cannot. For someone who is only going to experience a single path through a non-ergodic process, basing your behaviour on an expectation using the ensemble mean probably won’t be an effective way to navigate economic variations.

Cameron Murray

Cameron Murray is absolutely right — and it is of fundamental importance for economists to have a clear view of the issue of ergodicity.
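Murray’s game is easy to replicate. Here is a minimal Python sketch of my own, using his stated parameters (start at $100, heads gains 50%, tails loses 40%, 100 periods, 20,000 runs):

```python
import random

def run_path(periods=100, start=100.0, rng=random):
    """One path of the game: heads -> +50% of balance, tails -> -40%."""
    balance = start
    for _ in range(periods):
        balance *= 1.5 if rng.random() < 0.5 else 0.6
    return balance

random.seed(1)
finals = sorted(run_path() for _ in range(20_000))

# Ensemble view: the mean balance grows 5% per period ...
print(f"theoretical ensemble mean: ${100 * 1.05 ** 100:,.0f}")
# ... yet the typical path shrinks ~5% per period (sqrt(1.5 * 0.6) ~ 0.95).
print(f"median final balance: ${finals[len(finals) // 2]:.2f}")
print(f"losing runs: {sum(f < 100 for f in finals)} of 20,000")
```

The ensemble mean is enormous, yet the median path ends up with well under a dollar, and the overwhelming majority of runs lose money — exactly the gap between ensemble and time perspectives.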

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average. Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick either of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — [H1 + … + Hn]/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 x 1/2 + 1/2 x 1/4 = 3/8, which obviously is not equal to 1/2 or 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
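The two-coin example can be checked numerically. In this sketch (the run lengths and path counts are my own choices), every simulated path’s time average ends up near 1/2 or near 1/4, while the average across paths hovers near the expectational value 3/8, a value no single path converges to:

```python
import random

def time_average(n, rng):
    """Pick coin A (P(heads)=1/2) or coin B (P(heads)=1/4) once,
    then toss that same coin n times and return the share of heads."""
    p_heads = 0.5 if rng.random() < 0.5 else 0.25
    return sum(rng.random() < p_heads for _ in range(n)) / n

rng = random.Random(42)
paths = [time_average(10_000, rng) for _ in range(200)]

near_half = sum(abs(a - 0.5) < 0.05 for a in paths)
near_quarter = sum(abs(a - 0.25) < 0.05 for a in paths)
ensemble = sum(paths) / len(paths)
print(near_half, near_quarter, round(ensemble, 3))
```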

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori treat them, in any relevant sense, as timeless – is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

Ergodicity and the all-important difference between time averages and ensemble averages are difficult concepts that many students of economics have problems understanding. So let me try to give yet another explanation of the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and have to pay €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’re taught is the only thing consistent with being rational. You would arrest the arrow of time by imagining six different “parallel universes” where the independent outcomes are the numbers from one to six, and then weight them using their stochastic probability distribution. Calculating the expected value of the gamble – the ensemble average – by averaging over all these weighted outcomes, you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 5/6 x (–€1 billion) + 1/6 x €10 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting your economy is not a very rosy prospect for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs heavier than the 17% chance of becoming enormously rich. By computing the time average – imagining one real universe where the six different but dependent outcomes occur consecutively – we would soon be aware of our assets disappearing, and a fortiori that it would be irrational to accept the gamble.
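The numbers in the die gamble are easy to pin down. A small Python check (my own illustration, not part of the original argument) contrasts the rosy ensemble expectation with the single-play risk that dominates in real time:

```python
from fractions import Fraction

# Ensemble view: win EUR 10 billion on a six, pay EUR 1 billion otherwise.
ev = Fraction(1, 6) * 10 + Fraction(5, 6) * (-1)   # in billions
print(ev, "billion =", round(float(ev), 2))        # 5/6 ~ 0.83

# Time view: if EUR 1 billion is all you have, a single roll bankrupts
# you with probability 5/6 -- and bust is an absorbing state, so the
# attractive ensemble average is never realised along your one path.
p_bust = Fraction(5, 6)
print(round(float(p_bust), 2))
```

A positive expected value across imagined parallel universes is no consolation on the single path where the first bad roll wipes you out.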

[From a mathematical point of view you can (somewhat non-rigorously) describe the difference between ensemble averages and time averages as a difference between arithmetic averages and geometric averages. Tossing a fair coin and gaining 20% on the stake (S) if winning (heads) and having to pay 20% on the stake (S) if losing (tails), the arithmetic average of the return on the stake, assuming the outcomes of the coin toss are independent, would be [(0.5 x 1.2S + 0.5 x 0.8S) – S]/S = 0%. If the two outcomes of the toss are considered not to be independent, the relevant time average would be a geometric average return of √(1.2S x 0.8S)/S – 1 ≈ –2%.]
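The bracketed note can be verified in a couple of lines (the stake S cancels out, so it is set to 1 here):

```python
import math

up, down = 1.2, 0.8   # +20% on heads, -20% on tails

arithmetic = 0.5 * up + 0.5 * down - 1   # ensemble (arithmetic) average return
geometric = math.sqrt(up * down) - 1     # time (geometric) average return

print(f"{arithmetic:+.4f}")   # zero: the ensemble view sees a fair game
print(f"{geometric:+.4f}")    # ~ -2% per toss: the time view sees decay
```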

Why is the difference between ensemble and time averages of such importance in economics? Well, basically, because when assuming the processes to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because we here envision two parallel universes (markets) where the asset price falls in one universe (market) by 50% to €50, and in another universe (market) goes up by 50% to €150, giving an average of €100 ((150 + 50)/2). The time average for this asset would be €75 – because we here envision one universe (market) where the asset price first rises by 50% to €150 and then falls by 50% to €75 (0.5 x 150).
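In code, the €100 asset example looks like this (a direct transcription of the two calculations above):

```python
price = 100.0
up, down = 1.5, 0.5   # a +50% move and a -50% move

# Ensemble average: two parallel markets, one move each.
ensemble = (price * up + price * down) / 2   # (150 + 50) / 2

# Time average: one market, both moves in sequence.
sequential = price * up * down               # 150 * 0.5

print(ensemble, sequential)
```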

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.

The difference between ensemble and time averages also highlights — as Murray’s post shows — the problems concerning the neoclassical theory of expected utility (something I have touched upon e. g. in Why expected utility theory is wrong).

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over these “parallel universes.” In our coin-tossing example, it is as if one supposes that various “I”s are tossing a coin and that the losses of many of them will be offset by the huge profits one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

Time averages give a more realistic answer, where one thinks in terms of the only universe we actually live in and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as e. g. investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When your assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin-toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction of his assets should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the fraction, the greater the leverage – but also the greater the risk. Letting p be the probability that his investment valuation is correct and (1 – p) the probability that the market’s valuation is correct, he optimizes the rate of growth of his investments by investing a fraction of his assets equal to the difference between the probabilities that he will “win” and “lose.” This means that at each investment opportunity (according to the so-called Kelly criterion) he should invest a fraction of 0.6 – (1 – 0.6), i.e. 20% of his assets (and the optimal average growth rate of investment can be shown to be about 2% (0.6 log(1.2) + 0.4 log(0.8))).
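The Kelly numbers above are easy to check, and a small scan (my own addition) also shows that betting either more or less than the Kelly fraction lowers the expected growth rate:

```python
import math

p = 0.6   # probability the investor's valuation beats the market's

def growth_rate(f, p=p):
    """Expected log-growth per even-money bet of a fraction f of assets."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

kelly = p - (1 - p)   # 0.2: bet 20% of assets
print(round(kelly, 2), round(growth_rate(kelly), 4))

# Both under- and over-betting grow more slowly than the Kelly fraction.
for f in (0.1, 0.2, 0.3):
    print(f, round(growth_rate(f), 4))
```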

Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risk – creates extensive and recurrent systemic crises. Keeping leverage at a more appropriate level is a necessary ingredient in any policy aiming to curb excessive risk-taking.

Keynes in Finland

11 Jun, 2016 at 12:10 | Posted in Economics | Comments Off on Keynes in Finland


Visiting one of Helsinki’s many nice cafés and restaurants the other day, I read the following inscription on a mirror and thought Keynes must have been here …



Anyhow — slides from yours truly’s keynote presentation at the Kalevi Sorsa Foundation celebration of the 80th anniversary of Keynes’ General Theory are available here.

The Spirit of Tallis

11 Jun, 2016 at 09:54 | Posted in Economics | Comments Off on The Spirit of Tallis


Heavenly beautiful!

Flimflam Chicago economics

9 Jun, 2016 at 09:21 | Posted in Economics | Comments Off on Flimflam Chicago economics

The people inside the model have much more knowledge about the system they are operating in than is available to the economist or econometrician who is using the model to try to understand their behavior. In particular, an econometrician faces the problem of estimating probability distributions and laws of motion that the agents in the model are assumed to know. Further, the formal estimation and inference procedures of rational expectations econometricians assume that the agents in the model already know many of the objects the econometrician is estimating.

Thomas Sargent

Making utterly ridiculous, restrictive model assumptions on rationality, information, and cognition makes it possible for Chicago economists to describe the world as an instantiation of a FORTRAN program.

Yes, indeed. And at asylums there are people who think they are Napoleon and that the moon is made of green cheese …
