## Why economists need to study history

24 May, 2022 at 12:01 | Posted in Economics

## Axel Leijonhufvud In Memoriam

23 May, 2022 at 14:27 | Posted in Economics

I first came across Axel’s writing as an undergraduate student at Manchester University, where we were taught from his doctoral dissertation, Keynes and the Keynesians, a book that shot Axel to intellectual rock stardom. He argued in that book that Keynes’ General Theory had nothing to do with sticky wages and prices but was instead about inter-temporal coordination failure …

Although Axel recognized the importance of mathematical methods, he also recognized the limitations of formalism and he insisted that a good economist must be aware of the past. To this end he was a strong supporter of the teaching of economic history and the history of thought as part of the core curriculum.

Economic history disappeared as a core subject from many economics departments in the 1980s. UCLA was an exception largely due to Axel’s insistence that a good macro economist needs to understand the past before she can understand the present or the future …

Axel’s support of the history of thought was deemed anachronistic by many of his contemporaries, who viewed economics through the lens of a linear progression of knowledge. In contrast, Axel viewed science — and particularly economics, as a non-experimental science — as a tree in which herd behavior often led to persistent treks down roads to nowhere …

Axel was a follower of Imre Lakatos and his view of the progression of economics was heavily influenced by Lakatos’ methodology and the concept of progressive and degenerative scientific research programs. In his view, modern macroeconomics is a degenerative research program that took a wrong turn in the 1950s. Axel was right about this and it is a theme that has influenced my own research agenda. Time will tell if the profession will eventually agree.

In physics, it may possibly not be straining credulity too much to model processes as ergodic — where time and history do not really matter — but in social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why would econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading leading mainstream economists today one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of a model is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without specific applicability does not really deserve our interest.

Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than a rather watered-down version of ‘anything goes’ when it comes to the postulates on which mainstream economics is founded. If one proposes things like ‘rational expectations’ one also has to support their underlying assumptions. No such support is given, which makes it rather puzzling how rational expectations have become the standard modelling assumption in much of modern macroeconomics. Perhaps the reason is that economists often mistake mathematical beauty for truth.

## Axel Leijonhufvud (1933-2022)

18 May, 2022 at 07:41 | Posted in Economics

The orthodox Keynesianism of the time did have a theoretical explanation for recessions and depressions. Proponents saw the economy as a self-regulating machine in which individual decisions typically lead to a situation of full employment and healthy growth. The primary reason for periods of recession and depression was that wages did not fall quickly enough. If wages could fall rapidly and extensively enough, the economy would absorb the unemployed. Orthodox Keynesians also took Keynes’ approach to monetary economics to be similar to that of the classical economists.

Leijonhufvud got something entirely different from reading the General Theory. The more he looked at his footnotes, originally written in puzzlement at the disparity between what he took to be the Keynesian message and the orthodox Keynesianism of his time, the more confident he felt. The implications were amazing. Had the whole discipline catastrophically misunderstood Keynes’ deeply revolutionary ideas? Was the dominant economics paradigm deeply flawed and a fatally wrong turn in macroeconomic thinking? And if this was the case, what was Keynes actually proposing?

Leijonhufvud’s “On Keynesian Economics and the Economics of Keynes” exploded onto the academic stage the following year; no mean feat for an economics book that did not contain a single equation. The book took no prisoners and aimed squarely at the prevailing metaphor of the self-regulating economy and the economics of the orthodoxy. He forcefully argued that the free movement of wages and prices can sometimes be destabilizing and can move the economy away from full employment.

Fortunately, when you’ve got tired of the kind of macroeconomic apologetics produced by ‘New Keynesian’ macroeconomics, there are still some real Keynesian macroeconomists to read. One of them will always be Axel Leijonhufvud.

## Formalizing economic theory

16 May, 2022 at 08:18 | Posted in Economics

What guarantee is there … that economic concepts can be mapped unambiguously into mathematical concepts? The belief in the power and necessity of formalizing economic theory mathematically has thus obliterated the distinction between cognitively perceiving and understanding concepts from different domains and mapping them into each other. Whether the age-old problem of the equality between supply and demand should be mathematically formalized as a system of inequalities or equalities is not something that should be decided by mathematical knowledge or convenience. Surely it would be considered absurd, bordering on the insane, if a surgical procedure were implemented because a tool for its implementation had been devised by a medical doctor who knew and believed in topological fixed-point theorems? Yet weighty propositions about policy are decided on the basis of formalizations based on ignorance and belief in the veracity of one kind of one-dimensional mathematics.

## Economics phrasebook

7 May, 2022 at 08:53 | Posted in Economics

In 1990, two economics PhD students at the University of Chicago, Jeffrey Smith and Kermit Daniel … composed an “Economics to Sociology Phrase Book” in order, as they put it, “to help economists adjust their way of speaking in a manner that will make it comprehensible to Sociologists” … Concerning economics terminology, by the way, one can see that not much has changed since then.

## Economic modelling

4 May, 2022 at 16:19 | Posted in Economics

A couple of years ago, Paul Krugman had a piece up on his blog arguing that the ‘discipline of modeling’ is a *sine qua non* for tackling politically and emotionally charged economic issues:

In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.

So when ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’

Yours truly is, to say the least, far from convinced. The alarm that goes off in *my* brain is that this, rather than being helpful for understanding real-world economic issues, is more of an ill-advised *plaidoyer* for voluntarily taking on a methodological straitjacket of unsubstantiated and known-to-be-false assumptions.

Let me just give two examples to illustrate my point.

In 1817 David Ricardo presented — in *Principles* — a theory that was meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of export and import is ruled by countries exporting goods in which they have a comparative advantage and importing goods in which they have a comparative disadvantage.

Ricardo’s theory of comparative advantage, however, didn’t explain *why* the comparative advantage was the way it was. At the beginning of the 20th century, two Swedish economists — Eli Heckscher and Bertil Ohlin — presented a theory/model/theorem according to which the comparative advantages arose from differences in factor endowments between countries. Countries have comparative advantages in producing goods that use the production factors that are most abundant in those countries. Countries would *a fortiori* mostly export goods that used the abundant factors of production and import goods that mostly used factors of production that were scarce.

The Heckscher-Ohlin theorem — like the elaborations on it by e.g. Vanek, Stolper and Samuelson — builds on a series of restrictive and unrealistic assumptions. The most critically important — besides the standard market-clearing equilibrium assumptions — are

(1) Countries use identical production technologies.

(2) Production takes place with constant returns to scale technology.

(3) Within countries the factor substitutability is more or less infinite.

(4) Factor prices are equalised (the Stolper-Samuelson extension of the theorem).

*These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false.*

Given that, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.

From a methodological point of view, one can, of course, also wonder, how we are supposed to evaluate tests of a theorem building on known to be false assumptions. What is the point of such tests? What can those tests possibly teach us? From falsehoods, *anything* logically follows.

Modern (expected) utility theory is a good example of this. With the specification of preferences left almost entirely unrestricted, every imaginable piece of evidence can safely be made compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may of course be very ‘handy’, but it is totally void of any empirical value.

Utility theory has, like so many other economic theories, morphed into an empty theory of everything. And a theory of everything explains nothing — just as with Gary Becker’s ‘economics of everything,’ it only makes nonsense of economic science.

Some people have trouble with the fact that by allowing false assumptions mainstream economists can generate whatever conclusions they want in their models. But that’s really nothing very deep or controversial. What I’m referring to is the well-known ‘principle of explosion,’ according to which if both a statement and its negation are considered true, any statement whatsoever can be inferred.

Whilst tautologies, purely existential statements and other nonfalsifiable statements assert, as it were, too little about the class of possible basic statements, self-contradictory statements assert too much. From a self-contradictory statement, any statement whatsoever can be validly deduced. Consequently, the class of its potential falsifiers is identical with that of all possible basic statements: it is falsified by any statement whatsoever.

On the question of tautology, I think it is only fair to say that the way axioms and theorems are formulated in mainstream (neoclassical) economics, they are often made tautological and informationally totally empty.

Using false assumptions, mainstream modellers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy,’ one can simply assume ‘all economists are from Chicago’ and ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction — but is of course factually wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.
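The ‘principle of explosion’ invoked above is not just a rhetorical flourish; it is a one-line theorem of classical (and intuitionistic) logic. A minimal sketch in Lean 4, where the names `P`, `Q` and `explosion` are of course purely illustrative:

```lean
-- Ex falso quodlibet: given both a proposition P and its negation,
-- any proposition Q whatsoever can be validly deduced.
theorem explosion (P Q : Prop) (hP : P) (hnP : ¬P) : Q :=
  absurd hP hnP
```

Note that nothing about `Q` is used in the proof: once the premises contradict each other, the conclusion is arbitrary, which is exactly why deductions from known-to-be-false premises carry no evidential weight.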

## On mathematics and economics

26 Apr, 2022 at 17:12 | Posted in Economics

Studying mathematics and logic is interesting and fun. It sharpens the mind. But economics is not pure mathematics or logic. It’s about society. The real world. Forgetting that, economics becomes nothing but an irrelevant and uninteresting ‘Glasperlenspiel.’ Or as Knut Wicksell put it already a century ago:

One must, of course, beware of expecting from this method more than it can give. Out of the crucible of calculation comes not an atom more truth than was put in. The assumptions being hypothetical, the results obviously cannot claim more than a very limited validity. The mathematical expression ought to facilitate the argument, clarify the results, and so guard against possible faults of reasoning — that is all.

It is, by the way, evident that the economic aspects must be the determining ones everywhere: economic truth must never be sacrificed to the desire for mathematical elegance.

## The main insight of MMT

19 Apr, 2022 at 11:05 | Posted in Economics

MMT is, first and foremost, a balance sheet approach to macroeconomics. At its very core lie reserve accounting, then deposit accounting, and then sectoral balances accounting. There is very little behaviour in any of this. Equilibrium rules as all balances balance – in both flows and stocks – and there are no assumptions apart from the existence of a central bank, a Treasury, a banking system and some households and firms. MMT can only be learned by mastering its balance sheet approach. It can only be engaged by discussing the balance sheet operations it puts forward. It is here where value is added …

First of all, the main insight of MMT is that the mainstream has the sequence wrong. Whereas they assume that government expenditure is financed by taxes, MMT assumes that government spending is financed by money creation. MMT stresses that the central bank, empowered by the law and serving the state, is the monopoly issuer of currency … This logically means that the state has to spend before taxes can be paid … When taxpayers pay their taxes (or banks buy government bonds on the primary market), they first need to have state money.

Fiscal deficits always lead to an increase in the supply of financial assets held in the nongovernmental sector of the economy. This real-world fact, of course, constitutes a huge problem for mainstream (textbook) macroeconomic theory with its models building on ‘money multipliers’ and ‘loanable funds.’
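The accounting point can be illustrated with a minimal bookkeeping sketch (a closed economy, all figures hypothetical): by identity, cumulative government deficits end up as net financial assets held by the nongovernment sector.

```python
# Minimal sectoral-balances bookkeeping: in a closed economy, the government's
# deficit in each period is, by accounting identity, the nongovernment
# sector's addition to its stock of net financial assets.

def nongov_net_assets(spending, taxes):
    """Cumulate government deficits into nongovernment net financial assets."""
    assets = 0.0
    for g, t in zip(spending, taxes):
        assets += g - t  # a deficit (g > t) adds assets; a surplus drains them
    return assets

# Three periods with deficits of 20, 30 and 10 (hypothetical figures):
print(nongov_net_assets(spending=[120, 130, 110], taxes=[100, 100, 100]))  # 60.0
```

No behavioural assumptions enter here at all, which is precisely the MMT point: the identity holds regardless of what any model says about interest rates or multipliers.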

The loanable funds theory is in many regards nothing but an approach where the ruling rate of interest in society is — pure and simple — conceived as nothing else than the price of loans or credit, determined by supply and demand — as Bertil Ohlin put it — “in the same way as the price of eggs and strawberries on a village market.”

In the traditional loanable funds theory — as presented in mainstream macroeconomics textbooks — the amount of loans and credit available for financing investment is constrained by how much saving is available. Saving *is* the supply of loanable funds, investment *is* the demand for loanable funds and is assumed to be negatively related to the interest rate.

As argued by Kelton, there are many problems with the standard presentation and formalization of the loanable funds theory. And more can be added to the list:

**1** As James Meade already noted decades ago, the causal story told to explicate the accounting identities used gives the picture of “a dog called saving wagged its tail labelled investment.” In Keynes’s view — later confirmed over and over again by empirical research — it is not so much the interest rate at which firms can borrow that causally determines the amount of investment undertaken, but rather their internal funds, profit expectations and capacity utilization.

**2** As is typical of most mainstream macroeconomic formalizations and models, there is precious little mention of real-world phenomena — e.g. real money, credit rationing and the existence of multiple interest rates — in the loanable funds theory. Loanable funds theory essentially reduces modern monetary economies to something akin to barter systems — something they definitely are *not*. As emphasized especially by Minsky, to understand and explain how much investment/loaning/crediting is going on in an economy, it’s much more important to focus on the workings of financial markets than to stare at accounting identities like S = Y – C – G. The problems we meet in modern markets today have more to do with inadequate financial institutions than with the size of loanable-funds-savings.

**3** The loanable funds theory in the “New Keynesian” approach means that the interest rate is endogenized by assuming that Central Banks can (try to) adjust it in response to an eventual output gap. This, of course, is essentially nothing but an assumption of Walras’ law being valid and applicable, and that *a fortiori* the attainment of equilibrium is secured by the Central Banks’ interest rate adjustments. From a realist Keynes-Minsky point of view, this can’t be considered anything else than a belief resting on nothing but sheer hope. [Not to mention that more and more Central Banks actually choose not to follow Taylor-like policy rules.] The age-old belief that Central Banks control the money supply has more and more come to be questioned and replaced by an “endogenous” money view, and I think the same will happen to the view that Central Banks determine “the” rate of interest.

**4** A further problem in the traditional loanable funds theory is that it assumes that saving and investment can be treated as independent entities. To Keynes this was seriously wrong:

The classical theory of the rate of interest [the loanable funds theory] seems to suppose that, if the demand curve for capital shifts or if the curve relating the rate of interest to the amounts saved out of a given income shifts or if both these curves shift, the new rate of interest will be given by the point of intersection of the new positions of the two curves. But this is a nonsense theory. For the assumption that income is constant is inconsistent with the assumption that these two curves can shift independently of one another. If either of them shifts, then, in general, income will change; with the result that the whole schematism based on the assumption of a given income breaks down … In truth, the classical theory has not been alive to the relevance of changes in the level of income or to the possibility of the level of income being actually a function of the rate of investment.

There are always (at least) two parts to an economic transaction. Savers and investors have different liquidity preferences and face different choices — and their interactions usually only take place intermediated by financial institutions. This, importantly, also means that there is no “direct and immediate” automatic interest mechanism at work in modern monetary economies. What this ultimately boils down to is — *iter* — that what happens at the microeconomic level — both in and out of equilibrium — is not always compatible with the macroeconomic outcome. The *fallacy of composition* (the “atomistic fallacy” of Keynes) has many faces — loanable funds is one of them.

**5** Contrary to the loanable funds theory, finance in the world of Keynes and Minsky precedes investment and saving. Highlighting the loanable funds fallacy, Keynes wrote in “The Process of Capital Formation” (1939):

Increased investment will always be accompanied by increased saving, but it can never be preceded by it. Dishoarding and credit expansion provides not an *alternative* to increased saving, but a necessary preparation for it. It is the parent, not the twin, of increased saving.

What is “forgotten” in the loanable funds theory, is the insight that finance — in all its different shapes — has its own dimension, and if taken seriously, its effect on an analysis must modify the whole theoretical system and not just be added as an unsystematic appendage. Finance is fundamental to our understanding of modern economies and acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

All real economic activities nowadays depend on a functioning financial machinery. But institutional arrangements, states of confidence, fundamental uncertainties, asymmetric expectations, the banking system, financial intermediation, loan granting processes, default risks, liquidity constraints, aggregate debt, cash flow fluctuations, etc., etc. — things that play decisive roles in channelling money/savings/credit — are more or less left in the dark in modern formalizations of the loanable funds theory.

It should be emphasized that the equality between savings and investment … will be valid under all circumstances. In particular, it will be independent of the level of the rate of interest which was customarily considered in economic theory to be the factor equilibrating the demand for and supply of new capital. In the present conception investment, once carried out, automatically provides the savings necessary to finance it. Indeed, in our simplified model, profits in a given period are the direct outcome of capitalists’ consumption and investment in that period. If investment increases by a certain amount, savings out of profits are *pro tanto* higher … One important consequence of the above is that the rate of interest cannot be determined by the demand for and supply of new capital because investment ‘finances itself.’

## Has economics — really — become an empirical science?

13 Apr, 2022 at 09:57 | Posted in Economics

In *Economics Rules* (Oxford University Press, 2015), Dani Rodrik maintains that ‘imaginative empirical methods’ — such as game theoretical applications, natural experiments, field experiments, lab experiments, RCTs — can help us to answer questions concerning the external validity of economic models. In Rodrik’s view, they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’ Writes Rodrik:

Another way we can observe the transformation of the discipline is by looking at the new areas of research that have flourished in recent decades. Three of these are particularly noteworthy: behavioral economics, randomized controlled trials (RCTs), and institutions … They suggest that the view of economics as an insular, inbred discipline closed to the outside influences is more caricature than reality.

I beg to differ. When looked at carefully, there are in fact not that many real reasons to share Rodrik’s optimism on this ’empirical turn’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and face a ‘trade-off’ between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the less the external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors’, the less the conditions are reminiscent of the real ‘target system.’ You could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not about that; it is basically about how economists, using different isolation strategies in different ‘nomological machines’, attempt to learn about causal relationships. I have strong doubts about the generalizability of *all three* research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (‘treatment’). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in doing an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) say nothing about the target system’s P′(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is — unfortunately — exactly this that I see when I look at mainstream neoclassical economists’ models/experiments/field studies.
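A toy simulation can make the worry concrete. Suppose, purely for illustration, that the causal effect of B varies with some background covariate Z (here the individual effect is taken to be 2·Z), and that the original and target populations differ only in the distribution of Z. An average effect estimated on the original population then transfers badly:

```python
import numpy as np

rng = np.random.default_rng(1)

def average_effect(z_mean, n=100_000):
    """Average causal effect of 'treatment' B in a population whose background
    covariate Z ~ N(z_mean, 1); the individual-level effect is assumed 2*Z."""
    Z = rng.normal(z_mean, 1.0, size=n)
    return float(np.mean(2.0 * Z))

effect_original = average_effect(z_mean=1.0)   # 'original' population: about +2
effect_target = average_effect(z_mean=-1.0)    # 'target' population: about -2
```

Unless we can argue that the two Z-distributions — and hence P and P′ — are similar, the estimate from the first population tells us next to nothing about the second; the sign of the effect even flips.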

By this, I do not mean to say that empirical methods *per se* are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions, and therefore *a fortiori* easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct, what can we conclude? That B ‘works’ in China but not in the US? Or that B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in the year 2008 but not in the year 2014? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led not only Rodrik but several other prominent economists to triumphantly declare it a major step on the path toward empirics, where, instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to *simpliciter* assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used to basically allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for effects of other explanatory variables R, S, T, etc., etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are ‘treated’ (X=1) may have causal effects equal to −100 and those ‘not treated’ (X=0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
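A short simulation (hypothetical numbers, and a slightly simplified variant in which the ±100 effects attach to two equal-sized subgroups rather than to treatment status) shows how the OLS slope reports only the average and hides the heterogeneity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equal-sized subgroups with opposite causal effects: +100 and -100.
group = np.arange(n) < n // 2
effect = np.where(group, 100.0, -100.0)

X = rng.integers(0, 2, size=n).astype(float)  # randomized 'treatment'
Y = 50.0 + effect * X + rng.normal(size=n)    # outcome with unit noise

def ols_slope(x, y):
    """Slope beta from OLS of y on a constant and x."""
    A = np.column_stack([np.ones(len(x)), x])
    return float(np.linalg.lstsq(A, y, rcond=None)[0][1])

beta_all = ols_slope(X, Y)                # average effect: close to 0
beta_one = ols_slope(X[group], Y[group])  # subgroup effect: close to +100
beta_two = ols_slope(X[~group], Y[~group])  # subgroup effect: close to -100
```

The pooled regression is not ‘wrong’ — it consistently estimates the average — but anyone deciding whether to be treated would want the subgroup slopes, which the single β cannot reveal.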

Limiting model assumptions in economic science always have to be closely examined, since if we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under *ceteris paribus* conditions; otherwise they are *a fortiori* of only limited value for our understanding, explanation or prediction of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established, are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

I also think that most ‘randomistas’ really underestimate the heterogeneity problem. It does not just turn up as an *external* validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an *internal* problem to the millions of regression estimates that economists produce every year.

Randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that cannot be maintained in practice.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

So, no, I find it hard to share Rodrik’s and others’ enthusiasm and optimism on the value of (quasi)natural experiments and all the statistical-econometric machinery that comes with it. Guess I’m still waiting for the export warrant…

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

*Re* game theory, yours truly remembers how, back in 1991, when earning my first PhD with a dissertation on decision making and rationality in social choice theory and game theory, I concluded that

repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.

This, of course, was like swearing in church. My mainstream neoclassical colleagues were — to say the least — not exactly überjoyed. Listening to what one of the world’s most renowned game theorists — Ariel Rubinstein — has to say on the rather limited applicability of game theory in this interview (emphasis added), I basically think he confirms my doubts about how well-founded Rodrik’s ‘optimism’ is:

Is game theory useful in a concrete sense or not? …

I believe that game theory is very interesting. I’ve spent a lot of my life thinking about it, but I don’t respect the claims that it has direct applications. The analogy I sometimes give is from logic. Logic is a very interesting field in philosophy, or in mathematics. But I don’t think anybody has the illusion that logic helps people to be better performers in life. A good judge does not need to know logic. It may turn out to be useful – logic was useful in the development of the computer sciences, for example – but it’s not directly practical in the sense of helping you figure out how best to behave tomorrow, say in a debate with friends, or when analysing data that you get as a judge or a citizen or as a scientist …

Game theory is about a collection of fables. Are fables useful or not? In some sense, you can say that they are useful, because good fables can give you some new insight into the world and allow you to think about a situation differently. But fables are not useful in the sense of giving you advice about what to do tomorrow, or how to reach an agreement between the West and Iran. The same is true about game theory …

In general, I would say there were too many claims made by game theoreticians about its relevance. Every book of game theory starts with “Game theory is very relevant to everything that you can imagine, and probably many things that you can’t imagine.” In my opinion that’s just a marketing device …

So — contrary to Rodrik’s optimism — I would argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

## Interview with Josh Angrist

6 Apr, 2022 at 18:20 | Posted in Economics | Comments Off.

## Expected utility theory — a severe case of transmogrifying truth

6 Apr, 2022 at 13:37 | Posted in Economics | 4 Comments

Although the expected utility theory is obviously both theoretically and descriptively inadequate, colleagues and microeconomics textbook writers all over the world gladly continue to use it, as though its deficiencies were unknown or unheard of.

**Daniel Kahneman** writes — in *Thinking, Fast and Slow* — that expected utility theory is seriously flawed since it doesn’t take into consideration the basic fact that people’s choices are influenced by *changes* in their wealth. Where standard microeconomic theory assumes that preferences are stable over time, Kahneman and other behavioural economists have forcefully again and again shown that preferences aren’t fixed, but vary with different reference points. How can a theory that doesn’t allow for people having different reference points from which they consider their options have an almost axiomatic status within economic theory?

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind … I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking it is extraordinarily difficult to notice its flaws … You give the theory the benefit of the doubt, trusting the community of experts who have accepted it … But they did not pursue the idea to the point of saying, “This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only present wealth.”

On a more economic-theoretical level, information theory — and especially the so-called **Kelly criterion** — also highlights the problems concerning the neoclassical theory of expected utility.

Suppose I want to play a game. Let’s say we are tossing a coin. If heads come up, I win a dollar, and if tails come up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (**p**) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (T) should I optimally invest in this game?

A strict neoclassical utility-maximizing economist would suggest that my goal should be to maximize the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer no to that question. The risk of losing is so high that after only a few games I would most likely have lost everything and gone bankrupt (the expected time until my first loss is 1/(1 – p), which in this case equals 2.5 games). The expected-value-maximizing economist does not seem to have a particularly attractive approach.
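A quick Monte Carlo sketch (my own illustration, using the example’s p = 0.6) confirms both numbers: betting the whole bankroll means ruin at the first tails, which on average arrives after 1/(1 – p) = 2.5 games, and within 10 games with probability 1 – 0.6^10 ≈ 0.994:

```python
import random

random.seed(1)
p = 0.6            # assumed probability of winning a game (heads)
trials = 100_000

# Betting the entire bankroll each round: the first tails wipes me out.
times = []
for _ in range(trials):
    t = 1
    while random.random() < p:  # heads: survive and play on
        t += 1
    times.append(t)             # t = game number of the first loss

mean_time = sum(times) / trials                        # theory: 1/(1 - p) = 2.5
ruin_within_10 = sum(t <= 10 for t in times) / trials  # theory: 1 - 0.6**10
```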

So what’s the alternative? One possibility is to apply the so-called Kelly criterion — after the American physicist and information theorist **John L. Kelly**, who in the article *A New Interpretation of Information Rate* (1956) suggested this criterion for how to optimize the size of the bet — under which the optimum is to invest a specific fraction (**x**) of wealth (**T**) in each game. How do we arrive at this fraction?

When I win, I have (1 + x) times as much as before, and when I lose (1 – x) times as much. After **n** rounds, when I have won **v** times and lost **n – v** times, my new bankroll (**W**) is

(1) W = (1 + x)^{v} (1 – x)^{n – v} T

[A technical note: The bets used in these calculations are of the “quotient form” (Q), where you typically keep your bet money until the game is over, and *a fortiori*, in the win/lose expression it’s not included that you get back what you bet when you win. If you prefer to think of odds calculations in the “decimal form” (D), where the bet money typically is considered lost when the game starts, you have to transform the calculations according to Q = D – 1.]

The bankroll increases multiplicatively — “compound interest” — and the long-term average growth rate for my wealth can then be easily calculated by taking the logarithms of (1), which gives

(2) log (W / T) = v log (1 + x) + (n – v) log (1 – x).

If we divide both sides by n we get

(3) [log (W / T)] / n = [v log (1 + x) + (n – v) log (1 – x)] / n

The left-hand side now represents the *average* growth rate (**g**) in each game. On the right-hand side the ratio v/n is equal to the percentage of bets that I won, and when n is large, this fraction will be close to p. Similarly, (n – v)/n is close to (1 – p). When the number of bets is large, the average growth rate is

(4) g = p log (1 + x) + (1 – p) log (1 – x).

Now we can easily determine the value of x that maximizes g:

(5) d [p log (1 + x) + (1 – p) log (1 – x)]/dx = p/(1 + x) – (1 – p)/(1 – x)

Setting this derivative equal to zero gives

(6) x = p – (1 – p) = 2p – 1.

Since p is the probability that I will win, and (1 – p) is the probability that I will lose, the Kelly strategy says that to optimize the growth rate of your bankroll (wealth) you should invest a fraction of the bankroll equal to the difference between the likelihoods that you will win or lose. In our example, this means that in each game I should bet the fraction x = 0.6 – (1 – 0.6) = 0.2, that is, 20% of my bankroll. Alternatively, we see that the Kelly criterion implies that we have to choose x so that E[log(1+x)] (which equals p log (1 + x) + (1 – p) log (1 – x)) is maximized. Plotting E[log(1+x)] as a function of x confirms that the maximizing value is x = 0.2.
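The plot is easy to reproduce numerically. The sketch below (my own check, not part of the original post) maximizes the growth rate g(x) from equation (4) by grid search, using natural logarithms:

```python
import math

p = 0.6

def g(x):
    """Average growth rate per game, equation (4), in natural logarithms."""
    return p * math.log(1 + x) + (1 - p) * math.log(1 - x)

# Grid search over betting fractions confirms the closed form x* = p - (1 - p).
xs = [i / 1000 for i in range(1000)]  # 0.000, 0.001, ..., 0.999
x_star = max(xs, key=g)               # the Kelly fraction, 0.2

growth = g(x_star)                    # equation (7): roughly 0.02 per game
after_10 = math.exp(10 * growth)      # average wealth factor after 10 games
```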

The optimal average growth rate (using natural logarithms) becomes

(7) 0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02.

If I bet 20% of my wealth in tossing the coin, I will after 10 games on average have 1.02^{10} times more than when I started (≈ 1.22).

This game strategy will give us a better outcome in the long run than a strategy built on the neoclassical economic theory of choice under uncertainty (risk): expected value maximization. If we bet all our wealth in each game we will most likely lose our fortune, but because with low probability we end up with a very large fortune, the expected value is still high. For a real-life player, who has very little to gain from this type of ensemble average, it is more relevant to look at the time average of what he can expect to win (in our game the two averages coincide only if we assume that the player has a logarithmic utility function). What good does it do me that tossing the coin maximizes an expected value if I might have gone bankrupt after four games? If I try to maximize the expected value, the probability of bankruptcy soon approaches one. Better, then, to invest 20% of my wealth in each game and maximize my long-term average wealth growth!

When applied to the neoclassical theory of expected utility, one thinks in terms of a “parallel universe” and asks: what is the expected return of an investment, calculated as an average over the “parallel universe”? In our coin toss example, it is as if various “I”s were tossing the coin, and the losses of many of them were offset by the huge profits that one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we actually live.
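A simulation sketch (mine, with illustrative parameters: 50 games, 20,000 sample paths) makes the ensemble/time distinction concrete. The ensemble average of the all-in strategy is enormous, yet it is generated by a 0.6^50 ≈ 8·10^-12 chance of never losing; essentially every simulated path ends in ruin, while the Kelly bettor compounds steadily:

```python
import random
import statistics

random.seed(7)
p, kelly = 0.6, 0.2
paths, rounds = 20_000, 50

# Ensemble (expected-value) average of the all-in strategy, analytically:
# each round multiplies wealth by 2 with probability 0.6, by 0 otherwise.
ensemble_mean_all_in = 1.2 ** rounds    # thousands of times the starting bankroll

final_all_in, final_kelly = [], []
for _ in range(paths):
    w_a = w_k = 1.0
    for _ in range(rounds):
        if random.random() < p:
            w_a *= 2.0                  # all-in win: double
            w_k *= 1 + kelly            # Kelly win: +20%
        else:
            w_a = 0.0                   # all-in loss: ruin (and 0 stays 0)
            w_k *= 1 - kelly            # Kelly loss: -20%
    final_all_in.append(w_a)
    final_kelly.append(w_k)

med_all_in = statistics.median(final_all_in)  # the typical path: ruin
med_kelly = statistics.median(final_kelly)    # compounds at about e^(0.02 * 50)
```

The median (time-average-relevant) outcome of the all-in strategy is zero even though its ensemble mean dwarfs Kelly’s, which is exactly the wedge between ensemble and time averages discussed above.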

The Kelly criterion gives a more realistic answer, where one thinks in terms of the only universe we actually live in and ask what is the expected return of an investment, calculated as an average over time.

Since we cannot go back in time — entropy and the “arrow of time” make this impossible — and the bankruptcy option is always at hand (extreme events and “black swans” are always possible) we have nothing to gain from thinking in terms of ensembles and “parallel universes.”

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as e.g. investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – the Kelly criterion shows that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When the bankroll is gone, it’s gone. The fact that in a parallel universe it could conceivably have been refilled, is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction (**x**) of his assets (**T**) should an investor, who is about to make a large number of repeated investments, bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the x, the greater the leverage, but also the greater the risk. Since p is the probability that his investment valuation is correct and (1 – p) the probability that the market’s valuation is correct, the Kelly criterion says that he optimizes the rate of growth on his investments by investing a fraction of his assets equal to the difference between the probabilities that he will “win” or “lose.” In our example, this means that at each investment opportunity he should invest the fraction x = 0.6 – (1 – 0.6) = 0.2, i.e. 20% of his assets. The optimal average growth rate of his investments is then about 2% (0.6 log (1.2) + 0.4 log (0.8)).

Kelly’s criterion shows that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage, and hence risk, creates extensive and recurrent systemic crises. Keeping risk-taking at a more appropriate level is a necessary ingredient of any policy aiming to prevent such crises.

The works of people like Kelly and Kahneman show that expected utility theory is indeed transmogrifying truth.

## The fiscal policy framework — a serious misconception

31 Mar, 2022 at 10:37 | Posted in Economics | Comments Off

The economic crisis in the wake of the corona pandemic is not an ordinary downturn but an exceptional situation. Vast sums are now being spent to keep companies and wage earners afloat and to provide emergency support to health care. In the wake of the corona crisis, the majority of affected countries in the world have increased their debt ratios considerably. Sweden’s debt ratio has risen as well. A unique event of this kind therefore becomes an opportunity to adjust the Swedish debt anchor, which even before the crisis lay far below the EU’s debt ceiling and almost unmatched in how low it was compared with other European countries’ debt ratios …

Trying to “take back” the money that today’s crisis measures have cost and return to a debt ratio of 35% of GDP would be both unnecessary and counterproductive for the recovery … The debts that the crisis measures give rise to should instead, through a revised fiscal framework, be allowed to slowly “melt away” through growth and inflation, as has been the case after several earlier serious crises. Ultimately, the national debt should also be allowed to settle at a higher level than the previous debt anchor.

One of the fundamental misconceptions in today’s discussion of national debt and budget deficits is the failure to distinguish between one kind of debt and another. Even though, at the macro level, debts and assets necessarily balance each other, it is far from irrelevant *who* holds the assets and *who* holds the debts.

There has long been a reluctance to increase public debt, since economic crises are still largely perceived as caused by too much debt. But this is where the distribution of debt comes in. If the state ‘borrows’ money in a recession to expand railways, schools, and health care, the social costs of doing so are minimal, since the resources would otherwise have lain idle. Once the wheels start turning, both public and private debts can be paid off.

Instead of “safeguarding public finances,” with the fiscal framework’s fundamentally misconceived surplus target, debt anchor, and expenditure ceiling, we should safeguard society’s future. The problem with a national debt in a situation of historically low interest rates is not that it is too large, but that it is too small.

What many politicians and media “experts” do not seem to (want to) understand is that there is a crucial difference between private and public debt. If a single individual tries to save and pay down her debts, that may well be rational. But if everyone tries to do so, the result is that aggregate demand falls and unemployment risks rising.

An individual must always pay her debts. But a state can always pay back its old debts with new debts. The state is not an individual. Public debt is not like private debt. A state’s debt is essentially a debt to itself, to its citizens (the public sector’s net financial position is positive).

A national debt (in Sweden today just over 20% of GDP) is in itself neither good nor bad. It should be a means to achieve two overarching macroeconomic goals: full employment and price stability. What is ‘sacred’ is not a balanced budget or keeping the consolidated gross debt (the ‘Maastricht debt’) down to 35% of GDP over the medium term. If the idea of ‘sound’ public finances leads to increased unemployment and unstable prices, it should obviously be abandoned.

Sweden’s external debt and consolidated national debt are historically low. As the ‘Maastricht debt’ shows, Sweden is among the EU countries with the very lowest public indebtedness. Given the great challenges Sweden faces in the aftermath of the coronavirus, and with the effects of the war in Ukraine on the global and Swedish economy still, to quote the latest forecast from Ekonomistyrningsverket (the Swedish National Financial Management Authority), “highly unclear,” continued talk of “responsibility” for the state budget is, to say the least, irresponsible. Instead of “safeguarding” public finances, a responsible government should safeguard society’s future.

Budget deficits and national debt are not Sweden’s problem today. And continuing to talk about “saving in the barns” is simply foolishness.

## Expected utility theory — nothing but an ex-hypothesis

28 Mar, 2022 at 19:19 | Posted in Economics | Comments Off

In mainstream theory, preferences are standardly expressed in the form of a utility function. But although the expected utility theory has been known for a long time to be both theoretically and descriptively inadequate, mainstream economists gladly continue to use it, as though its deficiencies were unknown or unheard of.

What most of them try to do in face of the obvious theoretical and behavioral inadequacies of the expected utility theory, is to marginally mend it. But that cannot be the right attitude when facing scientific anomalies. When models are plainly wrong, you’d better replace them! As Matthew Rabin and Richard Thaler have it in *Risk Aversion*:

It is time for economists to recognize that expected utility is an ex-hypothesis, so that we can concentrate our energies on the important task of developing better descriptive models of choice under uncertainty.

In his modern classic *Risk Aversion and Expected-Utility Theory: A Calibration Theorem*, Matthew Rabin writes:

Using expected-utility theory, economists model risk aversion as arising solely because the utility function over wealth is concave. This diminishing-marginal-utility-of-wealth theory of risk aversion is psychologically intuitive, and surely helps explain some of our aversion to large-scale risk: We dislike vast uncertainty in lifetime wealth because a dollar that helps us avoid poverty is more valuable than a dollar that helps us become very rich.

Yet this theory also implies that people are approximately risk neutral when stakes are small … While most economists understand this formal limit result, fewer appreciate that the approximate risk-neutrality prediction holds not just for negligible stakes, but for quite sizable and economically important stakes. Economists often invoke expected-utility theory to explain substantial (observed or posited) risk aversion over stakes where the theory actually predicts virtual risk neutrality. While not broadly appreciated, the inability of expected-utility theory to provide a plausible account of risk aversion over modest stakes has become oral tradition among some subsets of researchers, and has been illustrated in writing in a variety of different contexts using standard utility functions …

Expected-utility theory is manifestly not close to the right explanation of risk attitudes over modest stakes. Moreover, when the specific structure of expected-utility theory is used to analyze situations involving modest stakes — such as in research that assumes that large-stake and modest-stake risk attitudes derive from the same utility-for-wealth function — it can be very misleading.
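A toy calibration (my own sketch; the log utility function and the $100,000 wealth level are illustrative assumptions, not Rabin’s exact construction) shows the point: the same concave utility-of-wealth that barely distinguishes a modest gamble from its expected value produces heavy risk aversion only at large stakes:

```python
import math

def certainty_equivalent(stake, wealth):
    """Certainty equivalent (relative to current wealth) of a 50-50
    gamble of +/- stake, for an agent with log utility of wealth."""
    expected_utility = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)
    return math.exp(expected_utility) - wealth  # exp inverts the log utility

w = 100_000.0
small = certainty_equivalent(100, w)     # a few cents below zero: near risk neutrality
large = certainty_equivalent(50_000, w)  # a risk premium of many thousands of dollars
```

The modest ±$100 gamble carries a risk premium of about five cents, the ±$50,000 gamble one of well over $13,000, which is why, as Rabin argues, observed risk aversion over modest stakes cannot plausibly come from the curvature of a utility-for-wealth function.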

In a similar vein, Daniel Kahneman writes — in *Thinking, Fast and Slow — *that expected utility theory is seriously flawed since it doesn’t take into consideration the basic fact that people’s choices are influenced by *changes* in their wealth. Where standard microeconomic theory assumes that preferences are stable over time, Kahneman and other behavioral economists have forcefully again and again shown that preferences aren’t fixed, but vary with different reference points. How can a theory that doesn’t allow for people to have different reference points from which they consider their options have an almost axiomatic status within economic theory?

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind … I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking it is extraordinarily difficult to notice its flaws … You give the theory the benefit of the doubt, trusting the community of experts who have accepted it … But they did not pursue the idea to the point of saying, “This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only present wealth.”

The works of people like Rabin, Thaler, and Kahneman, show that expected utility theory is indeed transmogrifying truth. It’s an “ex-hypothesis” — or as Monty Python has it:

This parrot is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im to the perch ‘e’d be pushing up the daisies! ‘Is metabolic processes are now ‘istory! ‘E’s off the twig! ‘E’s kicked the bucket, ‘e’s shuffled off ‘is mortal coil, run down the curtain and joined the bleedin’ choir invisible!! THIS IS AN EX-PARROT!!

## The free-school grading fraud — enough is enough!

27 Mar, 2022 at 18:29 | Posted in Economics | 1 Comment

The Swedish school system has somewhat oddly combined market principles such as decentralization, choice, competition, and corporate providers with an evaluation system that is highly trust-based and where teacher-set school grades are high-stakes for the students. This means that both students and schools have incentives to game a system that is easy to game and the findings suggest that the integrity of the evaluation system has been compromised. The results show that all groupings of free schools set higher grades than municipal schools when controlling for student achievement on national tests. As the national tests are locally graded, they are not fully reliable and the differences between public and private providers are more pronounced when more reliable tests are used to control for achievement.

To some extent, the differences in grading standards between municipal and free schools can be accounted for by differences in location and student demographics. Even after holding such factors constant, however, grading standards among private providers appear lenient, in particular among schools that belong to two of the large corporate groups (IES and Kunskapsskolan).

Those of us who work in the Swedish education system take it for granted that it is we as teachers who should grade what our students achieve. Unfortunately, however, our country, alone in the world today, has introduced a free-school system that completely undermines people’s trust in our grading system. To strengthen their market positions, free schools systematically inflate grades to create the false impression that their graduating students have greater knowledge than those of other schools. In the world of the free-school corporations, grades are hollowed out and turned into a way of cheating one’s way to advantages. Worst among these free schools have been Kunskapsskolan and Internationella Engelska Skolan.

These tricksters and fiddlers have been allowed to undermine Swedish schooling for far too long. It is now time for the politicians, who actively and/or through sheer spinelessness have made this scandal possible for thirty years, to show some social responsibility and rid Sweden of the disgrace that the free schools have become.

## The irrelevance of the Ramsey growth model

20 Mar, 2022 at 16:08 | Posted in Economics | 6 Comments

So in what sense is this “dynamic stochastic general equilibrium” model firmly grounded in the principles of economic theory? I do not want to be misunderstood. Friends have reminded me that much of the effort of “modern macro” goes into the incorporation of important deviations from the Panglossian assumptions that underlie the simplistic application of the Ramsey model to positive macroeconomics. Research focuses on the implications of wage and price stickiness, gaps and asymmetries of information, long-term contracts, imperfect competition, search, bargaining and other forms of strategic behavior, and so on. That is indeed so, and it is how progress is made.

But this diversity only intensifies my uncomfortable feeling that something is being put over on us, by ourselves. Why do so many of those research papers begin with a bow to the Ramsey model and cling to the basic outline? Every one of the deviations that I just mentioned was being studied by macroeconomists before the “modern” approach took over. That research was dismissed as “lacking microfoundations.” My point is precisely that attaching a realistic or behavioral deviation to the Ramsey model does not confer microfoundational legitimacy on the combination. Quite the contrary: a story loses legitimacy and credibility when it is spliced to a simple, extreme, and on the face of it, irrelevant special case. This is the core of my objection: adding some realistic frictions does not make it any more plausible that an observed economy is acting out the desires of a single, consistent, forward-looking intelligence …

For completeness, I suppose it could also be true that the bow to the Ramsey model is like wearing the school colors or singing the Notre Dame fight song: a harmless way of providing some apparent intellectual unity, and maybe even a minimal commonality of approach. That seems hardly worthy of grown-ups, especially because there is always a danger that some of the in-group come to believe the slogans, and it distorts their work …

There has always been a purist streak in economics that wants everything to follow neatly from greed, rationality, and equilibrium, with no ifs, ands, or buts. Most of us have felt that tug. Here is a theory that gives you just that, and this time “everything” means everything: macro, not micro. The theory is neat, learnable, not terribly difficult, but just technical enough to feel like “science.”
