Economics education needs a revolution

25 May, 2019 at 17:36 | Posted in Economics | 4 Comments

Economics has become a rather quaint and highly guarded discipline. We urgently need to update economics education to change this – because economics, as taught in universities, does not reflect or speak to many of the issues of the real world, be they political, environmental or social.

Take the tricky entanglement between politics and economics, which economists tend to try to avoid. Such an attempt is futile. Sidelining politics, history and broader ideas while teaching economics, as most professors do, is like studying the “natural” flows of water in the Netherlands without taking into account that there are people living there who are steering it, building dikes, reclaiming land and channelling the water – and ignoring that they have been doing this for thousands of years already. You can’t study the system while ignoring the people who make it …

Economists speak in numbers only, clinging to statistical data and quantitative models. We do so in the hope of looking objective. But this is counter-productive – “data” cannot tell us everything. Other social sciences such as sociology and anthropology use a broader range of methods, and consequently have a broader perspective on society …

Undergraduate economists all over the world learn theories from textbooks that have barely changed since the 1950s. Those theories are based on individual agents, competing in markets to maximise narrowly defined “economic utility” (for people) or profit (for firms). The principles are taught with the same certainty as Newtonian physics, and are as devoid of value judgements.

This is absurd. Clearly, there are values; mainstream economics values efficiency, markets and growth – and puts individuals over collectives. Yet undergraduates are not taught to recognise, let alone question, these values – and the consequences are serious.

Joris Tieleman

A couple of years ago we could see the author in action in this must-see video:

Nowadays there is almost no place whatsoever in economics education for courses in the history of economic thought and economic methodology. This is deeply worrying. A science that doesn’t self-reflect and ask important methodological and science-theoretical questions about its own activity is a science in dire straits. How did we end up in this sad state?

Already back in 1991, a commission chaired by Anne Krueger and including people like Kenneth Arrow, Edward Leamer, and Joseph Stiglitz, reported from their own experience “that it is an underemphasis on the ‘linkages’ between tools, both theory and econometrics, and ‘real world problems’ that is the weakness of graduate education in economics,” and that both students and faculty sensed “the absence of facts, institutional information, data, real-world issues, applications, and policy problems.” And in conclusion, they wrote that “graduate programs may be turning out a generation with too many idiot savants skilled in technique but innocent of real economic issues.”

Not much is different today. Economics — and economics education — is still in dire need of a remake.

More and more young economics students want to see a real change in economics and the way it’s taught. They want something other than the same old mainstream catechism. They don’t want to be force-fed with useless mainstream theories and models.

Nothing compares (personal)

25 May, 2019 at 14:58 | Posted in Varia | Leave a comment

 

Tomorrow is Mother’s Day here in Sweden. This one is in loving memory of my mother Lisbeth, and of Kristina, beloved wife and mother of David and Tora.

Those whom the gods love die young.

But in dreams,
I can hear your name.
And in dreams,
We will meet again.

When the seas and mountains fall
And we come to end of days,
In the dark I hear a call
Calling me there
I will go there
And back again.

Adorno: Jargon der Eigentlichkeit

25 May, 2019 at 12:16 | Posted in Politics & Society | Leave a comment

 

Ten years after the financial crisis: the bankers’ party goes on

25 May, 2019 at 10:46 | Posted in Economics | Leave a comment

 

What do Europe’s right-wing populists want?

25 May, 2019 at 10:20 | Posted in Politics & Society | 1 Comment

 

Friday on my mind

24 May, 2019 at 16:19 | Posted in Varia | Leave a comment

 

Leader of the pack

24 May, 2019 at 16:04 | Posted in Varia | Leave a comment

 

Itchycoo Park

24 May, 2019 at 15:52 | Posted in Varia | 2 Comments

 

Expected utility theory

24 May, 2019 at 15:26 | Posted in Economics | 2 Comments

Although expected utility theory is both theoretically and descriptively inadequate, mainstream economists gladly continue to use it, as though its deficiencies were unknown or unheard of.

Daniel Kahneman writes — in Thinking, Fast and Slow — that expected utility theory is seriously flawed since it doesn’t take into consideration the basic fact that people’s choices are influenced by changes in their wealth. Whereas standard microeconomic theory assumes that preferences are stable over time, Kahneman and other behavioural economists have again and again forcefully shown that preferences aren’t fixed, but vary with different reference points. How can a theory that doesn’t allow for people having different reference points from which they consider their options have an almost axiomatic status within economic theory?

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind … I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking it is extraordinarily difficult to notice its flaws … You give the theory the benefit of the doubt, trusting the community of experts who have accepted it … But they did not pursue the idea to the point of saying, “This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only present wealth.”

On a more economic-theoretical level, information theory — and especially the so-called ‘Kelly criterion’ — also highlights the problems concerning the neoclassical theory of expected utility.

Suppose I want to play a game. Let’s say we are tossing a coin. If heads comes up, I win a dollar, and if tails comes up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (p) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (T) should I optimally invest in this game?

A strict neoclassical utility-maximizing economist would suggest that my goal should be to maximize the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer no. The risk of losing is so high that after only a few games I would, with high likelihood, have lost and gone bankrupt — the expected number of games until my first loss is 1/(1 – p), which in this case equals 2.5. The expected-value-maximizing economist does not seem to have a particularly attractive approach.

So what’s the alternative? One possibility is to apply the so-called Kelly criterion — named after the American physicist and information theorist John L. Kelly, who suggested it in his article A New Interpretation of Information Rate (1956) — under which the optimum is to invest a specific fraction (x) of wealth (T) in each game. How do we arrive at this fraction?

When I win, I have (1 + x) times as much as before, and when I lose (1 – x) times as much. After n rounds, when I have won v times and lost n – v times, my new bankroll (W) is

(1) W = (1 + x)^v (1 – x)^(n – v) T

(A technical note: The bets used in these calculations are of the “quotient form” (Q), where you typically keep your stake until the game is over; hence the win/lose expression does not include getting back what you bet when you win. If you prefer to think of odds calculations in the “decimal form” (D), where the stake is typically considered lost when the game starts, you have to transform the calculations according to Q = D – 1.)

The bankroll increases multiplicatively — “compound interest” — and the long-term average growth rate for my wealth can then be easily calculated by taking the logarithms of (1), which gives

(2) log (W/ T) = v log (1 + x) + (n – v) log (1 – x).

If we divide both sides by n we get

(3) [log (W / T)] / n = [v log (1 + x) + (n – v) log (1 – x)] / n

The left-hand side now represents the average growth rate (g) in each game. On the right-hand side, the ratio v/n is the fraction of bets that I won, and when n is large this fraction will be close to p. Similarly, (n – v)/n is close to (1 – p). When the number of bets is large, the average growth rate is

(4) g = p log (1 + x) + (1 – p) log (1 – x).

Now we can easily determine the value of x that maximizes g:

(5) dg/dx = d [p log (1 + x) + (1 – p) log (1 – x)]/dx = p/(1 + x) – (1 – p)/(1 – x) = 0, which gives

(6) x = p – (1 – p) = 2p – 1

Since p is the probability that I will win and (1 – p) the probability that I will lose, the Kelly strategy says that to optimize the growth rate of your bankroll (wealth) you should invest a fraction of it equal to the difference between the probabilities of winning and losing. In our example this means betting x = 0.6 – (1 – 0.6) = 0.2 — that is, 20% of my bankroll — in each game. Equivalently, the Kelly criterion says we should choose the x that maximizes the expected log-growth p log (1 + x) + (1 – p) log (1 – x). Plotting this as a function of x, we see that the maximizing value is 0.2:

[Figure: the expected log-growth g(x) plotted against the betting fraction x, with its maximum at x = 0.2.]
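
For readers who want to check the derivation numerically, here is a minimal sketch in Python (not from the original post; it simply evaluates g(x) on a grid, assuming natural logarithms as in (7) below):

```python
import numpy as np

p = 0.6                              # probability of winning a toss
x = np.linspace(0.0, 0.99, 1000)     # candidate betting fractions

# expected log-growth per game: g(x) = p*log(1+x) + (1-p)*log(1-x)
g = p * np.log(1 + x) + (1 - p) * np.log(1 - x)

print(f"optimal fraction x* = {x[np.argmax(g)]:.3f}")   # ~ 0.200, i.e. 2p - 1
print(f"optimal growth rate g(x*) = {g.max():.4f}")     # ~ 0.0201
```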

The optimal average growth rate becomes

(7) 0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02.

If I bet 20% of my wealth on each toss of the coin, after 10 games I will on average have 1.02^10 times more than when I started (≈ 1.22).

This game strategy will give us an outcome in the long run that is better than if we use a strategy based on the neoclassical economic theory of choice under uncertainty (risk) – expected-value maximization. If we bet all our wealth in each game, we will most likely lose our fortune, but because with low probability we will have a very large fortune, the expected value is still high. For a real-life player – who has very little to gain from this type of ensemble average – it is more relevant to look at the time average of what he may be expected to win (in our game the two averages coincide only if we assume that the player has a logarithmic utility function). What good does it do me if tossing the coin maximizes an expected value when I might have gone bankrupt after four games played? If I try to maximize the expected value, the probability of bankruptcy soon gets close to one. Better then to invest 20% of my wealth in each game and maximize my long-term average wealth growth!
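
A small Monte Carlo sketch (again Python, with illustrative parameters of my own choosing: 10,000 simulated players, 100 tosses each) makes the contrast concrete:

```python
import numpy as np

rng = np.random.default_rng(42)
p, n_games, n_players = 0.6, 100, 10_000

wins = rng.random((n_players, n_games)) < p    # True where a toss is won

def final_wealth(fraction):
    # multiplicative dynamics: wealth *= (1+f) on a win, (1-f) on a loss
    factors = np.where(wins, 1 + fraction, 1 - fraction)
    return factors.prod(axis=1)

all_in = final_wealth(1.0)   # expected-value maximizer: bet everything
kelly = final_wealth(0.2)    # Kelly: bet 2p - 1 = 20%

# The all-in strategy has a huge theoretical expected value, (2p)^100,
# but realizing it requires 100 straight wins (probability 0.6^100), so
# in practice essentially every simulated player goes bankrupt.
print(f"all-in: median {np.median(all_in):.2f}, "
      f"bankrupt {np.mean(all_in == 0):.1%}")
print(f"Kelly:  median {np.median(kelly):.2f}, "
      f"bankrupt {np.mean(kelly == 0):.1%}")
```

The expected-value maximizer’s mean wealth is astronomical on paper, yet virtually no individual trajectory ever sees it — the time average, not the ensemble average, is what the single player experiences.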

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks: what is the expected return of an investment, calculated as an average over these parallel universes? In our coin-toss example, it is as if various “I”s are tossing coins and the losses of many of them are offset by the huge profits that one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we actually live.

The Kelly criterion gives a more realistic answer: it thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time — entropy and the “arrow of time” make this impossible — and the bankruptcy option is always at hand (extreme events and “black swans” are always possible), we have nothing to gain from thinking in terms of ensembles and “parallel universes.”

Actual events follow the fixed direction of time, and are often linked in a multiplicative process (as, e.g., investment returns with “compound interest”) that is fundamentally non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – the Kelly criterion shows that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by simply assuming that time is irreversible. When the bankroll is gone, it’s gone. The fact that in a parallel universe it could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin-toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction (x) of his assets (T) should an investor – who is about to make a large number of repeated investments – bet on his conviction that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the x, the greater the leverage – but also the greater the risk. Since p is the probability that his valuation is correct and (1 – p) the probability that the market’s valuation is correct, the Kelly criterion says he optimizes the growth rate of his investments by betting a fraction of his assets equal to the difference between the probabilities that he will “win” or “lose.” In our example this means investing x = 0.6 – (1 – 0.6) = 0.2, i.e. 20% of his assets, at each investment opportunity. The optimal average growth rate is then about 2% per investment (0.6 log (1.2) + 0.4 log (0.8)).

Kelly’s criterion shows that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risk – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient of any policy that aims to curb these excesses.

The works of people like Kelly and Kahneman show that expected utility theory is indeed transmogrifying truth.

Randomization and experimental design in the social sciences

22 May, 2019 at 18:59 | Posted in Economics | Leave a comment

Thad Dunning’s book Natural Experiments in the Social Sciences is a very useful guide for social scientists interested in research methodology in general and natural experiments in particular. Dunning argues that since random or as-if random assignment in natural experiments obviates the need for controlling potential confounders, this kind of “simple and transparent” design-based research method is preferable to more traditional multivariate regression analysis, where the controlling only comes in ex post via statistical modelling.

But — there is always a but …

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with this simplistic view of randomization is that the claims made for it are exaggerated and sometimes even false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection almost certainly is not — except in extremely rare cases. Even if we make a proper randomized assignment, if we apply the results to a biased sample there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ making the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to –100 and those ‘not treated’ may have causal effects equal to 100 (see the sketch after this list). Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often an acceptable price for greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on a single randomization, knowing what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do should make you happy about what you actually do.

• And then there is also the problem that ‘Nature’ may not always supply us with the random experiments we are most interested in. If we are interested in X, why should we study Y only because design dictates that? Method should never be prioritized over substance!
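
To make the heterogeneity point above concrete, here is a toy simulation in Python (the ±100 numbers are the hypothetical ones from the list, not data from Dunning): a perfectly randomized trial whose estimated average treatment effect is essentially zero, even though every individual effect is huge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical individual causal effects: +100 for one half of the
# population, -100 for the other half. The true average effect is 0.
effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)

treated = rng.random(n) < 0.5                          # ideal random assignment
outcome = 50 + effect * treated + rng.normal(0, 5, n)  # baseline 50, some noise

ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated average treatment effect = {ate_hat:.2f}")  # ~ 0
print("individual effects present:", np.unique(effect))       # [-100. 100.]
```

The experiment is impeccably randomized and the estimate is ‘right’, yet the single number 0 tells a prospective patient nothing about whether the treatment will help or harm her.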

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and the best method on the market. It is not.

Money, law and democracy

22 May, 2019 at 16:12 | Posted in Economics | Leave a comment

Warren Mosler and Modern Monetary Theory

22 May, 2019 at 15:09 | Posted in Economics | 1 Comment

ZEIT ONLINE: So a government can simply spend money as long as there is no inflation?

Mosler: Roughly, yes. To put it a bit more technically: as long as there is spare capacity in the economy — that is, for example, people who are looking for work and cannot find it in the private sector.

ZEIT ONLINE: Does such capacity exist in Europe?

Mosler: Certainly. Just look at the unemployment rates, the labour-force participation rates and the economy’s low degree of capacity utilisation. The EU could repair the decaying monuments in Italy, it could renovate the schools and roads in Germany, it could see to it that coal-fired power plants are shut down faster and that the switch to renewable energy is financed. It could get a Green New Deal off the ground. That would be no problem at all.

ZEIT ONLINE: How do you explain that politicians say: we cannot afford it, we have no money?

Mosler: When somebody says that, he either has no idea how money works, or he is lying. I don’t know which is worse.

ZEIT ONLINE: There is a third possibility: perhaps politicians consider it important that there are certain institutional safeguards that limit political access to the central bank. Power always invites the abuse of power.

Mosler: My answer is: either we believe in democracy or we don’t. If we believe in democracy, then I don’t see why we shouldn’t enlighten voters about how the monetary system actually works, instead of deceiving them. Because at the moment that is exactly what we are doing.

Die Zeit

Sonnenschein-Mantel-Debreu at fifty

21 May, 2019 at 15:45 | Posted in Economics | Leave a comment

SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium. It has been difficult to make progress on the elaborations of general equilibrium theory that were put forth in Arrow and Hahn 1971 …

Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared. Bliss (1993, 227) wrote that the “near emptiness of general equilibrium theory is a theorem of the theory.” Mas-Colell, Michael Whinston, and Jerry Green (1995) titled a section of their graduate microeconomics textbook “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem.”

S. Abu Turab Rizvi

And so what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycle, Dynamic Stochastic General Equilibrium (DSGE) and New Keynesian microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis it is thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there exist no conditions on individuals that would guarantee either stability or uniqueness of the equilibrium solution. A century and a half after Léon Walras founded neoclassical general equilibrium theory, modern mainstream economics still hasn’t been able to show that markets move economies to equilibria. This, if anything, shows that the whole Bourbaki-Debreu project of axiomatizing economics was nothing but a delusion.

You enquire whether or not Walras was supposing that exchanges actually take place at the prices originally proposed when the prices are not equilibrium prices. The footnote which you quote convinces me that he assuredly supposed that they did not take place except at the equilibrium prices … All the same, I shall hope to convince you some day that Walras’ theory and all the others along those lines are little better than nonsense!

Letter from J. M. Keynes to N. Georgescu-Roegen, December 9, 1934

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length in my On the use and misuse of theories and models in mainstream economics — rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Of course, most macroeconomists know that using a representative agent is a flagrantly illegitimate way of ignoring real aggregation issues. They keep on with their business nevertheless, just because it significantly simplifies what they are doing. It is reminiscent — not a little — of the drunkard who has lost his keys in some dark place and deliberately chooses to look for them under a neighbouring street light just because it is easier to see there …

Language is an amazing thing …

21 May, 2019 at 15:13 | Posted in Varia | Leave a comment

 

How can we avoid the next global financial crisis?

20 May, 2019 at 16:43 | Posted in Economics | 1 Comment

 
