How Richard Posner became a Keynesian

22 Jan, 2021 at 16:18 | Posted in Economics | 5 Comments

Until [2008], when the banking industry came crashing down and depression loomed for the first time in my lifetime, I had never thought to read The General Theory of Employment, Interest, and Money, despite my interest in economics … I had heard that it was a very difficult book and that the book had been refuted by Milton Friedman, though he admired Keynes’s earlier work on monetarism. I would not have been surprised by, or inclined to challenge, the claim made in 1992 by Gregory Mankiw, a prominent macroeconomist at Harvard, that “after fifty years of additional progress in economic science, The General Theory is an outdated book. . . . We are in a much better position than Keynes was to figure out how the economy works.”

We have learned since [2008] that the present generation of economists has not figured out how the economy works …

Baffled by the profession’s disarray, I decided I had better read The General Theory. Having done so, I have concluded that, despite its antiquity, it is the best guide we have to the crisis …

It is an especially difficult read for present-day academic economists, because it is based on a conception of economics remote from theirs. This is what made the book seem “outdated” to Mankiw — and has made it, indeed, a largely unread classic … The dominant conception of economics today, and one that has guided my own academic work in the economics of law, is that economics is the study of rational choice … Keynes wanted to be realistic about decision-making rather than explore how far an economist could get by assuming that people really do base decisions on some approximation to cost-benefit analysis …

Economists may have forgotten The General Theory and moved on, but economics has not outgrown it, or the informal mode of argument that it exemplifies, which can illuminate nooks and crannies that are closed to mathematics. Keynes’s masterpiece is many things, but “outdated” it is not.

Richard Posner

Leontief’s devastating critique of econom(etr)ics

18 Jan, 2021 at 17:57 | Posted in Economics | 2 Comments

Much of current academic teaching and research has been criticized for its lack of relevance, that is, of immediate practical impact … I submit that the consistently indifferent performance in practical applications is in fact a symptom of a fundamental imbalance in the present state of our discipline. The weak and all too slowly growing empirical foundation clearly cannot support the proliferating superstructure of pure, or should I say, speculative economic theory …

Uncritical enthusiasm for mathematical formulation tends often to conceal the ephemeral substantive content of the argument behind the formidable front of algebraic signs … In the presentation of a new model, attention nowadays is usually centered on a step-by-step derivation of its formal properties. But if the author — or at least the referee who recommended the manuscript for publication — is technically competent, such mathematical manipulations, however long and intricate, can even without further checking be accepted as correct. Nevertheless, they are usually spelled out at great length. By the time it comes to interpretation of the substantive conclusions, the assumptions on which the model has been based are easily forgotten. But it is precisely the empirical validity of these assumptions on which the usefulness of the entire exercise depends.

What is really needed, in most cases, is a very difficult and seldom very neat assessment and verification of these assumptions in terms of observed facts. Here mathematics cannot help and because of this, the interest and enthusiasm of the model builder suddenly begins to flag: “If you do not like my set of assumptions, give me another and I will gladly make you another model; have your pick.” …

But shouldn’t this harsh judgment be suspended in the face of the impressive volume of econometric work? The answer is decidedly no. This work can be in general characterized as an attempt to compensate for the glaring weakness of the data base available to us by the widest possible use of more and more sophisticated statistical techniques. Alongside the mounting pile of elaborate theoretical models we see a fast-growing stock of equally intricate statistical tools. These are intended to stretch to the limit the meager supply of facts … Like the economic models they are supposed to implement, the validity of these statistical tools depends itself on the acceptance of certain convenient assumptions pertaining to stochastic properties of the phenomena which the particular models are intended to explain; assumptions that can be seldom verified.

Wassily Leontief

A salient feature of modern mainstream economics is the idea of science advancing through the use of “successive approximations” whereby ‘small-world’ models become more and more relevant and applicable to the ‘large world’ in which we live. Is this really a feasible methodology? Yours truly thinks not.

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). And all empirical sciences use simplifying or unrealistic assumptions in their modelling activities. That is not the issue — as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified.

If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that when we export them from our models to our target systems they do not change from one situation to another, then they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target system. No matter how many convoluted refinements of concepts are made in the model, if the “successive approximations” do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

So, I have to conclude that constructing “minimal economic models” — or using microfounded macroeconomic models as “stylized facts” or “stylized pictures” somehow “successively approximating” macroeconomic reality — is a rather unimpressive attempt at legitimizing the use of ‘small-world’ models and fictitious idealizations for reasons that have more to do with mathematical tractability than with a genuine interest in understanding and explaining features of real economies.

As noticed by Leontief, there is no reason to suspend this harsh judgment when facing econometrics. When it comes to econometric modelling one could, of course, choose to treat observational or experimental data as random samples from real populations. I have no problem with that (although it has to be noted that most ‘natural experiments’ are not based on random sampling from some underlying population — which, of course, means that the effect estimators, strictly speaking, are unbiased only for the specific samples studied). But econometrics does not content itself with that kind of population. Instead, it creates imaginary populations of ‘parallel universes’ and assumes that our data are random samples from such ‘infinite super-populations.’ This is actually nothing but hand-waving! And it is inadequate for real science. As David Freedman writes:

With this approach, the investigator does not explicitly define a population that could in principle be studied, with unlimited resources of time and money. The investigator merely assumes that such a population exists in some ill-defined sense. And there is a further assumption, that the data set being analyzed can be treated as if it were based on a random sample from the assumed population. These are convenient fictions … Nevertheless, reliance on imaginary populations is widespread. Indeed regression models are commonly used to analyze convenience samples … The rhetoric of imaginary populations is seductive because it seems to free the investigator from the necessity of understanding how data were generated.

School choice and segregation

17 Jan, 2021 at 23:09 | Posted in Economics | Leave a comment

We examine how school segregation would hypothetically have developed if all pupils had attended the nearest municipal school and had not been able to choose a school. In the figure this development is shown by the black line. In the early 1990s almost all pupils attended the nearest school, so the difference between actual school segregation (blue line) and hypothetical school segregation (black line) is not very large. The development of the black line shows that school segregation would have increased considerably even without the possibility of choosing a school, an increase that can be attributed to housing becoming increasingly segregated. The difference between the blue and the black line shows the contribution of school choice to school segregation, that is, the part of school segregation that is due to pupils not attending the nearest municipal school. The figure shows that slightly more than a quarter of the increase in school segregation can be attributed to school choice. Even though increased residential segregation has mattered most for the increase in school segregation, the contribution of school choice is thus not marginal and must therefore be taken seriously.

Helena Holmlund & Anna Sjögren & Björn Öckert

MMT perspectives on money and taxes

11 Jan, 2021 at 18:14 | Posted in Economics | 2 Comments

If states do not need their citizens’ money at all, why do we pay taxes in the first place?

Stephanie Kelton: – Suppose the American government abolished all taxes without at the same time cutting its spending. If I no longer have to pay any tax I can of course spend more money; the problem is just that the economy and its combined labour force have only a limited amount of extra goods and services to offer. Sooner or later capacity is exhausted, and then supply will no longer be able to keep pace with growing demand. At that point goods and services become more expensive and my dollars lose value. If the state in such a situation does not take on the role of tax collector and suck the money back out of the economy, inflationary pressure rises.

So taxes have an inflation-dampening function?

Stephanie Kelton: – Yes. In Modern Monetary Theory inflation always takes centre stage. The reason I fight artificial constraints, such as budget discipline, is that I want to be able to focus instead on what really constrains our budget: the risk of inflation. I have worked for the US Senate Budget Committee, and during my entire time there I did not hear a single senator, or any of the senators’ staff, so much as mention the word ‘inflation’. They do not think about it at all. But you have to if you are throwing such astronomical sums around!

Do you really have to think that much about inflation? Surely a little common sense is enough to see that a currency loses value if you print more money?

Stephanie Kelton: – It is more complicated than that. The relationship between money creation and inflation is far from as clear-cut as many believe. First, the money supply can be increased without it leading to inflation, a phenomenon many central banks have experienced lately. Whether we look at the euro zone, the US or Japan: in all these places they have for several years been trying to reach the official inflation target of just under two per cent, without succeeding. Why that is remains unclear. And second, inflation can shoot up without a growing money supply being identifiable as the sole cause. A well-known example is the oil crisis of the 1970s, which led to rising prices. This is called cost-push inflation. Generally speaking, we economists ought to show considerably more humility. Our knowledge of the different types of inflation and their causes is still far too deficient. The level of scientific knowledge in this area is embarrassingly low.

Flamman / Die Zeit

Interview with Stephanie Kelton

11 Jan, 2021 at 11:50 | Posted in Economics | 6 Comments

Kelton: The notion that states have only a limited amount of money at their disposal comes from a time when, in most countries, the currency was in one form or another tied to precious metals such as gold or silver. Today that is no longer the case. Money is simply printed, or more precisely, created in a computer. It can be multiplied at will.

ZEIT: That sounds as if you were telling a child: sweets don’t make you fat. Take as many as you want!

Kelton: No, no! There is a limit to government spending. But that limit is not determined by the level of debt; it is determined by the rate of inflation.

ZEIT: What exactly do you mean by that? In Germany, when the subject of inflation comes up, we usually think of mass unemployment and the state losing control.

Kelton: I mean something different. Inflation is also a by-product of economic activity. To stay with the image: it arises when the restaurants are no longer half empty but full, and people are crowding the shops. Then, at some point, labour becomes scarce. The result: restaurant staff can push through higher wages, and restaurant owners raise their prices. In such a situation it would be wrong for the state to stimulate the economy further with additional spending, because then it would overheat. But when many people are out of work, resources lie idle that the state can put to use. In most industrialised countries that is exactly the case at the moment.

Die Zeit

Garbage-can econometrics

7 Jan, 2021 at 22:23 | Posted in Economics | 2 Comments

When no formal theory is available, as is often the case, then the analyst needs to justify statistical specifications by showing that they fit the data. That means more than just “running things.” It means careful graphical and crosstabular analysis …

When I present this argument … one or more scholars say, “But shouldn’t I control for everything I can? If not, aren’t my regression coefficients biased due to excluded variables?” But this argument is not as persuasive as it may seem initially.

First of all, if what you are doing is mis-specified already, then adding or excluding other variables has no tendency to make things consistently better or worse. The excluded variable argument only works if you are sure your specification is precisely correct with all variables included. But no one can know that with more than a handful of explanatory variables. 

Still more importantly, big, mushy regression and probit equations seem to need a great many control variables precisely because they are jamming together all sorts of observations that do not belong together. Countries, wars, religious preferences, education levels, and other variables that change people’s coefficients are “controlled” with dummy variables that are completely inadequate to modeling their effects. The result is a long list of independent variables, a jumbled bag of nearly unrelated observations, and often, a hopelessly bad specification with meaningless (but statistically significant with several asterisks!) results.

Christopher H. Achen

This article is one of my absolute favourites. Why? Because it reaffirms yours truly’s view that since there is no absolutely certain knowledge at hand in social sciences — including economics — explicit argumentation and justification ought to play an extremely important role if purported knowledge claims are to be sustainably warranted. As Achen puts it — without careful supporting arguments, “just dropping variables into SPSS, STATA, S or R programs accomplishes nothing.”
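Achen’s point is easy to demonstrate with a couple of lines of simulation. The following is a minimal sketch of my own (in Python, using numpy and statsmodels, and not taken from Achen’s paper): two groups whose slopes differ in sign are pooled, the group difference is ‘controlled’ with a dummy, and the regression dutifully returns a precisely estimated, statistically ‘significant’ coefficient that describes neither group.

```python
# Minimal sketch (my own construction, not from Achen's paper): a dummy
# 'control' only shifts the intercept, so when the two groups actually differ
# in their slopes the pooled coefficient comes out precisely estimated and
# highly 'significant' -- yet it describes neither group.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5_000
group = (rng.random(n) < 0.25).astype(float)   # 25% of observations in group 1
x = rng.normal(size=n)
y = np.where(group == 0, 2.0, -2.0) * x + rng.normal(size=n)  # slope +2 vs -2

X = sm.add_constant(np.column_stack([x, group]))  # constant, x, group dummy
fit = sm.OLS(y, X).fit()
print(fit.params[1], fit.pvalues[1])  # slope around +1 with a tiny p-value,
                                      # although no one in the data has slope +1
```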

Overconfident economists

7 Jan, 2021 at 09:11 | Posted in Economics | 1 Comment

Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began …

It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid …

It would be much healthier for all of us if we could accept our fate, recognize that perfect knowledge will be forever beyond our reach and find happiness with what we have …

Can we economists agree that it is extremely hard work to squeeze truths from our data sets and what we genuinely understand will remain uncomfortably limited? We need words in our methodological vocabulary to express the limits … Those who think otherwise should be required to wear a scarlet-letter O around their necks, for “overconfidence.”

Ed Leamer

Many economists regularly pretend to know more than they do. Often this is a conscious strategy to promote their authority in politics and among policy makers. When economists present their models, it should be mandatory that the models carry warning labels alerting readers to the limited real-world relevance of models built on assumptions known to be absurdly unreal.

Economics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfil its task. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics like yours truly will continue to consider its ultimate arguments a mixture of rather unhelpful metaphors and metaphysics.

Nowadays it has almost become a self-evident truism among economists that you cannot expect people to take your arguments seriously unless they are based on or backed up by advanced econometric modelling. So legions of mathematical-statistical theorems are proved — and heaps of fiction are produced, masquerading as science. But the rigour of the econometric modelling is frequently not matched by any support in the data for the far-reaching assumptions on which it is built. This is a dire warning of the need to change the direction of economics.

NAIRU — closer to religion than science

5 Jan, 2021 at 10:55 | Posted in Economics | 5 Comments

Once we see how weak the foundations for the natural rate of unemployment are, other arguments for pursuing rates of unemployment economists once thought impossible become more clear. Wages can increase at the expense of corporate profits without causing inflation …

The harder we push on improving output and employment, the more we learn how much we can achieve on those two fronts. That hopeful idea is the polar opposite of a natural, unalterable rate of unemployment. And it’s an idea and attitude that we need to embrace if we’re to have a shot at fully recovering from the wreckage of the Great Recession.

Mike Konczal / Vox

NAIRU does not hold water simply because it has not existed for the last 50 years. But still today ‘New Keynesian’ macroeconomists use it — and its cousin the Phillips curve — as a fundamental building block in their models. Why? Because without it ‘New Keynesians’ have to give up their — again and again empirically falsified — neoclassical view of the long-run neutrality of money and the simplistic idea of inflation as an excess-demand phenomenon.

The NAIRU approach is not only of theoretical interest. Far from it.

The real damage done is that policymakers that take decisions based on NAIRU models systematically tend to implement austerity measures and kill off economic expansion. Peddling this flawed illusion only gives rise to unnecessary and costly stagnation and unemployment.

Defenders of the [NAIRU theory] might choose to respond to these empirical findings by arguing that the natural rate of unemployment is time varying. But I am unaware of any theory which provides us, in advance, with an explanation of how the natural rate of unemployment varies over time. In the absence of such a theory the [NAIRU theory] has no predictive content. A theory like this, which cannot be falsified by any set of observations, is closer to religion than science.

Roger Farmer

Mainstream economics finally made it …

3 Jan, 2021 at 12:54 | Posted in Economics | Leave a comment


Wooh! So this is reality!

NAIRU — a harmful fairy tale

2 Jan, 2021 at 10:24 | Posted in Economics | 19 Comments

The NAIRU story has always had a very clear policy implication — attempts to promote full employment are doomed to fail, since governments and central banks can’t push unemployment below the critical NAIRU threshold without causing harmful runaway inflation.

Although a lot of mainstream economists and politicians have a touching faith in the NAIRU fairy tale, it doesn’t hold water when scrutinized.

One of the main problems with NAIRU is that it is essentially a timeless long-run equilibrium attractor to which actual unemployment (allegedly) has to adjust. If that equilibrium is itself changing — and in ways that depend on the process of getting to the equilibrium — well, then we can’t really be sure what that equilibrium will be without contextualizing unemployment in real historical time. And when we do, we will see how seriously wrong we go if we omit demand from the analysis. Demand policy has long-run effects and matters also for structural unemployment — and governments and central banks can’t just look the other way and legitimize their passivity regarding unemployment by referring to NAIRU.

NAIRU does not hold water simply because it does not exist — and to base economic policy on such a weak theoretical and empirical construct is nothing short of writing out a prescription for self-inflicted economic havoc.

NAIRU wisdom holds that a rise in the (real) interest rate will only affect inflation, not structural unemployment. We argue instead that higher interest rates slow down technological progress – directly by depressing demand growth and indirectly by creating additional unemployment and depressing wage growth.

As a result, productivity growth will fall, and the NAIRU must increase. In other words, macroeconomic policy has permanent effects on structural unemployment and growth – the NAIRU as a constant “natural” rate of unemployment does not exist.

This means we cannot absolve central bankers from charges that their anti-inflation policies contribute to higher unemployment. They have already done so. Our estimates suggest that overly restrictive macro policies in the OECD countries have actually and unnecessarily thrown millions of workers into unemployment by a policy-induced decline in productivity and output growth. This self-inflicted damage must rest on the conscience of the economics profession.

The Coase Theorem at 60

19 Dec, 2020 at 23:01 | Posted in Economics | Comments Off on The Coase Theorem at 60

Steven Medema — who knows more about the theories of Ronald Coase than any other economist yours truly is familiar with — has written an incisive and learned article about the history of the Coase theorem — The Coase Theorem at Sixty — in the latest issue of the Journal of Economic Literature. Medema concludes:

The Coase theorem is, by any number of measures, one of the most curious results in the history of economic ideas. Its development has been shrouded in misremembrances, political controversies, and all manner of personal and communal confusions and serves as an exemplar of the messy process by which new ideas become scientific knowledge. There is no unique statement of the Coase theorem; there are literally dozens of different statements of it, many of which are inconsistent with others and appear to mark significant departures from what Coase had argued in 1960.

Applying the theorem (nota bene, as emphasized by Medema, the theorem does not in its standard rendering even come close to Coase’s own thoughts on the issue) in the real world can certainly be dangerous:

Willingness to walk away from an ‘unfair’ offer is [one] reason why the predictions of the Coase theorem often fail. I had discovered this firsthand many years earlier in Rochester. Our home there had a willow tree in the backyard … The neighbor hated that tree. He asked me to have the tree removed …

I knew the Coase theorem … So I went to talk to my neighbor and told him that, while the tree did not bother me, if he felt strongly about it, I would let him arrange to remove it at his own expense. He thought this was the most outrageous suggestion he ever heard, slammed the door in my face, and never broached the subject again.

When people are given what they consider to be unfair offers, they can get angry enough to punish the other party, even at some cost to themselves. That is the basic lesson of the Ultimatum Game. As the willow tree story illustrates, the same can occur in situations in which the Coase theorem is often applied.

For yours truly’s own take on the Coase theorem — in Swedish only — see here or my “Dr Pangloss, Coase och välfärdsteorins senare öden,” Zenit 4/1996.

Why everything we know about modern economics is wrong

19 Dec, 2020 at 20:12 | Posted in Economics | 7 Comments

The proposition is about as outlandish as it sounds: Everything we know about modern economics is wrong. And the man who says he can prove it doesn’t have a degree in economics. But Ole Peters is no ordinary crank. A physicist by training, his theory draws on research done in close collaboration with the late Nobel laureate Murray Gell-Mann, father of the quark …

His beef is that all too often, economic models assume something called “ergodicity.” That is, the average of all possible outcomes of a given situation informs how any one person might experience it. But that’s often not the case, which Peters says renders much of the field’s predictions irrelevant in real life. In those instances, his solution is to borrow math commonly used in thermodynamics to model outcomes using the correct average …

If Peters is right — and it’s a pretty ginormous if — the consequences are hard to overstate. Simply put, his “fix” would upend three centuries of economic thought, and reshape our understanding of the field as well as everything it touches …

Peters asserts his methods will free economics from thinking in terms of expected values over non-existent parallel universes and focus on how people make decisions in this one. His theory will also eliminate the need for the increasingly elaborate “fudges” economists use to explain away the inconsistencies between their models and reality.

Brandon Kochkodin / BloombergQuint

Ole Peters’ fundamental critique of (mainstream) economics involves arguments about ergodicity and the all-important difference between time averages and ensemble averages. These are difficult concepts that many students of economics have problems understanding. So let me just try to explain the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and pay me €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’re taught to be the only thing consistent with being rational. You would arrest the arrow of time by imagining six different ‘parallel universes’ where the independent outcomes are the numbers from one to six, and then weigh them using their stochastic probability distribution. Calculating the expected value of the gamble — the ensemble average — by averaging over all these weighted outcomes, you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 5/6 × (−€1 billion) + 1/6 × €10 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting one’s economy is not a very rosy perspective for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs heavier than the 17% chance of becoming enormously rich. By computing the time average — imagining one real universe where the six different but dependent outcomes occur consecutively — we would soon be aware of our assets disappearing, and conclude a fortiori that it would be irrational to accept the gamble.

From a mathematical point of view, you can (somewhat non-rigorously) describe the difference between ensemble averages and time averages as a difference between arithmetic averages and geometric averages. Tossing a fair coin and gaining 20% on the stake (S) if winning (heads) and having to pay 20% on the stake (S) if losing (tails), the arithmetic average of the return on the stake, assuming the outcomes of the coin tosses to be independent, would be [(0.5 × 1.2S + 0.5 × 0.8S) − S]/S = 0%. If the two outcomes of the toss are instead treated as dependent — experienced consecutively — the relevant time average is the geometric average return, √(1.2S × 0.8S)/S − 1 ≈ −2%.
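For readers who prefer numbers to formulas, here is a small simulation sketch of the coin-toss bet (my own illustration; the number of paths and tosses is arbitrary):

```python
# Illustrative sketch of the +/-20% coin-toss bet: ensemble (arithmetic)
# average vs time (geometric) average of the per-toss growth factor.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_tosses = 100_000, 16
factors = rng.choice([1.2, 0.8], size=(n_paths, n_tosses))  # win/lose 20%

ensemble_growth = factors.mean()               # ~1.00: 0% per toss 'on average'
time_growth = np.exp(np.log(factors).mean())   # ~0.98: about -2% per toss

wealth = factors.prod(axis=1)                  # terminal wealth on each path
print(ensemble_growth, time_growth)
print(wealth.mean(), np.median(wealth))        # mean ~1.0, median ~0.72
```

The ensemble view says nothing is lost on average, yet the typical single path shrinks by about 2% per toss, which is why the median terminal wealth collapses even though the mean does not.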

Why is the difference between ensemble and time averages of such importance in economics? Well, basically, because when assuming the processes to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because we here envision two parallel universes (markets) where the asset price falls in one universe (market) by 50% to €50, and in another universe (market) goes up by 50% to €150, giving an average of €100 ((150 + 50)/2). The time average for this asset would be €75 – because we here envision one universe (market) where the asset price first rises by 50% to €150 and then falls by 50% to €75 (0.5 × 150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.
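The arithmetic of this two-step example can be checked directly (a trivial sketch, using the same numbers as above):

```python
# The two-step price example above, checked directly.
p0, up, down = 100.0, 1.5, 0.5
ensemble_avg = (p0 * up + p0 * down) / 2   # two parallel one-step markets: 100.0
time_path = p0 * up * down                 # one market, both steps in sequence: 75.0
print(ensemble_avg, time_path)
```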

On a more economic-theoretical level, the difference between ensemble and time averages also highlights the problems concerning the neoclassical theory of expected utility that I have raised before (e. g.  here).

When applied to the mainstream theory of expected utility, one thinks in terms of ‘parallel universes’ and asks what the expected return of an investment is, calculated as an average over these ‘parallel universes.’ In our coin-tossing example, it is as if one supposes that various ‘I’ are tossing a coin and that the losses of many of them will be offset by the huge profits that one of these ‘I’ makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the ‘non-parallel universe’ in which we live.

Time averages give a more realistic answer: one thinks in terms of the only universe we actually live in and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and ‘black swans’ are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked to a multiplicative process (as e. g. investment returns with ‘compound interest’) which is basically non-ergodic.


Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time-average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When our assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.

What can RCTs tell us?

16 Dec, 2020 at 17:22 | Posted in Economics | 1 Comment

We seek to promote an approach to RCTs that is tentative in its claims and that avoids simplistic generalisations about causality and replaces these with more nuanced and grounded accounts that acknowledge uncertainty, plausibility and statistical probability …

Whilst promoting the use of RCTs in education we also need to be acutely aware of their limitations … Whilst the strength of an RCT rests on strong internal validity, the Achilles heel of the RCT is external validity … Within education and the social sciences a range of cultural conditions is likely to influence the external validity of trial results across different contexts. It is precisely for this reason that qualitative components of an evaluation, and particularly the development of plausible accounts of generative mechanisms, are so important …

Highly recommended reading.

Nowadays it is widely believed among mainstream economists that the scientific value of randomisation — contrary to other methods — is totally uncontroversial and that randomised experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism on the alleged ’experimental turn’ in economics. Strictly seen, randomisation does not guarantee anything.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. Causes deduced in an experimental setting still have to show that they come with an export-warrant to their target populations.

The almost religious belief with which its propagators — like last year’s ‘Nobel prize’ winners Duflo, Banerjee and Kremer — portray it cannot hide the fact that RCTs cannot be taken for granted to give generalisable results. That something works somewhere is no warrant for believing that it will work for us here, or even that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomisation is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

‘Nobel prize’ winners like Duflo et consortes think that economics should be based on evidence from randomised experiments and field studies. They want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems the way plumbers do. But that modern-day ‘marginalist’ approach surely can’t be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

The elite school illusion

15 Dec, 2020 at 19:28 | Posted in Economics | 1 Comment

 

A great set of lectures — but yours truly still warns his students that regression-based averages are something we have reason to be cautious about.

Suppose we want to estimate the average causal effect of a dummy variable (T) on an observed outcome variable (O). In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

O = α + βT + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are ‘treated’ (T=1) may have causal effects equal to -100 and those ‘not treated’ (T=0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
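A minimal simulation sketch of this masking effect may help (my own toy example, with randomly assigned treatment and arbitrary numbers, so the details are illustrative only):

```python
# Toy example (mine, not from the lectures): the OLS 'average causal effect'
# is roughly 0, while the effect is +100 for one latent type and -100 for the
# other. With a single dummy regressor, OLS equals the difference in means.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
latent_type = rng.integers(0, 2, n)                  # two unobserved types
tau = np.where(latent_type == 0, 100.0, -100.0)      # heterogeneous effects
T = rng.integers(0, 2, n)                            # randomly assigned dummy
O = 50.0 + tau * T + rng.normal(0.0, 10.0, n)        # observed outcome

beta_hat = O[T == 1].mean() - O[T == 0].mean()       # OLS slope on the dummy
print(f"pooled effect: {beta_hat:.1f}")              # ~0

for g in (0, 1):                                     # the hidden heterogeneity
    b = O[(T == 1) & (latent_type == g)].mean() - O[(T == 0) & (latent_type == g)].mean()
    print(f"effect for type {g}: {b:+.1f}")          # ~ +100 and ~ -100
```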

The heterogeneity problem does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of OLS estimates that economists produce every year.

MMT perspectives on rising interest rates

14 Dec, 2020 at 12:24 | Posted in Economics | 34 Comments

The Bank of England is today wholly-owned by the UK government, and no other body is allowed to create UK pounds. It can create digital pounds in the payments system that it runs, thus marking up and down the accounts of banks, the government and other public institutions. It also acts as the bank of the government, facilitating its payments. The Bank of England also determines the bank rate, which is the interest rate it pays to commercial banks that hold money (reserves) at the Bank of England …

The interest rate that the UK government pays is a policy variable determined by the Bank of England. Furthermore, it is not the Bank of England’s remit to bankrupt the government that owns it. The institutional setup ensures that the Bank of England supports the liquidity and solvency of the government to the extent that it becomes an issuer of currency itself. Selling government bonds, it can create whatever amount of pounds it deems necessary to fulfil its functions. Given that the Bank of England stands ready to purchase huge amounts of gilts on the secondary market (for “used” gilts), it is clear to investors that gilts are just as good as reserves. There is no risk of default …

The government of the UK cannot “run out of money”. When it spends more into the economy than it collects through taxes, a “public deficit” is produced. This means that the private sector saves a part of its monetary income which it has not spent on paying taxes (yet). When the government spends less than it collects in taxes, a “public surplus” results. This reduces public debt. That public debt to GDP ratio can be heavily influenced by GDP growth, which explains the fall in the public debt to GDP ratio in the second half of the 20th century …

So, do rising interest rates in the future create a problem for the UK government? No. The Bank of England is the currency issuer. There is nothing that stops it from paying what HM Treasury instructs it to pay. Gilts can be issued in this process as an option. The government’s ability to pay is not put into doubt since the Bank of England acts as a lender of last resort, offering to buy up gilts on the market so that the price of gilts can never crash. Higher interest rates cannot bankrupt the UK government.

Dirk Ehnts
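A rough numerical illustration of the debt-ratio point in the quote (my own stylized numbers; interest payments and price deflators are ignored): with a persistent deficit of 2 per cent of GDP and nominal GDP growth of 5 per cent a year, the debt-to-GDP ratio does not explode but settles down.

```python
# Stylized arithmetic only: a constant 2%-of-GDP deficit with 5% nominal GDP
# growth makes the debt-to-GDP ratio drift towards deficit/growth = 0.4,
# rather than explode (interest payments and price deflators are ignored).
debt, gdp = 100.0, 100.0            # start with debt at 100% of GDP
for year in range(1, 51):
    debt += 0.02 * gdp              # this year's deficit adds to the debt stock
    gdp *= 1.05                     # nominal GDP grows 5%
    if year % 10 == 0:
        print(year, round(debt / gdp, 3))
```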

One of the main reasons behind the lack of understanding that mainstream economists repeatedly demonstrate when it comes to these policy issues is related to the loanable funds theory and the view that governments — in analogy with individual households — have to have income before they can spend. This is, of course, totally wrong. Most governments nowadays are monopoly issuers of their own currencies, not users.

The loanable funds theory is in many regards nothing but an approach where the ruling rate of interest in society is — pure and simple — conceived as nothing else than the price of loans or credit, determined by supply and demand in the same way as the price of bread and butter on a village market. In the traditional loanable funds theory the amount of loans and credit available for financing investment is constrained by how much saving is available. Saving is the supply of loanable funds, investment is the demand for loanable funds and is assumed to be negatively related to the interest rate.

There are many problems with this theory.

Loanable funds theory essentially reduces modern monetary economies to something akin to barter systems — something they definitely are not. As emphasised especially by Minsky, to understand and explain how much investment/loaning/crediting is going on in an economy, it’s much more important to focus on the workings of financial markets than to stare at accounting identities like S = Y – C – G. The problems we meet on modern markets today have more to do with inadequate financial institutions than with the size of loanable-funds-savings.

A further problem in the traditional loanable funds theory is that it assumes that saving and investment can be treated as independent entities. This is seriously wrong.  There are always (at least) two parts in an economic transaction. Savers and investors have different liquidity preferences and face different choices — and their interactions usually only take place intermediated by financial institutions. This, importantly, also means that there is no “direct and immediate” automatic interest mechanism at work in modern monetary economies. What this ultimately boils down to is that what happens at the microeconomic level — both in and out of equilibrium —  is not always compatible with the macroeconomic outcome. The fallacy of composition has many faces — loanable funds is one of them.

All real economic activities nowadays depend on a functioning financial machinery. But institutional arrangements, states of confidence, fundamental uncertainties, asymmetric expectations, the banking system, financial intermediation, loan granting processes, default risks, liquidity constraints, aggregate debt, cash flow fluctuations, etc., etc. — things that play decisive roles in channeling money/savings/credit — are more or less left in the dark in modern formalisations of the loanable funds theory. Thanks to MMT, that kind of evasion of the real policy issues we face today is now met with severe questioning and justified critique.

