Mainstream economics — an obscurantist waste of time

30 November, 2017 at 12:54 | Posted in Economics | 4 Comments

One may perhaps distinguish between obscure writers and obscurantist writers. The former aim at truth, but do not respect the norms for arriving at truth, such as focusing on causality, acting as the Devil’s Advocate, and generating falsifiable hypotheses. The latter do not aim at truth, and often scorn the very idea that there is such a thing as the truth …

The authors I have singled out are far from marginal, and in fact are at the core of the profession. Their numerous awards testify to this fact.

These writings have in common a somewhat uncanny combination of mathematical sophistication on the one hand and conceptual naiveté and empirical sloppiness on the other. The mathematics, which could have been a tool, is little more than a toy. The steam engine was invented by Hero of Alexandria in the first century A.D., but he considered it mainly as a toy, not as a tool that could be put to productive use. He did apparently use it, though, for opening temple doors, so his engine wasn’t completely idling. Hard obscurantist models, too, may have some value as tools, but mostly they are toys.

I have pointed to the following objectionable practices:

2. Adopting huge simplifications that make the empirical relevance of the results essentially nil …

3. Assuming that the probabilities in a stochastic process are known to the agents … or even in some sense optimal …

7. Assuming that agents can choose optimal preferences …

11. Adhering to the instrumental Chicago-style philosophy of explanation, which emphasizes as-if rationality and denies that the realism of assumptions is a relevant issue.

Jon Elster

It’s hard not to agree with Elster’s critique of mainstream economics and its practice of letting models and procedures become ends in themselves, without consideration of their lack of explanatory value as regards real-world phenomena. For more on modern mainstream economics and this kind of wilfully silly obscurantism, yours truly self-indulgently recommends reading this article on RBC or this article on mainstream axiomatics.

Many mainstream economists working in the field of economic theory think that their task is to give us analytical truths. That is great — from a mathematical and formal logical point of view. In science, however, it is rather uninteresting and totally uninformative! The framework of the analysis is too narrow. Even if economic theory gives us ‘logical’ truths, that is not what we are looking for as scientists. We are interested in finding truths that give us new information and knowledge of the world in which we live.

Scientific theories are theories that ‘refer’ to the real world, where axioms and definitions do not take us very far. To be of interest for an economist or social scientist who wants to understand, explain, or predict real-world phenomena, the pure theory has to be ‘interpreted’ — it has to be ‘applied’ theory. An economic theory that does not go beyond proving theorems and conditional ‘if-then’ statements — and does not make assertions and put forward hypotheses about real-world individuals and institutions — is of little consequence for anyone wanting to use theories to better understand, explain or predict real-world phenomena.

Mainstream theoretical economics has no empirical content whatsoever. And it certainly has no relevance whatsoever to a scientific endeavour of expanding real-world knowledge. This should come as no surprise. Building theories and models on unjustified, patently ridiculous assumptions we know people never conform to does not deliver real science. Real and reasonable people have no reason to believe in ‘as-if’ models of ‘rational’ robot-imitations acting and deciding in a Walt Disney world characterised by ‘common knowledge,’ ‘full information,’ ‘rational expectations,’ zero transaction costs, given stochastic probability distributions, genuine uncertainty reduced to calculable risk, and other laughable nonsense assumptions of the same ilk. Science fiction is not science.

For decades now, economics students have been complaining about the way economics is taught. Their complaints are justified. Force-feeding young and open-minded people with unverified and useless autistic mainstream neoclassical theories and models cannot be the right way to develop a relevant and realistic economic science.

Much work done in mainstream theoretical economics is devoid of any explanatory interest. And not only that. Seen from a strictly scientific point of view, it has no value at all. It is a waste of time. And as so many have been experiencing in modern times of austerity policies and market fundamentalism — a very harmful waste of time.

The real harm done by Bayesianism

30 November, 2017 at 09:02 | Posted in Theory of Science & Methodology | 2 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in what makes up a good scientific explanation, Richard Miller’s Fact and Method is a must-read. His incisive critique of Bayesianism is still unsurpassed.

One of yours truly’s favourite ‘problem situating lecture arguments’ against Bayesianism goes something like this: Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians who do not eat turkeys,” and that every day you see the sunrise confirms your belief. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
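A minimal sketch in Python of the turkey’s daily updating, with made-up numbers for the prior and for the daily risk of being eaten under the rival hypothesis (both chosen purely for illustration):

```python
# Hypothetical 'Bayesian turkey' updating.
# H  = "people are nice vegetarians and never eat turkeys"  -> P(e_t | H) = 1
# ~H = "I will be eaten on some day before Christmas"       -> P(e_t | ~H) < 1
# e_t = "still not eaten on day t"

p_h = 0.5               # made-up prior belief in H
p_e_given_h = 1.0       # surviving another day is certain if H is true
p_e_given_not_h = 0.99  # made-up small daily risk of being eaten if ~H is true

for day in range(1, 351):
    # total probability of surviving this day
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    # Bayes' Rule: P(H | e) = P(e | H) P(H) / P(e)
    p_h = p_e_given_h * p_h / p_e
    if day % 50 == 0:
        print(f"day {day:3d}: P(H | survived so far) = {p_h:.4f}")

# The posterior climbs steadily towards 1 right up to the day the axe falls:
# perfectly 'rational' updating, and perfectly useless as a guide to Christmas.
```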

For more on my own objections to Bayesianism, see my Bayesianism — a patently absurd approach to science and One of the reasons I’m a Keynesian and not a Bayesian.

The first speculative bubble

28 November, 2017 at 14:29 | Posted in Economics | 6 Comments

 

Carmina Burana

27 November, 2017 at 18:40 | Posted in Varia | Comments Off on Carmina Burana

 

Chicago economics delirium VSOP

26 November, 2017 at 13:24 | Posted in Economics | 3 Comments

Macroeconomics was born as a distinct field in the 1940s (sic!), as a part of the intellectual response to the Great Depression. The term then referred to the body of knowledge and expertise that we hoped would prevent the recurrence of that economic disaster. My thesis in this lecture is that macroeconomics in this original sense has succeeded: Its central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.

Robert Lucas (2003)

In the past, I think you have been quoted as saying that you don’t even believe in the possibility of bubbles.

Eugene Fama: I never said that. I want people to use the term in a consistent way. For example, I didn’t renew my subscription to The Economist because they use the word bubble three times on every page. Any time prices went up and down—I guess that is what they call a bubble. People have become entirely sloppy. People have jumped on the bandwagon of blaming financial markets. I can tell a story very easily in which the financial markets were a casualty of the recession, not a cause of it.

That’s your view, correct?

Fama: Yeah.

John Cassidy

The purported strength of Chicago — New Classical — macroeconomics is that it has firm anchorage in preference-based microeconomics, and especially that decisions are taken by intertemporal utility-maximizing ‘forward-looking’ individuals.

To some of us, however, this has come at too high a price. The quasi-religious insistence that macroeconomics has to have microfoundations — without ever presenting either ontological or epistemological justifications for this claim — has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations.

That anyone should take that kind of stuff seriously is totally and unbelievably ridiculous. Or as Robert Solow has it:

Suppose someone sits down where you are sitting right now and announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon. Now, Bob Lucas and Tom Sargent like nothing better than to get drawn into technical discussions, because then you have tacitly gone along with their fundamental assumptions; your attention is attracted away from the basic weakness of the whole story. Since I find that fundamental framework ludicrous, I respond by treating it as ludicrous – that is, by laughing at it – so as not to fall into the trap of taking it seriously and passing on to matters of technique.

Robert Solow

Chicago economics delirium

26 November, 2017 at 10:48 | Posted in Economics | 1 Comment

I believe there is no other proposition in economics which has more solid empirical evidence supporting it than the Efficient Market Hypothesis. That hypothesis has been tested and, with very few exceptions, found consistent with the data in a wide variety of markets …

Michael Jensen

I am skeptical about the argument that the subprime mortgage problem will contaminate the whole mortgage market, that housing construction will come to a halt, and that the economy will slip into a recession. Every step in this chain is questionable and none has been quantified. If we have learned anything from the past 20 years it is that there is a lot of stability built into the real economy.

Robert Lucas

Economics and the mirage of ‘real science’

26 November, 2017 at 10:05 | Posted in Economics | Comments Off on Economics and the mirage of ‘real science’

The mirage of ‘real science’, whose phantasmatic power is immense among economists, points towards another Canguilhemian category, one that perhaps gives the most precise characterisation of the epistemological situation of economics: the category of ‘scientific ideology’ … The category of scientific ideology belongs first of all purely to the register of the history and philosophy of science, and designates “the explicit ambition to be a science in imitation of some already constituted model of science (…) Scientific ideology (…) is a belief that squints towards an already established science, whose prestige it acknowledges and whose style it seeks to imitate.”

This formidable conceptual insight in fact almost understates the truth where mainstream economists are concerned, many of whom are not content to think of themselves as physicists of the economy but quite sincerely believe they see straighter than the science they squint at. After all, is economic theory not perfectly unified, and does its single model not capture Chicago derivatives markets, the productive behaviour of sub-Saharan farmers and the economics of criminal and addictive behaviour alike, while poor little physics still struggles to unify quantum mechanics and general relativity?

Going even beyond what Georges Canguilhem had imagined, economics does not merely squint: it adds delirium. And, without playing too much on the paronymy, it is in desire, or in a certain disorder of desire, that the origin of the delirium must be sought, in this case the desire characteristic of a scientific ideology: the desire to be a science. In the case of economics one can understand why it went wrong, in fact in proportion to how strongly it was aroused. For economics has been exposed, like no other social science, to the demon of the Galilean temptation: is it not the quantitative social science par excellence, the science of numbered social relations? It is because it is fundamentally monetary that economics has an immediately quantifiable substrate. So it let itself believe that quantity exhausted economic being, and concluded all the more quickly that its domain of facts could in principle be captured by laws, that is, that the numbers of the economy could be grasped in the universal structure of their functional relations, alias the ‘laws of economics’: if something is quantifiable, it is mathematisable, and if it is mathematisable, it is law-like. Such has been the Galilean fantasy of economic science.

Frédéric Lordon

Richard Thaler and the printed fake flies

26 November, 2017 at 09:16 | Posted in Economics | 2 Comments

Richard Thaler … starts from the idea that if individuals are incapable, because of a whole host of cognitive and cultural ‘biases’, of making the best decisions, they must be helped by being guided in their ‘everyday’ choices. Hence his idea of imposing decisions on individuals while letting them believe they retain full freedom of choice. The best-known application cited in the book is the fake flies printed, in the early 2000s, in the urinals of Amsterdam airport to encourage users to aim accurately and so reduce cleaning costs. And reduced they were, by 80 per cent! These techniques of ‘libertarian paternalism’ were then applied in the Anglo-Saxon countries, with the creation of a ‘Nudge Squad’ in 2009 by the Obama administration, followed by a ‘Nudge Unit’ in 2010 in England under Prime Minister David Cameron, with Richard Thaler at the helm.

Thaler explains, for example, that when Barack Obama introduced a tax break for taxpayers to encourage households to spend after the 2008 crisis, he had the choice between paying a bonus of 1,200 dollars in a single annual instalment or spreading it out at 100 dollars a month. “In a world of Econs [rational individuals — ed.], this choice would not matter. But if we recognise that middle-class taxpayers already spend their whole pay cheque every month, then by paying them the bonus only once a year they will be more inclined to save it or use it to pay off their debts. Since the purpose of this tax cut was to stimulate spending, the administration made a wise choice in opting to spread it out” …

Richard Thaler gave another, deliberately absurd example of how we would all have to behave if we really were, as many economists still believe, “Econs, those highly intelligent beings capable of performing the most complex calculations in any circumstances but entirely devoid of emotions.” “An Econ,” he writes, “would not, for instance, expect to receive a present on his birthday. What is the point of celebrating that arbitrary date? An Econ would even be puzzled by the idea of receiving a present and, given the choice, would rather receive cash directly, since that is what best satisfies whatever is optimal for him. But unless you are married to an economist, I would not advise you to settle for making a bank transfer to your spouse on her or his birthday.” And for Richard Thaler, although economists know very well that nobody behaves this way in real life, that has not stopped them from churning out tons of theories based on this type of reasoning for decades.

Christophe Alix

Honour him

25 November, 2017 at 19:47 | Posted in Varia | Comments Off on Honour him

 

Swedish housing bubble soon to burst

25 November, 2017 at 14:59 | Posted in Economics | 4 Comments

High and rising household indebtedness poses the greatest risk to the Swedish economy. Household indebtedness has been increasing in Sweden since the mid-1990s. Home ownership financed by high levels of mortgage debt with variable interest rates makes households vulnerable to falling house prices and increasing interest rates …

In the present Economic Commentary, we extend the earlier analysis by using updated data covering the period up to September 2017 … Our main findings can be summarised as follows:

1. Household debt continues to increase faster than income. The average DTI ratio increased from 326 per cent in September 2016 to 338 per cent in September 2017.
2. More households have high debts relative to their income. In 2017, 260 000 households had a DTI ratio exceeding 600 per cent. This is an increase of 27 000 households compared to 2016.
3. Household indebtedness is increasing for all income groups and age groups.

Sveriges Riksbank

House prices are increasing fast in the EU, and more so in Sweden than in any other member state. Sweden’s house price boom started in the mid-1990s, and looking at the development of real house prices during the last three decades there are reasons to be deeply worried. As even the Riksbank now admits, the indebtedness of the Swedish household sector has risen to alarmingly high levels.

Yours truly has been trying to argue with ‘very serious people’ that it’s really high time to ‘take away the punch bowl.’ Mostly I have felt like the voice of one calling in the desert.

The Swedish housing market is living on borrowed time. It’s really high time to take away the punch bowl. What is especially worrying is that although the aggregate net asset position of the Swedish households is still on the solid side, an increasing proportion of those assets is illiquid. When the inevitable drop in house prices hits the banking sector and the rest of the economy, the consequences will be enormous.

It hurts when bubbles burst …

Emma Frans and the art of separating science from nonsense

24 November, 2017 at 19:15 | Posted in Theory of Science & Methodology | 2 Comments

Emma Frans’ deservedly award-winning book Larmrapporten is a funny, knowledgeable and oh-so-necessary reckoning with all the pseudo-scientific nonsense that washes over us in the media these days. Not least in social media, a mass of ‘alternative facts’ and nonsense is being spread.

Although I have warmly recommended the book to students, friends and acquaintances, I cannot refrain from pointing out here (among mostly academically trained readers) that the book has one small weakness. It concerns the treatment of evidence-based knowledge, and especially the picture of what is usually called the ‘gold standard’ of scientific evidence: randomized controlled trials (RCTs).

Frans writes:

RCTs are the type of study generally considered to have the highest evidential value. This is because chance decides who is exposed to the intervention and who serves as a control. If the study is large enough, chance will ensure that the only meaningful difference between the groups being compared is whether they were exposed to the intervention or not. If a difference in outcomes between the groups can then be observed, we can feel confident that it is due to the intervention.

This is a fairly standard presentation of the (alleged) advantages of RCTs, as seen by their advocates.

The problem is just that, from a strictly scientific point of view, it is wrong!

Let me explain why I think so with an illustrative example from the world of schools.

Continue Reading Emma Frans and the art of separating science from nonsense…

Randomization — a philosophical device gone astray

23 November, 2017 at 10:30 | Posted in Theory of Science & Methodology | 1 Comment

When giving courses in the philosophy of science, yours truly has often had David Papineau’s book Philosophical Devices (OUP 2012) on the reading list. Overall it is a good introduction to many of the instruments used when performing methodological and science-theoretical analyses of economics and other social science issues.

Unfortunately, the book has also fallen prey to the randomization hype that scourges the sciences nowadays.

The hard way to show that alcohol really is a cause of heart disease is to survey the population … But there is an easier way … Suppose we are able to perform a ‘randomized experiment.’ The idea here is not to look at correlations in the population at large, but rather to pick out a sample of individuals, and arrange randomly for some to have the putative cause and some not.

The point of such a randomized experiment is to ensure that any correlation between the putative cause and effect does indicate a causal connection. This works because the randomization ensures that the putative cause is no longer itself systematically correlated with any other properties that exert a causal influence on the putative effect … So a remaining correlation between the putative cause and effect must mean that they really are causally connected.

The problem with this simplistic view is that the claims Papineau makes on behalf of randomization are both exaggerated and invalid:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization a fortiori does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• Since most real-world experiments and trials build on a single randomization, contemplating what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you do not draw false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.

Randomization is not a panacea — it is not the best method for all questions and circumstances. Papineau and other proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.

Top 10 RCT critiques

22 November, 2017 at 15:17 | Posted in Theory of Science & Methodology | Comments Off on Top 10 RCT critiques

• Basu, Kaushik (2014) Randomisation, Causality and the Role of Reasoned Intuition

Fammi abbracciare una donna (personal)

22 November, 2017 at 09:43 | Posted in Varia | Comments Off on Fammi abbracciare una donna (personal)


As always, for you, Jeanette Meyer.

Though I speak with the tongues of angels,
If I have not love …
My words would resound with but a tinkling cymbal.
And though I have the gift of prophecy …
And understand all mysteries …
and all knowledge …
And though I have all faith
So that I could remove mountains,
If I have not love …
I am nothing.

Vienna

21 November, 2017 at 22:20 | Posted in Varia | Comments Off on Vienna

 

Randomized experiments — a dangerous idolatry

21 November, 2017 at 19:08 | Posted in Theory of Science & Methodology | 1 Comment

Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — especially randomized experiments (RCTs) — can help us to answer questions concerning the external validity of economic models. In their view, they are, more or less, tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’

It is widely believed among economists that the scientific value of randomization — contrary to other methods — is totally uncontroversial and that randomized experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘experimental turn’ in economics. Strictly seen, randomization does not guarantee anything.

Assume that you are involved in an experiment where we examine how the work performance of Chinese workers (A) is affected by a specific ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in doing an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P(A|B).

External validity and extrapolation are founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which some P’(A|B) applies. Sure, if one can convincingly show that P and P’ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance or homogeneity is, at least for an epistemological realist, far from satisfactory.
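To make the export problem concrete, here is a toy simulation under invented assumptions: the treatment effect depends entirely on a background characteristic (‘young’ versus ‘old’) whose share differs between the trial population and the target population, so even an ideally randomized estimate from the former need not carry over to the latter.

```python
import numpy as np

rng = np.random.default_rng(0)

def average_effect(share_young, n=100_000):
    """Hypothetical world: the treatment raises the outcome by 2 for 'young'
    units and by 0 for 'old' units, so the average effect depends entirely on
    the population mix, not on the treatment itself."""
    young = rng.random(n) < share_young
    treated = rng.random(n) < 0.5                 # ideal randomization
    effect = np.where(young, 2.0, 0.0)
    y = effect * treated + rng.normal(0, 1, n)
    return y[treated].mean() - y[~treated].mean()

print("trial population (80% young):", round(average_effect(0.8), 2))   # roughly 1.6
print("target population (20% young):", round(average_effect(0.2), 2))  # roughly 0.4
```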

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific experiments to the specific real-world structures and situations that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ (X=1) may have causal effects equal to -100 and those ‘not treated’ (X=0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
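A minimal simulation of the ±100 illustration above, with purely hypothetical numbers, shows how the OLS estimate of the average effect can look precise while revealing nothing about the underlying heterogeneity:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Heterogeneous individual causal effects, as in the illustration in the text:
# half the population would gain 100 from treatment, half would lose 100.
tau = np.where(rng.random(n) < 0.5, 100.0, -100.0)

x = rng.integers(0, 2, n)                 # ideal random assignment (0/1)
y = 50 + tau * x + rng.normal(0, 5, n)

# OLS of Y on X recovers only the *average* causal effect ...
X = np.column_stack([np.ones(n), x])
alpha_hat, beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"estimated average effect: {beta_hat:.2f}")          # close to 0

# ... while the individual effects it averages over are +100 and -100.
print(f"individual effects range: {tau.min():.0f} to {tau.max():.0f}")
```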

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we ‘export’ them to our ‘target systems’ — we have to show that they hold not only under ceteris paribus conditions. Otherwise they are of limited value for our understanding, explanation or prediction of real economic systems.

Most ‘randomistas’ underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which its proponents portray the method cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warrant for believing it will work for us here, or even that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomization is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

DSGE models are missing the point

20 November, 2017 at 13:41 | Posted in Economics | 3 Comments

In a recent attempt to defend DSGE modelling, Lawrence Christiano, Martin Eichenbaum and Mathias Trabandt have to admit that DSGE models have failed to predict financial crises. The reason they put forward for this is that the models did not “integrate the shadow banking system into their analysis.” That certainly is true — but the DSGE problems go much deeper than that:

A typical modern approach to writing a paper in DSGE macroeconomics is as follows:

o to establish “stylized facts” about the quantitative interrelationships of certain macroeconomic variables (e.g. moments of the data such as variances, autocorrelations, covariances, …) that have hitherto not been jointly explained;

o to write down a DSGE model of an economy subject to a defined set of shocks that aims to capture the described interrelationships; and

o to show that the model can “replicate” or “match” the chosen moments when it is fed with stochastic shocks generated by the assumed shock process …

However, the test imposed by matching DSGE models to the data is problematic in at least three respects:

First, the set of moments chosen to evaluate the model is largely arbitrary …

Second, for a given set of moments, there is no well-defined statistic to measure the goodness of fit of a DSGE model or to establish what constitutes an improvement in such a framework …

Third, the evaluation is complicated by the fact that, at some level, all economic models are rejected by the data … In addition, DSGE models frequently impose a number of restrictions that are in direct conflict with micro evidence. If a model has been rejected along some dimensions, then a statistic that measures the goodness-of-fit along other dimensions is meaningless …

Focusing on the quantitative fit of models also creates powerful incentives for researchers (i) to introduce elements that bear little resemblance to reality for the sake of achieving a better fit, (ii) to introduce opaque elements that provide the researcher with free (or almost free) parameters, and (iii) to introduce elements that improve the fit for the reported moments but deteriorate the fit along other unreported dimensions.

Albert Einstein observed that “not everything that counts can be counted, and not everything that can be counted counts.” DSGE models make it easy to offer a wealth of numerical results by following a well-defined set of methods (that requires one or two years of investment in graduate school, but is relatively straightforward to apply thereafter). There is a risk for researchers to focus too much on numerical predictions of questionable reliability and relevance that absorb a lot of time and effort rather than focusing on deeper conceptual questions that are of higher relevance for society.

Anton Korinek
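To see how undemanding the moment-matching exercise described above can be, here is a toy sketch under invented assumptions. The ‘model’ is just a single AR(1) shock process standing in for a structural model, and the ‘stylized facts’ are two made-up target moments; tuning two free parameters is enough to ‘replicate’ both, which of course says nothing about whether the mechanism behind the fit is real.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_moments(rho, sigma, T=50_000):
    """Simulate a stand-in 'model' (a single AR(1) shock process) and return
    the two moments a paper might choose to 'match': the variance and the
    first-order autocorrelation. Toy illustration only, not a DSGE model."""
    x = np.zeros(T)
    eps = rng.normal(0, sigma, T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + eps[t]
    return x.var(), np.corrcoef(x[1:], x[:-1])[0, 1]

# Made-up 'stylized facts' for some macro series:
target_var, target_ac = 2.5, 0.9

# Hand-tuning the two free parameters is enough to 'replicate' both moments
# (stationary AR(1) variance = sigma^2 / (1 - rho^2), autocorrelation = rho) ...
var_hat, ac_hat = simulated_moments(rho=0.9, sigma=np.sqrt(2.5 * (1 - 0.9**2)))
print(f"model variance {var_hat:.2f} vs target {target_var}")
print(f"model autocorr {ac_hat:.2f} vs target {target_ac}")
# ... which says nothing about whether the mechanism behind the fit is real.
```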

Great essay, showing that ‘rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence of any kind has been presented.

To reply to Korinek’s devastating critique — as Christiano et al. do — with pie-in-the-sky formulations such as ‘young cutting-edge researchers having promising extensions of the model in the pipeline’ or claiming that “there is no credible alternative,” cannot be the right scientific attitude. No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, DSGE models do not push economic science forwards one single millimetre if they do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not say anything about real-world economies.

Proving things ‘rigorously’ in DSGE models is at most a starting point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

Mainstream economists think there is a gain from the DSGE style of modelling in its capacity to offer some kind of structure around which to organise discussions. To me, that sounds more like a religious theoretical-methodological dogma, where one paradigm rules in divine hegemony. That’s not progress. That’s the death of economics as a science.

As Korinek argues, using DSGE models “creates a bias towards models that have a well-behaved ergodic steady state.” Since we know that most real-world processes do not follow an ergodic distribution, this is, to say the least, problematic. To understand real-world ‘non-routine’ decisions and unforeseeable changes in behaviour, stationary probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future. Imposing invalid probabilistic assumptions on the data makes all DSGE models statistically misspecified.

Advocates of DSGE modelling want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises — and that also applies to the quest for causality and forecasting predictability in DSGE models.

If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized that have to match reality, not the other way around. The modelling convention used when constructing DSGE models makes it impossible to fully incorporate things that we know are of paramount importance for understanding modern economies — such as income and wealth inequality, asymmetrical power relations and information, liquidity preference, just to mention a few.

Given all these fundamental problems for the use of these models and their underlying methodology, it is beyond understanding how the DSGE approach has come to be the standard approach in ‘modern’ macroeconomics. DSGE models are based on assumptions profoundly at odds with what we know about real-world economies. That also makes them little more than overconfident story-telling devoid of real scientific value. Macroeconomics would do much better with more substantive diversity and plurality.

Pickwickian economics

19 November, 2017 at 11:23 | Posted in Economics | 1 Comment

Mill provides a good illustration of the tension between fallibilism and anti-foundationalism. Mill’s first principles are supposed to be empirical and not necessary truths, but for economics to be an empirical subject at all, they have to be beyond genuine doubt, since they provide the only empirical element in an otherwise deductive system. The certainty that Mill claims for the results of scientific economics is purchased with deep uncertainty about the significance of those results – in particular, how important economic outcomes are relative to countervailing noneconomic outcomes. And the modern economist or philosopher surely would regard Mill’s economics as empirical only in a Pickwickian sense, as Mill does not leave open the possibility that anything could count as evidence against its first principles.

Kevin Hoover

Since modern mainstream economists do not even bother to argue for their foundational assumptions — which are neither necessary truths nor “beyond genuine doubt” — mainstream economics is arguably even more Pickwickian than John Stuart Mill’s methodological ruminations …

Trump’s robber baron presidency

18 November, 2017 at 10:49 | Posted in Politics & Society | Comments Off on Trump’s robber baron presidency

In Trump’s world, the rich in the US obviously are not rich enough. So he has set out to lower the corporate tax rate to 20 percent and abolish the estate tax.

Trump’s vision for the US is an unregulated and immensely unequal country. The working and middle classes are, of course, überjoyed …

Why Krugman and Stiglitz are no real alternatives to mainstream economics

17 November, 2017 at 20:38 | Posted in Economics | 8 Comments

Little in the discipline has changed in the wake of the crisis. Mirowski thinks that this is at least in part a result of the impotence of the loyal opposition — those economists such as Joseph Stiglitz or Paul Krugman who attempt to oppose the more viciously neoliberal articulations of economic theory from within the camp of neoclassical economics. Though Krugman and Stiglitz have attacked concepts like the efficient markets hypothesis … Mirowski argues that their attempt to do so while retaining the basic theoretical architecture of neoclassicism has rendered them doubly ineffective.

First, their adoption of the battery of assumptions that accompany most neoclassical theorizing — about representative agents, treating information like any other commodity, and so on — make it nearly impossible to conclusively rebut arguments like the efficient markets hypothesis. Instead, they end up tinkering with it, introducing a nuance here or a qualification there … Stiglitz’s and Krugman’s arguments, while receiving circulation through the popular press, utterly fail to transform the discipline.

Paul Heideman

Despite all their radical rhetoric, Krugman and Stiglitz are — where it really counts — nothing but die-hard mainstream neoclassical economists. Just like Milton Friedman, Robert Lucas or Greg Mankiw.

The only economic analysis that Krugman and Stiglitz — like other mainstream economists — accept is the one that takes place within the analytic-formalistic modelling strategy that makes up the core of mainstream economics. All models and theories that do not live up to the precepts of the mainstream methodological canon are pruned. You’re free to take your models — not using (mathematical) models at all is considered totally unthinkable — and apply them to whatever you want, as long as you do it within the mainstream approach and its modelling strategy. If you do not follow this particular mathematical-deductive analytical formalism you’re not even considered to be doing economics. ‘If it isn’t modelled, it isn’t economics.’

That isn’t pluralism.

That’s a methodological reductionist straitjacket.

So, even though we have seen a proliferation of models, it has almost exclusively taken place as a kind of axiomatic variation within the standard ‘urmodel’, which is always used as a self-evident benchmark.

Krugman and Stiglitz want to convey the view that the proliferation of economic models during the last twenty to thirty years is a sign of great diversity and an abundance of new ideas.

But, again, it’s not really that simple.

Although mainstream economists like to portray mainstream economics as an open and pluralistic ‘let a hundred flowers bloom,’ in reality it is rather ‘plus ça change, plus c’est la même chose.’

Applying closed analytical-formalist-mathematical-deductivist-axiomatic models, built on atomistic-reductionist assumptions, to a world assumed to consist of atomistic, isolated entities is a sure recipe for failure when the real world is known to be an open system where complex and relational structures and agents interact. Validly deducing things in models of that kind does not help us much in understanding or explaining what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modelling exercises pursued by mainstream economists rather pointless. It’s simply not the stuff that real understanding and explanation in science are made of. Just telling us that the plethora of mathematical models that make up modern economics “expand the range of the discipline’s insights” is nothing short of hand-waving.

No matter how many thousands of technical working papers or models mainstream economists come up with, as long as they are just ‘wildly inconsistent’ axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and possible explanations of real economies.
