The LATE approach — a critique

31 Oct, 2023 at 16:42 | Posted in Statistics & Econometrics | Comments Off on The LATE approach — a critique

One of the reasons Guido Imbens and Joshua Angrist won the 2021 ‘Nobel prize’ in economics is their LATE (local average treatment effect) approach, used especially in instrumental-variables estimation of causal effects. Another prominent ‘Nobel prize’ winner in economics — Angus Deaton — is not overly impressed:

Without explicit prior consideration of the effect of the instrument choice on the parameter being estimated, such a procedure is effectively the opposite of standard statistical practice in which a parameter of interest is defined first, followed by an estimator that delivers that parameter. Instead, we have a procedure in which the choice of the instrument, which is guided by criteria designed for a situation in which there is no heterogeneity, is implicitly allowed to determine the parameter of interest. This goes beyond the old story of looking for an object where the light is strong enough to see; rather, we have at least some control over the light but choose to let it fall where it may and then proclaim that whatever it illuminates is what we were looking for all along …

I find it hard to make any sense of the LATE. We are unlikely to learn much about the processes at work if we refuse to say anything about what determines (the effect ‘parameter’) θ; heterogeneity is not a technical problem calling for an econometric solution but a reflection of the fact that we have not started on our proper business, which is trying to understand what is going on. Of course, if we are as skeptical of the ability of economic theory to deliver useful models as are many applied economists today, the ability to avoid modeling can be seen as an advantage, though it should not be a surprise when such an approach delivers answers that are hard to interpret.

Even if we accept the limitation of only being able to say something about (some kind of) average treatment effects when using instrumental-variables designs, another major problem is that researchers who use these randomization-based research strategies often set up problem formulations that are not at all the ones we really want answers to, in order to achieve ‘exact’ and ‘precise’ results. Design becomes the main thing, and as long as one can get more or less clever experiments in place, researchers believe they can draw far-reaching conclusions about both causality and the ability to generalize experimental outcomes to larger populations. Unfortunately, this often means that this type of research is biased away from interesting and important problems and towards prioritizing the choice of method. Design and research planning are important, but the credibility of research ultimately lies in being able to provide answers to relevant questions that both citizens and researchers want answered. Focusing on narrow LATE results threatens to lead research away from the really important questions we as social scientists want to answer.

Believing there is only one really good evidence-based method on the market — and that randomization is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. Insisting on using only one tool often means using the wrong tool.

The purist streak in economics

30 Oct, 2023 at 10:27 | Posted in Economics | Comments Off on The purist streak in economics

So in what sense is this “dynamic stochastic general equilibrium” model firmly grounded in the principles of economic theory? I do not want to be misunderstood. Friends have reminded me that much of the effort of “modern macro” goes into the incorporation of important deviations from the Panglossian assumptions that underlie the simplistic application of the Ramsey model to positive macroeconomics. Research focuses on the implications of wage and price stickiness, gaps and asymmetries of information, long-term contracts, imperfect competition, search, bargaining and other forms of strategic behavior, and so on. That is indeed so, and it is how progress is made …

There has always been a purist streak in economics that wants everything to follow neatly from greed, rationality, and equilibrium, with no ifs, ands, or buts. Most of us have felt that tug. Here is a theory that gives you just that, and this time “everything” means everything: macro, not micro. The theory is neat, learnable, not terribly difficult, but just technical enough to feel like “science.”

Robert Solow

Yes, indeed, there is a “purist streak in economics that wants everything to follow neatly from greed, rationality, and equilibrium, with no ifs, ands, or buts.” That purist streak has given birth to a kind of ‘deductivist blindness’ in mainstream economics, something that to a large extent also explains why it contributes to causing economic crises rather than solving them. But where does this ‘deductivist blindness’ of mainstream economics come from? To answer that question we have to examine the methodology of mainstream economics.

The insistence on constructing models showing the certainty of logical entailment has been central in the development of mainstream economics. Insisting on formalistic (mathematical) modelling has more or less forced the economist to give up on realism and substitute axiomatics for real-world relevance. The price paid for the illusory rigour and precision has been monumentally high.

This deductivist orientation is the main reason behind the difficulty that mainstream economics has in terms of understanding, explaining and predicting what takes place in our societies. But it has also given mainstream economics much of its discursive power – at least as long as no one starts asking tough questions on the veracity of – and justification for – the assumptions on which the deductivist foundation is erected. Asking these questions is an important ingredient in a sustained critical effort to show how nonsensical it is to embellish a smorgasbord of models built on deficient (and often hidden) methodological foundations.

The mathematical-deductivist straitjacket used in mainstream economics presupposes atomistic closed systems — i.e., something that we find very little of in the real world, a world significantly at odds with an (implicitly) assumed logical world where deductive entailment rules the roost. Ultimately, then, the failings of modern mainstream economics have their roots in a deficient ontology. The kind of formal-analytical and axiomatic-deductive mathematical modelling that makes up the core of mainstream economics is hard to make compatible with a real-world ontology. It is also the reason why so many critics find mainstream economic analysis patently and utterly unrealistic and irrelevant. The empty formalism that Solow points out in his critique of ‘modern’ macroeconomics is still one of the main reasons behind the monumental failure of ‘modern’ macroeconomics.

Räddaren i nöden

29 Oct, 2023 at 12:50 | Posted in Varia | Comments Off on Räddaren i nöden

Rain and autumn gloom out here on the island today. But what does that matter — as long as one can listen to the programme Text och musik with Eric Schüldt!

For several years now, I have listened to Eric’s programme every Sunday. A weekend without his thought-provoking and often slightly melancholy reflections and wistful music has become unthinkable.

As so often in recent years, it is Eric who leads me to new musical discoveries. In an age — many call it ‘modern’ — when everyone expects immediate gratification here and now, I take pleasure in being able to long for next Sunday’s musical experience and meditations on the mystery of existence.

Thank you, Eric!

Pink Floyd

28 Oct, 2023 at 18:44 | Posted in Varia | Comments Off on Pink Floyd


On universalism and the Israeli-Palestinian conflict

28 Oct, 2023 at 10:00 | Posted in Politics & Society | Comments Off on On universalism and the Israeli-Palestinian conflict


Economic myths that just won’t die

27 Oct, 2023 at 22:47 | Posted in Economics | Comments Off on Economic myths that just won’t die


Goethe vs. Schiller

27 Oct, 2023 at 12:20 | Posted in Varia | 1 Comment


How to get a ‘Nobel Prize’ in economics — talking absolute nonsense!

26 Oct, 2023 at 21:59 | Posted in Economics | 11 Comments

Many people would argue that, in this case, the inefficiency was primarily in the credit markets, not the stock market—that there was a credit bubble that inflated and ultimately burst.

Eugene Fama: I don’t even know what that means. People who get credit have to get it from somewhere. Does a credit bubble mean that people save too much during that period? I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.

I guess most people would define a bubble as an extended period during which asset prices depart quite significantly from economic fundamentals.

Eugene Fama: That’s what I would think it is, but that means that somebody must have made a lot of money betting on that, if you could identify it. It’s easy to say prices went down, it must have been a bubble, after the fact. I think most bubbles are twenty-twenty hindsight. Now after the fact you always find people who said before the fact that prices are too high. People are always saying that prices are too high. When they turn out to be right, we anoint them. When they turn out to be wrong, we ignore them. They are typically right and wrong about half the time.

Are you saying that bubbles can’t exist?

Eugene Fama: They have to be predictable phenomena. I don’t think any of this was particularly predictable.

John Cassidy

L’intelligence humaine est en déclin

25 Oct, 2023 at 13:48 | Posted in Education & School, Politics & Society | Comments Off on L’intelligence humaine est en déclin


As this programme shows, there really are many signs that human intelligence is in decline. This decline is probably the result of a convergence of social and cultural factors.

One of the main reasons is the growing dependence on technology. While technological advances have undeniably improved our quality of life, they have also created a generation of people increasingly dependent on their smartphones and computers. This dependence has reduced our capacity to solve problems independently, to retain essential information, and to engage with the world around us.

Another factor contributing to the decline of human intelligence is the deterioration of education. Many regions of the world are facing budget cuts in the education sector, which leads to lower teaching quality. As a result, many people are unable to acquire the cognitive skills and knowledge they need to function at their best in society.

If we want to improve our intellectual faculties, it is imperative that we reconsider our relationship to technology and education.

The Road Not Taken

24 Oct, 2023 at 17:48 | Posted in Varia | 2 Comments


All of us heterodox economists who have chosen the road ‘less traveled by’ know that this choice comes at a price: fewer opportunities to secure ample research funding or positions at prestigious institutes and universities. Nevertheless, I believe that very few of us regret our choices. One doesn’t bargain with one’s conscience. No amount of money or prestige in the world can replace the feeling of looking in the mirror and liking what one sees.

Natural ‘natural experiments’

24 Oct, 2023 at 11:14 | Posted in Statistics & Econometrics | Comments Off on Natural ‘natural experiments’

Evidently, however, the potential for the strictly natural natural experimental approach, which relies exclusively on natural events as instruments, is constrained by the small number of random events provided by nature and by the fact that most outcomes of interest are the result of many factors associated with preferences, technologies, and markets. And the prospect of the discovery of new and useful natural events is limited …

It is clear that the number of natural instruments will never be sufficient to eliminate the necessity of imposing auxiliary assumptions or of obtaining supplementary empirical information relevant to the assumptions needed for identification … Measurement without theory, however, is not significantly more valuable than it ever was before the use of natural natural experiments.

Mark Rosenzweig & Kenneth Wolpin

Rosenzweig and Wolpin discuss several serious issues with studies based on natural ‘natural experiments.’ One noticeable and significant problem they do not address, however, is that researchers using these randomization-based research strategies consistently formulate their problems so as to obtain ‘exact’ and ‘precise’ results, rather than answers to the questions we truly care about. The design becomes the main focus, and as long as one can set up more or less clever experiments, researchers believe they can draw far-reaching conclusions about both causality and the generalizability of experimental outcomes to larger populations. Unfortunately, this often pushes this type of research away from interesting and important issues and towards prioritizing the choice of method. While design and research planning are important, the credibility of research ultimately comes down to being able to provide answers to the relevant questions that both citizens and researchers want answered.

Alter Friedhof

24 Oct, 2023 at 10:36 | Posted in Varia | Comments Off on Alter Friedhof


For a ‘churchyard romantic’ like yours truly, a visit to Alter Friedhof Prenzlauer Allee is, of course, a must when in Berlin …

Berlin

19 Oct, 2023 at 19:05 | Posted in Varia | Comments Off on Berlin

Touring again.

Guest appearance in Berlin.

When there, yours truly always visits Walter-Benjamin-Platz (close to Kurfürstendamm, between Leibnizstraße and Wielandstraße). Interesting architecture and a couple of excellent restaurants and cafés.

Regular blogging to be resumed next week.

Mainstream economics — a methodological strait-jacket

19 Oct, 2023 at 12:07 | Posted in Economics | Comments Off on Mainstream economics — a methodological strait-jacket

Jamie Morgan: To a member of the public it must seem weird that it is possible to state, as you do, such fundamental criticism of an entire field of study. The perplexing issue from a third party point of view is how do we reconcile good intention (or at least legitimate sense of self as a scholar), and power and influence in the world with error, failure and falsity in some primary sense; given that the primary problem is methodological, the issues seem to extend in different ways from Milton Friedman to Robert Lucas Jr, from Paul Krugman to Joseph Stiglitz. Do such observations give you pause? My question (invitation) I suppose, is how does one reconcile (explain or account for) the direction of travel of mainstream economics: the degree of commonality identified in relation to its otherwise diverse parts, the glaring problems of that commonality – as identified and stated by you and many other critics?

Lars P. Syll: When politically “radical” economists like Krugman, Wren-Lewis or Stiglitz confront the critique of mainstream economics from people like me, they usually have the attitude that if the critique isn’t formulated in a well-specified mathematical model it isn’t worth taking seriously. To me that only shows that, despite all their radical rhetoric, these economists — just like Milton Friedman, Robert Lucas Jr or Greg Mankiw — are nothing but die-hard defenders of mainstream economics. The only economic analysis acceptable to these people is the one that takes place within the analytic-formalistic modelling strategy that makes up the core of mainstream economics. Models and theories that do not live up to the precepts of the mainstream methodological canon are considered “cheap talk”. If you do not follow this particular mathematical-deductive analytical formalism you’re not even considered to be doing economics …

Perhaps an even better perspective on the kind of “diversity” you asked me about can be gained by considering someone like Dani Rodrik, who a couple of years ago wrote a book on economics and its modelling strategies — Economics Rules (2015) — that attracted much attention among economists in the academic world. Just like Krugman and the other politically “radical” mainstream economists, Rodrik shares the view that there is nothing basically wrong with standard theory. As long as policymakers and economists stick to standard economic analysis everything is fine. Economics is just a method that makes us “think straight” and “reach correct answers”. Similar to Krugman, Rodrik likes to present himself as a kind of pluralist anti-establishment economics iconoclast, but when it really counts, he shows what he is – a mainstream economist fanatically defending the relevance of standard economic modelling strategies. In other words – no heterodoxy where it would really count. In my view, this isn’t pluralism. It’s a methodological reductionist strait-jacket.

Real-World Economics Review

Instrumental Variables — The Good and the Bad

17 Oct, 2023 at 19:01 | Posted in Statistics & Econometrics | Comments Off on Instrumental Variables — The Good and the Bad


Making appropriate extrapolations from (ideal, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here.” The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods used when analyzing ‘natural experiments’ is often despairingly small. Since the core assumptions on which instrumental variables (IV) analysis builds are NEVER directly testable, those of us who choose to use instrumental variables to find out about causality ALWAYS have to defend and argue for the validity of the assumptions the causal inferences build on. Especially when dealing with natural experiments, we should be very cautious when being presented with causal conclusions without convincing arguments about the veracity of the assumptions made. If you are out to make causal inferences, you have to rely on a trustworthy theory of the data-generating process. The empirical results that causal analysis supplies us with are only as good as the assumptions we make about the data-generating process.

It also needs to be pointed out that many economists who use instrumental-variables analysis make the mistake of thinking that swapping the assumption that the residuals are uncorrelated with the independent variables for the assumption that the same residuals are uncorrelated with an instrument solves the endogeneity problem and improves the causal analysis. It does not; it merely replaces one untestable assumption with another.
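In the simplest linear setting the swap is easy to write out (standard notation, a sketch rather than anything taken from this post): with $y = \beta x + u$, OLS identifies $\beta$ under the assumption $\operatorname{Cov}(x, u) = 0$, whereas IV replaces this with $\operatorname{Cov}(z, u) = 0$ and $\operatorname{Cov}(z, x) \neq 0$ for some instrument $z$:

$$\beta_{\mathrm{OLS}} = \frac{\operatorname{Cov}(x, y)}{\operatorname{Var}(x)}, \qquad \beta_{\mathrm{IV}} = \frac{\operatorname{Cov}(z, y)}{\operatorname{Cov}(z, x)}.$$

Since $u$ is unobservable, the exogeneity condition $\operatorname{Cov}(z, u) = 0$ is every bit as untestable as the condition it replaces.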

The present interest in randomization, instrumental variables estimation, and natural experiments is an expression of a new trend in economics: a growing interest in (ideal, quasi, natural) experiments and — not least — how to design them so that they can provide answers to questions about causality and policy effects. Economic research on, e.g., discrimination nowadays often emphasizes the importance of a randomization design, for example when trying to determine to what extent discrimination can be causally attributed to differences in preferences or information, using so-called correspondence tests and field experiments.

A common starting point is the ‘counterfactual approach’ developed mainly by Neyman and Rubin, which is often presented and discussed using examples of research designs such as randomized controlled studies, natural experiments, difference-in-differences, matching, regression discontinuity, etc.

Mainstream economists generally view this development of the economics toolbox positively. Since yours truly is not entirely positive about the randomization approach, I will share with you some of my criticisms.

A notable limitation of counterfactual randomization designs is that they only give us answers on how ‘treatment groups’ differ on average from ‘control groups.’ Let me give an example to illustrate how limiting this fact can be:

Among school debaters and politicians in Sweden, it is claimed that so-called ‘independent schools’ (charter schools) are better than municipal schools. They are said to lead to better results. To find out if this is really the case, a number of students are randomly selected to take a test. The result could be: Test result = 20 + 5T, where T = 1 if the student attends an independent school and T = 0 if the student attends a municipal school. This would seem to confirm the claim that independent-school students score, on average, 5 points higher than students in municipal schools.

Now, politicians (hopefully) are aware that this statistical result cannot be interpreted in causal terms, because independent-school students typically do not have the same background (socio-economic, educational, cultural, etc.) as those who attend municipal schools (the relationship between school type and result is confounded by selection bias). To obtain a better measure of the causal effect of school type, politicians suggest that admission to an independent school be allocated by lottery among 1000 students — a classic example of a randomization design in natural experiments. The chance of winning is 10%, so 100 students are given this opportunity. Of these, 20 accept the offer to attend an independent school. Of the 900 lottery participants who do not ‘win,’ 100 choose to attend an independent school anyway.

The lottery is often treated by school researchers as an ‘instrumental variable,’ and when the analysis is carried out, the result is: Test result = 20 + 2T. This is standardly interpreted as a causal measure of how much better students would, on average, perform on the test if they attended independent schools instead of municipal schools. But is it true? No! If not all students respond to school type in exactly the same way (a rather far-fetched ‘homogeneity assumption’), the estimated average causal effect applies only to the students who would attend an independent school if they ‘won’ the lottery but not otherwise (in statistical jargon, the ‘compliers’). It is difficult to see why this group of students would be particularly interesting in this example, given that the average causal effect estimated using the instrumental variable says nothing at all about the effect on the majority of those who choose to attend an independent school (the 100 out of 120 who attend without ‘winning’ the lottery).
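To see concretely why the IV estimate speaks only about the compliers, here is a minimal simulation sketch in Python. It is not taken from any actual study: the type shares, effect sizes and noise level are all hypothetical, chosen to loosely mimic the numbers in the example above.

```python
# A minimal simulation sketch of the school-lottery example above.
# All numbers (type shares, effect sizes, noise) are hypothetical;
# this is an illustration, not a reconstruction of any actual study.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Latent student types: 'always'-takers attend an independent school
# whatever the lottery says, 'compliers' attend only if they win,
# 'nevers' never attend.
types = rng.choice(["always", "complier", "never"], size=n, p=[0.11, 0.09, 0.80])

z = rng.binomial(1, 0.10, size=n)                             # lottery win (the instrument)
d = (types == "always") | ((types == "complier") & (z == 1))  # actual attendance

# Heterogeneous gains from attending: always-takers gain 6 points,
# compliers only 2, so there is no single 'treatment effect'.
gain = np.where(types == "always", 6.0, np.where(types == "complier", 2.0, 4.0))
y = 20 + gain * d + rng.normal(0, 5, size=n)                  # observed test result

# Wald / IV estimate: reduced-form effect of winning divided by the first stage.
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())

print(f"IV (Wald) estimate:           {wald:.2f}")            # ~2: the compliers' effect
print(f"Average gain among attendees: {gain[d].mean():.2f}")  # ~5.7: dominated by always-takers
```

With these numbers the Wald estimator lands near 2 (the compliers’ effect), even though the average gain among those who actually attend an independent school is almost three times larger, since attendees are dominated by always-takers, about whom the instrument is silent.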

Conclusion: Researchers must be much more careful in interpreting ‘average estimates’ as causal. Reality exhibits a high degree of heterogeneity, and ‘average parameters’ often tell us very little!

To randomize ideally means that we achieve orthogonality (independence) in our models. But it does not mean that when we randomize in real experiments, we actually achieve this ideal. The ‘balance’ that randomization should ideally result in cannot be taken for granted when the ideal is translated into reality. Here, one must argue for and verify that the ‘assignment mechanism’ is truly stochastic and that ‘balance’ has indeed been achieved!
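What ‘verifying balance’ might look like in practice: a minimal sketch (simulated data, hypothetical variable names) using standardized mean differences, one common diagnostic. Note that such checks can only ever cover observed covariates; balance on unobservables, which is what identification actually requires, must be assumed rather than verified.

```python
# A minimal sketch of checking covariate 'balance' after randomization,
# using standardized mean differences (SMD). The data are simulated and
# the variable names hypothetical.
import numpy as np

def smd(x, t):
    """Standardized difference in means of covariate x between t == 1 and t == 0."""
    x1, x0 = x[t == 1], x[t == 0]
    pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2)
    return (x1.mean() - x0.mean()) / pooled_sd

rng = np.random.default_rng(1)
t = rng.binomial(1, 0.5, size=500)  # the (supposedly) stochastic assignment
covariates = {
    "parental_income": rng.lognormal(10.0, 0.5, size=500),
    "prior_grades": rng.normal(60.0, 15.0, size=500),
}

# |SMD| well above roughly 0.1 is commonly read as a warning sign that
# the groups are not balanced on that covariate.
for name, x in covariates.items():
    print(f"{name:16s} SMD = {smd(x, t):+.3f}")
```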

Even if we accept the limitation of only being able to say something about average treatment effects, there is another theoretical problem. An ideal randomized experiment assumes that a number of individuals are first chosen from a randomly selected population and then randomly assigned to a treatment group or a control group. Given that both selection and assignment are successfully carried out randomly, it can be shown that the expected outcome difference between the two groups is the average causal effect in the population. The snag is that the experiments conducted almost never involve participants selected from a random population! In most cases, experiments are started because there is a problem of some kind in a given population (e.g., schoolchildren or job seekers in country X) that one wants to address. Since an ideal randomized experiment assumes that both selection and assignment are randomized, this means that virtually none of the empirical results that randomization advocates so eagerly tout hold up in a strict mathematical-statistical sense. The fact that only assignment is talked about when it comes to ‘as if’ randomization in natural experiments is hardly a coincidence. Moreover, when it comes to ‘as if’ randomization in natural experiments, the sad but inevitable fact is that there can always be a dependency between the variables being studied and unobservable factors in the error term, a dependency which can never be tested!
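In potential-outcomes notation, the standard argument for the ideal case runs as follows (a sketch of the textbook derivation, not the post’s own): if the sample is randomly drawn and assignment $T$ is independent of the potential outcomes $(Y_0, Y_1)$, then

$$\mathbb{E}[Y \mid T = 1] - \mathbb{E}[Y \mid T = 0] = \mathbb{E}[Y_1 \mid T = 1] - \mathbb{E}[Y_0 \mid T = 0] = \mathbb{E}[Y_1] - \mathbb{E}[Y_0],$$

the average treatment effect in the population. The second equality is where random assignment does its work, and the step from sample to ‘population’ is precisely what fails when participants were never randomly selected in the first place.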

Another major problem is that researchers who use these randomization-based research strategies often set up problem formulations that are not at all the ones we really want answers to, in order to achieve ‘exact’ and ‘precise’ results. Design becomes the main thing, and as long as one can get more or less clever experiments in place, researchers believe they can draw far-reaching conclusions about both causality and the ability to generalize experimental outcomes to larger populations. Unfortunately, this often means that this type of research is biased away from interesting and important problems and towards prioritizing the choice of method. Design and research planning are important, but the credibility of research ultimately lies in being able to provide answers to relevant questions that both citizens and researchers want answered.

Believing there is only one really good evidence-based method on the market — and that randomization is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. Insisting on using only one tool often means using the wrong tool.
