The labor shortage myth

28 Feb, 2023 at 21:07 | Posted in Economics | Leave a comment

.

Wicked Game

27 Feb, 2023 at 23:10 | Posted in Varia | Leave a comment

.

‘Overcontrolling’ in statistics

26 Feb, 2023 at 12:19 | Posted in Statistics & Econometrics | Comments Off on ‘Overcontrolling’ in statistics

You see it all the time in studies. “We controlled for…” And then the list starts … The more things you can control for, the stronger your study is — or, at least, the stronger your study seems. Controls give the feeling of specificity, of precision. But sometimes, you can control for too much. Sometimes you end up controlling for the thing you’re trying to measure …


An example is research around the gender wage gap, which tries to control for so many things that it ends up controlling for the thing it’s trying to measure …

Take hours worked, which is a standard control in some of the more sophisticated wage gap studies. Women tend to work fewer hours than men. If you control for hours worked, then some of the gender wage gap vanishes. As Yglesias wrote, it’s “silly to act like this is just some crazy coincidence. Women work shorter hours because as a society we hold women to a higher standard of housekeeping, and because they tend to be assigned the bulk of childcare responsibilities.”

Controlling for hours worked, in other words, is at least partly controlling for how gender works in our society. It’s controlling for the thing that you’re trying to isolate.

Ezra Klein

Trying to reduce the risk of having established only ‘spurious relations’ when dealing with observational data, statisticians and econometricians standardly add control variables. The hope is that one thereby will be able to make more reliable causal inferences. But — as Keynes showed already back in the 1930s when criticizing statistical-econometric applications of regression analysis — if you do not manage to get hold of all potential confounding factors, the model risks producing estimates of the variable of interest that are even worse than models without any control variables at all. Conclusion: think twice before you simply include ‘control variables’ in your models!
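To make the point concrete, here is a minimal simulation sketch in Python (the variable names and all magnitudes are invented for illustration and are not estimates of any actual wage gap). It mimics the hours-worked example from the quote above: gender affects wages directly and also affects hours worked, so ‘controlling for’ hours absorbs part of the very effect one is trying to measure.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical data-generating process (all numbers are made up):
# being a woman lowers hours worked (a mediator) and also lowers the wage directly.
woman = rng.integers(0, 2, n)                   # 1 = woman, 0 = man
hours = 40 - 5 * woman + rng.normal(0, 4, n)    # women work fewer hours on average
wage = 30 + 0.5 * hours - 4 * woman + rng.normal(0, 5, n)

# Total effect of gender on wages: the direct part (-4) plus the part running
# through hours (-5 * 0.5 = -2.5), i.e. -6.5 in this made-up world.
X_total = np.column_stack([np.ones(n), woman])
b_total = np.linalg.lstsq(X_total, wage, rcond=None)[0]

# 'Controlling for' hours worked removes the part of the gap that runs through hours.
X_ctrl = np.column_stack([np.ones(n), woman, hours])
b_ctrl = np.linalg.lstsq(X_ctrl, wage, rcond=None)[0]

print(f"gap without controlling for hours: {b_total[1]:.2f}  (true total effect = -6.5)")
print(f"gap when controlling for hours:    {b_ctrl[1]:.2f}  (only the direct part = -4.0)")
```

Whether hours worked should be controlled for thus depends on whether one is after the direct effect or the total effect — a causal question that the regression output itself cannot settle.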

When I present this argument … one or more scholars say, “But shouldn’t I control for everything I can in my regressions? If not, aren’t my coefficients biased due to excluded variables?” … The excluded variable argument only works if you are sure your specification is precisely correct with all variables included. But no one can know that with more than a handful of explanatory variables …

A preferable approach is to separate the observations into meaningful subsets—internally compatible statistical regimes … If this can’t be done, then statistical analysis can’t be done. A researcher claiming that nothing else but the big, messy regression is possible because, after all, some results have to be produced, is like a jury that says, “Well, the evidence was weak, but somebody had to be convicted.”

Christopher H. Achen

Kitchen-sink econometric models are often the result of researchers trying to control for confounding. But what they usually haven’t understood is that the confounder problem requires a causal solution and not statistical ‘control.’ Controlling for everything opens up the risk that we control for ‘collider’ variables and thereby open ‘back-door paths’ that give us confounding that wasn’t there to begin with.
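A similarly minimal sketch of the collider problem (again Python, with purely illustrative numbers): X has no effect whatsoever on Y, but both X and Y affect a third variable C. ‘Controlling for’ C then manufactures an association between X and Y that was never there.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# X and Y are causally and statistically unrelated ...
x = rng.normal(size=n)
y = rng.normal(size=n)
# ... but both cause the collider C (say, two unrelated qualities that both affect being hired).
c = x + y + rng.normal(scale=0.5, size=n)

def slope_on_x(design, outcome):
    """OLS coefficient on x (the column right after the intercept)."""
    coefs, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coefs[1]

no_control   = np.column_stack([np.ones(n), x])
with_control = np.column_stack([np.ones(n), x, c])

print(f"effect of X on Y, no controls:         {slope_on_x(no_control, y):+.3f}  (truth: 0)")
print(f"effect of X on Y, 'controlling' for C: {slope_on_x(with_control, y):+.3f}  (pure artefact)")
```

Whether a variable is a confounder (control for it) or a collider (leave it alone) cannot be read off the data themselves; it requires a causal model of how the variables hang together.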

Solar Wind

26 Feb, 2023 at 12:08 | Posted in Varia | Comments Off on Solar Wind

.

Ukraine — fighting for all of us

24 Feb, 2023 at 17:40 | Posted in Politics & Society | Comments Off on Ukraine — fighting for all of us

.


To all my brothers and sisters in Ukraine, fighting the Russian invasion for one year now.

May God be with you.

Never give in. Never give up.

Ice and Fire

24 Feb, 2023 at 17:24 | Posted in Varia | Comments Off on Ice and Fire

.

Getting causality into statistics

24 Feb, 2023 at 10:39 | Posted in Statistics & Econometrics | 6 Comments

Because statistical analyses need a causal skeleton to connect to the world, causality is not extra-statistical but instead is a logical antecedent of real-world inferences. Claims of random or “ignorable” or “unbiased” sampling or allocation are justified by causal actions to block (“control”) unwanted causal effects on the sample patterns. Without such actions of causal blocking, independence can only be treated as a subjective exchangeability assumption whose justification requires detailed contextual information about absence of factors capable of causally influencing both selection (including selection for treatment) and outcomes. Otherwise it is essential to consider pathways for the causation of biases (nonrandom, systematic errors) and their interactions …

Probability is inadequate as a foundation for applied statistics, because competent statistical practice integrates logic, context, and probability into scientific inference and decision, using causal narratives to explain diverse data. Thus, given the absence of elaborated causality discussions in statistics textbooks and coursework, we should not be surprised at the widespread misuse and misinterpretation of statistical methods and results. This is why incorporation of causality into introductory statistics is needed as urgently as other far more modest yet equally resisted reforms involving shifts in labels and interpretations for P-values and interval estimates.

Sander Greenland

Causality can never be reduced to a question of statistics or probabilities unless you are — miraculously — able to keep constant all other factors that influence the probability of the outcome studied. To understand causality we always have to relate it to a specific causal structure. Statistical correlations are never enough. No structure, no causality.

Statistical patterns should never be seen as anything else than possible clues to follow. Behind observable data, there are real structures and mechanisms operating, things that are  — if we really want to understand, explain and (possibly) predict things in the real world — more important to get hold of than to simply correlate and regress observable variables.
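One way to see why ‘no structure, no causality’ is that very different causal structures can generate exactly the same correlations, so the statistics alone cannot tell them apart. A small illustrative sketch (Python; the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# World A: X causes Y directly.
x_a = rng.normal(size=n)
y_a = 0.5 * x_a + rng.normal(scale=np.sqrt(0.75), size=n)

# World B: a hidden common cause Z drives both X and Y; X has no effect on Y at all.
z   = rng.normal(size=n)
x_b = np.sqrt(0.5) * z + rng.normal(scale=np.sqrt(0.5), size=n)
y_b = np.sqrt(0.5) * z + rng.normal(scale=np.sqrt(0.5), size=n)

print("corr(X, Y) in world A:", round(np.corrcoef(x_a, y_a)[0, 1], 2))
print("corr(X, Y) in world B:", round(np.corrcoef(x_b, y_b)[0, 1], 2))

# The observational data look the same, but an intervention that sets X exogenously
# (cutting every arrow into X) shifts Y only in world A.
x_new   = rng.normal(size=n) + 1.0                                    # push X up by one unit
y_a_int = 0.5 * x_new + rng.normal(scale=np.sqrt(0.75), size=n)       # Y responds in world A
y_b_int = np.sqrt(0.5) * z + rng.normal(scale=np.sqrt(0.5), size=n)   # Y ignores X in world B

print("mean Y after do(X + 1), world A:", round(y_a_int.mean(), 2))
print("mean Y after do(X + 1), world B:", round(y_b_int.mean(), 2))
```

Both worlds print a correlation of roughly 0.5, but only in world A does intervening on X move Y. Which world we are in is a question about structure, not about the correlations.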

Statistics cannot establish the truth value of a fact. Never has. Never will.

Econometric fictionalism

23 Feb, 2023 at 18:23 | Posted in Statistics & Econometrics | 2 Comments

If you can’t devise an experiment that answers your question in a world where anything goes, then the odds of generating useful results with a modest budget and nonexperimental survey data seem pretty slim. The description of an ideal experiment also helps you formulate causal questions precisely. The mechanics of an ideal experiment highlight the forces you’d like to manipulate and the factors you’d like to hold constant.

Research questions that cannot be answered by any experiment are fundamentally unidentified questions.

Joshua D. Angrist & Jörn-Steffen Pischke, Mostly Harmless Econometrics

One of the limitations of economics is the restricted possibility to perform experiments, forcing it to mainly rely on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we, with greater ‘rigour’ and ‘precision,’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s heavy balls, dropped from the tower of Pisa, confirmed that the distance an object falls is proportional to the square of the time elapsed, and that this law (empirical regularity) of falling bodies is applicable outside a vacuum tube when, e.g., air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?

One possibility is to take the all-encompassing-theory road and find out all about possible disturbing/confounding factors — not only air resistance — influencing the fall and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball but feathers and plastic bags. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalization you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large the ‘reach’ of the ‘law’ is.
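The negligibility question can be made concrete with a rough back-of-the-envelope simulation (a sketch in Python; the masses, areas and drag coefficients below are ballpark guesses, not measured values). In a vacuum every object obeys the Galilean law d = ½gt²; with air resistance the heavy ball still does so to a very good approximation, while the feather does not — which is exactly why the ‘reach’ of the ‘law’ has to be established empirically.

```python
G, RHO_AIR = 9.81, 1.2          # gravitational acceleration (m/s^2), air density (kg/m^3)

def fall_distance(mass, area, drag_coeff, t_end=3.0, dt=1e-3):
    """Distance fallen after t_end seconds with quadratic air drag (crude Euler integration)."""
    v = d = 0.0
    k = RHO_AIR * drag_coeff * area / (2 * mass)   # drag deceleration = k * v^2
    for _ in range(int(t_end / dt)):
        v += (G - k * v * v) * dt
        d += v * dt
    return d

vacuum = 0.5 * G * 3.0**2                          # d = ½gt², the Galilean 'law'

# Ballpark (made-up) parameters: a 5 kg iron ball versus a 2 g feather.
ball    = fall_distance(mass=5.0,   area=0.008, drag_coeff=0.47)
feather = fall_distance(mass=0.002, area=0.004, drag_coeff=1.0)

print(f"vacuum (the 'law'): {vacuum:5.1f} m")
print(f"heavy ball in air:  {ball:5.1f} m   <- air resistance negligible")
print(f"feather in air:     {feather:5.1f} m   <- air resistance anything but negligible")
```

For the iron ball the deviation from the law after three seconds is a fraction of a meter; for the feather the ‘law’ overstates the distance fallen several times over.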

In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions standardly have to obtain).

In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That’s a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But — few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But — applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyze real-world phenomena with models and theories, you cannot build on assumptions that are patently — and knowingly — absurd. No matter how much you would like the world to entirely consist of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

Most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.

So why then do mainstream economists keep on pursuing this modelling project?

Mainstream ‘as if’ models are based on the logic of idealization and a set of tight axiomatic and ‘structural’ assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the assumptions are true, the conclusions necessarily follow. But it is a poor guide for real-world systems.

The way axioms and theorems are formulated in mainstream economics often leaves their specification almost without any restrictions whatsoever, safely making every imaginable piece of evidence compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream ‘thought experimental’ activities, it may, of course, be very ‘handy,’ but it is totally void of any empirical value.

Some economic methodologists have lately been arguing that economic models may well be considered ‘minimal models’ that portray ‘credible worlds’ without having to care about things like similarity, isomorphism, simplified ‘representationality’ or resemblance to the real world. These models are said to resemble ‘realistic novels’ that portray ‘possible worlds’. And sure: economists constructing and working with those kinds of models learn things about what might happen in those ‘possible worlds’. But is that really the stuff real science is made of? I think not. As long as one doesn’t come up with credible export warrants to real-world target systems and show how those models — often building on idealizations with assumptions known to be false — enhance our understanding or explanations of the real world, they are nothing more than novels. Showing that something is possible in a ‘possible world’ doesn’t give us a justified license to infer that it is therefore also possible in the real world. ‘The Great Gatsby’ is a wonderful novel, but if you truly want to learn about what is going on in the world of finance, I would rather recommend reading Minsky or Keynes and directly confronting real-world finance.

Different models have different cognitive goals. Constructing models that aim for explanatory insight does not necessarily optimize them for making (quantitative) predictions or for delivering some other kind of ‘understanding’ of what’s going on in the intended target systems. All modelling in science involves tradeoffs. There simply is no ‘best’ model. For one purpose in one context model A is ‘best’; for other purposes and contexts model B may be deemed ‘best’. Depending on the level of generality, abstraction, and depth, we come up with different models. But even so, I would argue that if we are looking for what I have called ‘adequate explanations’ (Syll, Ekonomisk teori och metod, Studentlitteratur, 2005), it is not enough to just come up with ‘minimal’ or ‘credible world’ models.

The assumptions and descriptions we use in our modelling have to be true — or at least ‘harmlessly’ false — and give a sufficiently detailed characterization of the mechanisms and forces at work. Models in mainstream economics do nothing of the kind.

Coming up with models that show how things may possibly be explained is not what we are looking for. It is not enough. We want models that build on assumptions that are not in conflict with known facts and that show how things actually are to be explained. Our aspirations have to be more far-reaching than just constructing coherent and ‘credible’ models about ‘possible worlds’. We want to understand and explain ‘difference-making’ in the real world and not just in some made-up fantasy world. No matter how many mechanisms or coherent relations you represent in your model, you still have to show that these mechanisms and relations are at work and exist in society if we are to do real science. Science has to be something more than just more or less realistic ‘story-telling’ or ‘explanatory fictionalism.’ You have to provide decisive empirical evidence that what you can infer in your model also helps us to uncover what actually goes on in the real world.

It is not enough to present your students with epistemically informative insights about logically possible but non-existent general equilibrium models. You also, and more importantly, have to provide a world-linking argumentation and show how those models explain or teach us something about real-world economies. If you fail to support your models in that way, why should we care about them? And if you do not inform us about what the real-world intended target systems of your modelling are, how are we going to be able to evaluate or test them? Without that kind of information it is impossible for us to check whether the ‘possible world’ models you come up with actually hold for the one world in which we live — the real world.

The Rivers of Belief

23 Feb, 2023 at 01:23 | Posted in Varia | Comments Off on The Rivers of Belief

.

“Est-il de vérité plus douce que l’espérance?”

21 Feb, 2023 at 16:41 | Posted in Varia | Comments Off on “Est-il de vérité plus douce que l’espérance?”

.

R.E.M.

20 Feb, 2023 at 15:07 | Posted in Varia | Comments Off on R.E.M.

.

The Keynes-Tinbergen debate on econometrics

20 Feb, 2023 at 13:35 | Posted in Statistics & Econometrics | 1 Comment

It is widely recognized but often tacitly neglected that all statistical approaches have intrinsic limitations that affect the degree to which they are applicable to particular contexts … John Maynard Keynes was perhaps the first to provide a concise and comprehensive summation of the key issues in his critique of Jan Tinbergen’s book Statistical Testing of Business Cycle Theories

Keynes’s intervention has, of course, become the basis of the “Tinbergen debate” and is a touchstone whenever historically or philosophically informed methodological discussion of econometrics is undertaken. It has remained the case, however, that Keynes’s concerns with the “logical issues” regarding the “conditions which the economic material must satisfy” still gain little attention in theory and practice.

Muhammad Ali Nasir & Jamie Morgan

Mainstream economists often hold the view that Keynes’ criticism of econometrics came from a sadly misinformed and misguided person who disliked and did not understand much of it.

This is, however, as Nasir and Morgan convincingly argue, nothing but a gross misapprehension.

To be careful and cautious is not the same as to dislike. Keynes did not misunderstand the crucial issues at stake in the development of econometrics. Quite the contrary. He knew them all too well — and was not satisfied with the validity and philosophical underpinning of the assumptions made for applying its methods.

Keynes’ critique of the “logical issues” regarding the conditions that have to be satisfied if we are going to be able to apply econometric methods, is still valid and unanswered in the sense that the problems he pointed at are still with us today and largely unsolved. Ignoring them — the most common practice among applied econometricians — is not to solve them.

To apply statistical and mathematical methods to the real-world economy, the econometrician has to make some quite strong assumptions. In a review of Tinbergen’s econometric work — published in The Economic Journal in 1939 — Keynes gave a comprehensive critique, focusing on the limiting and unreal character of the assumptions that econometric analyses build on:

Completeness: Where Tinbergen attempts to specify and quantify which factors influence the business cycle, Keynes maintains there must be a complete list of all the relevant factors to avoid misspecification and spurious causal claims. Usually, this problem is ‘solved’ by econometricians assuming that they somehow have a ‘correct’ model specification. Keynes is, to put it mildly, unconvinced:

It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.

Homogeneity: To make inductive inferences possible — and be able to apply econometrics — the system we try to analyse has to have a large degree of ‘homogeneity.’ According to Keynes most social and economic systems — especially from the perspective of real historical time — lack that ‘homogeneity.’ As he had argued already in Treatise on Probability (ch. 22), it wasn’t always possible to take repeated samples from a fixed population when we were analysing real-world economies. In many cases, there simply are no reasons at all to assume the samples to be homogeneous. Lack of ‘homogeneity’ makes the principle of ‘limited independent variety’ non-applicable, and hence makes inductive inferences, strictly seen, impossible since one of its fundamental logical premises is not satisfied. Without “much repetition and uniformity in our experience” there is no justification for placing “great confidence” in our inductions (TP ch. 8).

And then, of course, there is also the ‘reverse’ variability problem of non-excitation: factors that do not change significantly during the period analysed, can still very well be extremely important causal factors.

Stability: Tinbergen assumes there is a stable spatio-temporal relationship between the variables his econometric models analyze. But as Keynes had argued already in his Treatise on Probability it was not really possible to make inductive generalisations based on correlations in one sample. As later studies of ‘regime shifts’ and ‘structural breaks’ have shown us, it is exceedingly difficult to find and establish the existence of stable econometric parameters for anything but rather short time series.
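A minimal sketch of the point about regime shifts (Python, with invented data): a relationship whose slope changes halfway through the sample. A regression over the full series delivers one tidy ‘stable’ parameter — which describes neither regime.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200                                   # 200 'quarters' of invented data
x = rng.normal(size=T)
y = np.empty(T)

# Regime 1: slope +2.0 for the first half; regime 2: slope -1.0 after a structural break.
y[:100] =  2.0 * x[:100] + rng.normal(scale=0.5, size=100)
y[100:] = -1.0 * x[100:] + rng.normal(scale=0.5, size=100)

def ols_slope(xs, ys):
    X = np.column_stack([np.ones(len(xs)), xs])
    return np.linalg.lstsq(X, ys, rcond=None)[0][1]

print(f"slope, first half:  {ols_slope(x[:100], y[:100]):+.2f}")
print(f"slope, second half: {ols_slope(x[100:], y[100:]):+.2f}")
print(f"slope, full sample: {ols_slope(x, y):+.2f}   <- a 'stable parameter' that fits no regime")
```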

Measurability: Tinbergen’s model assumes that all relevant factors are measurable. Keynes questions if it is possible to adequately quantify and measure things like expectations and political and psychological factors. And more than anything, he questioned — both on epistemological and ontological grounds — that it was always and everywhere possible to measure real-world uncertainty with the help of probabilistic risk measures. Thinking otherwise can, as Keynes wrote, “only lead to error and delusion.”

Independence: Tinbergen assumes that the variables he treats are independent (still a standard assumption in econometrics). Keynes argues that in such a complex, organic and evolutionary system as an economy, independence is a deeply unrealistic assumption to make. Building econometric models on that kind of simplistic and unrealistic assumption risks producing nothing but spurious correlations and causalities. Real-world economies are organic systems for which the statistical methods used in econometrics are ill-suited, or even, strictly seen, inapplicable. Mechanical probabilistic models have little leverage when applied to non-atomic evolving organic systems — such as economies.

It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis … that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish.

Building econometric models can’t be a goal in itself. Good econometric models are means that make it possible for us to infer things about the real-world systems they ‘represent.’ If we can’t show that the mechanisms or causes that we isolate and handle in our econometric models are ‘exportable’ to the real world, they are of limited value to our understanding, explanations or predictions of real-world economic systems.

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state …

The scientist wishes, in fact, to assume that the occurrence of a phenomenon which has appeared as part of a more complex phenomenon, may be some reason for expecting it to be associated on another occasion with part of the same complex. Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts.

Linearity: To make his models tractable, Tinbergen assumes the relationships between the variables he studies to be linear. This is still standard procedure today, but as Keynes writes:

It is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous.

To Keynes, it was a ‘fallacy of reification’ to assume that all quantities are additive (an assumption closely linked to independence and linearity).

The unpopularity of the principle of organic unities shows very clearly how great is the danger of the assumption of unproved additive formulas. The fallacy, of which ignorance of organic unity is a particular instance, may perhaps be mathematically represented thus: suppose f(x) is the goodness of x and f(y) is the goodness of y. It is then assumed that the goodness of x and y together is f(x) + f(y) when it is clearly f(x + y) and only in special cases will it be true that f(x + y) = f(x) + f(y). It is plain that it is never legitimate to assume this property in the case of any given function without proof.

J. M. Keynes “Ethics in Relation to Conduct” (1903)

And as even one of the founding fathers of modern econometrics — Trygve Haavelmo — wrote:

What is the use of testing, say, the significance of regression coefficients, when maybe, the whole assumption of the linear regression equation is wrong?
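Haavelmo’s worry is easy to reproduce. In the sketch below (Python, invented data) the outcome depends strongly on x, but not linearly; the routine linear specification returns a slope near zero with an ‘insignificant’ t-statistic, so the significance test answers a question the data never posed.

```python
import numpy as np

rng = np.random.default_rng(11)
half = rng.normal(size=250)
x = np.concatenate([half, -half])              # symmetric sample: x and x^2 are uncorrelated in-sample
n = len(x)
y = x**2 + rng.normal(scale=0.3, size=n)       # a strong but thoroughly non-linear relationship

# Fit the routine linear specification y = a + b*x and compute the usual t-statistic for b.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
se_b = np.sqrt(resid.var(ddof=2) / (n * x.var()))

print(f"linear slope b: {coef[1]:+.3f},  t-statistic: {coef[1] / se_b:+.2f}  -> 'no significant effect'")
print(f"correlation of y with x^2: {np.corrcoef(y, x**2)[0, 1]:.2f}          -> yet the relationship is strong")
```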

Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms and variables — and the relationships between them — to be linear, additive, homogeneous, stable, invariant and atomistic. But — when causal mechanisms operate in the real world they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. Since statisticians and econometricians — as far as I can see — haven’t been able to convincingly warrant their assumptions of homogeneity, stability, invariance, independence, and additivity as being ontologically isomorphic to real-world economic systems, Keynes’ critique is still valid. As long as — as Keynes writes in a letter to Frisch in 1935 — “nothing emerges at the end which has not been introduced expressly or tacitly at the beginning,” I remain doubtful of the scientific aspirations of econometrics. Especially when it comes to using econometrics for making causal inferences, it still often rests on counterfactual assumptions that have outrageously weak grounds.

In his critique of Tinbergen, Keynes points us to the fundamental logical, epistemological and ontological problems of applying statistical methods to a basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing social reality. Methods designed to analyse repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic and non-atomistic world where time and history play decisive roles.

Econometric modelling should never be a substitute for thinking. From that perspective, it is really depressing to see how much of Keynes’ critique of the pioneering econometrics in the 1930s-1940s is still relevant today.

The general line you take is interesting and useful. It is, of course, not exactly comparable with mine. I was raising the logical difficulties. You say in effect that, if one was to take these seriously, one would give up the ghost in the first lap, but that the method, used judiciously as an aid to more theoretical enquiries and as a means of suggesting possibilities and probabilities rather than anything else, taken with enough grains of salt and applied with superlative common sense, won’t do much harm. I should quite agree with that. That is how the method ought to be used.

J. M. Keynes, letter to E.J. Broster, December 19, 1939

The randomization tools economists use

19 Feb, 2023 at 11:57 | Posted in Theory of Science & Methodology | Comments Off on The randomization tools economists use

Preference-based discrimination is based on the fact that, for example, employers, customers, or colleagues have a dislike for those who belong to a certain group. Such discrimination can lead to wage differences between discriminated and non-discriminated groups. However, competition can undermine these wage differences, as non-discriminatory employers will make greater profits and drive discriminatory employers out of the market. Since many markets are not characterized by perfect competition, there is still the possibility for this type of discrimination to persist.

In contrast, statistical discrimination describes situations where those who belong to different groups are affected by expectations about the group’s average characteristics. These differences in characteristics between different groups can be real or based on pure prejudice, but it is difficult to see why profit-maximizing employers would not realize that pure prejudices are just that. They can have a lot to gain by finding out how things actually are and adapting their behavior accordingly. However, even what starts out as a pure prejudice can become a self-fulfilling prophecy with actual consequences. If employers in general avoid investing in a group of employees because they are expected to prioritize family over career, the group in question may find it completely rational to prioritize family.

Mikael Priks & Jonas Vlachos

Priks and Vlachos’ The Tools of Economics is an expression of a new trend in economics, where there is a growing interest in experiments and — not least — how to design them to possibly provide answers to questions about causality and policy effects. Economic research on discrimination nowadays often emphasizes the importance of a randomization design, for example when trying to determine to what extent discrimination can be causally attributed to differences in preferences or information, using so-called correspondence tests and field experiments.

A common starting point is the ‘counterfactual approach’ developed mainly by Neyman and Rubin, which is often presented and discussed based on examples of randomized control studies, natural experiments, difference in difference, matching, regression discontinuity, etc.

Mainstream economists generally view this development of the economics toolbox positively. Since yours truly — like, for example, Nancy Cartwright and Angus Deaton — is not entirely positive about the randomization approach, it may perhaps be interesting for the reader to hear some of my criticisms.

A notable limitation of counterfactual randomization designs is that they only give us answers on how ‘treatment groups’ differ on average from ‘control groups.’ Let me give an example to illustrate how limiting this fact can be:

Among school debaters and politicians in Sweden, it is claimed that so-called ‘independent schools’ (charter schools) are better than municipal schools. They are said to lead to better results. To find out if this is really the case, a number of students are randomly selected to take a test. The result could be: Test result = 20 + 5T, where T = 1 if the student attends an independent school and T = 0 if the student attends a municipal school. This would confirm the assumption that independent-school students score, on average, 5 points higher than students in municipal schools. Now, politicians (hopefully) are aware that this statistical result cannot be interpreted in causal terms, because independent-school students typically do not have the same background (socio-economic, educational, cultural, etc.) as those who attend municipal schools (the relationship between school type and result is confounded by selection bias).

To obtain a better measure of the causal effect of school type, politicians suggest that admission to an independent school be allocated by lottery among 1000 students — a classic example of a randomization design in natural experiments. The chance of winning is 10%, so 100 students are given this opportunity. Of these, 20 accept the offer to attend an independent school. Of the 900 lottery participants who do not ‘win,’ 100 choose to attend an independent school anyway.

The lottery is often perceived by school researchers as an ‘instrumental variable,’ and when the analysis is carried out, the result is: Test result = 20 + 2T. This is standardly interpreted as a causal measure of how much better students would, on average, perform on the test if they attended an independent school instead of a municipal school. But is it true? No! Unless all students respond identically to school type (a rather far-fetched ‘homogeneity assumption’), the estimated average causal effect only applies to the students who choose to attend an independent school if they ‘win’ the lottery, but who would not otherwise do so (in statistical jargon, we call these ‘compliers’). It is difficult to see why this group of students would be particularly interesting in this example, given that the average causal effect estimated using the instrumental variable says nothing at all about the effect on the majority of those who actually choose to attend an independent school (the 100 out of 120 who do so without ‘winning’ the lottery).
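The complier point can be made explicit with a small simulation (Python; the type shares and effect sizes below are invented for illustration, loosely patterned on the example above, and not calibrated to any real data). Students come in three types — always-takers, never-takers and compliers — with different gains from attending an independent school, and the instrumental-variable (Wald) estimate recovers the gain for compliers only.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

# Invented student types and (heterogeneous) gains from attending an independent school.
types  = rng.choice(["always", "never", "complier"], size=n, p=[0.10, 0.80, 0.10])
effect = np.where(types == "always", 8.0, np.where(types == "complier", 2.0, 5.0))

win    = rng.random(n) < 0.10                              # the admission lottery (the instrument)
attend = (types == "always") | ((types == "complier") & win)

score = 20 + rng.normal(0, 3, n) + effect * attend

# Wald / IV estimate: intention-to-treat effect divided by the first stage.
itt         = score[win].mean() - score[~win].mean()
first_stage = attend[win].mean() - attend[~win].mean()

print(f"IV (Wald) estimate:               {itt / first_stage:.2f}")
print(f"true average gain for compliers:  {effect[types == 'complier'].mean():.2f}")
print(f"true average gain for attendees:  {effect[attend].mean():.2f}")
```

In this made-up world the IV estimate (about 2) is spot on for the compliers but says next to nothing about the students who actually attend independent schools, whose average gain is far larger — which is precisely the interpretational problem described above.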

Conclusion: Researchers must be much more careful in interpreting ‘average estimates’ as causal. Reality exhibits a high degree of heterogeneity, and ‘average parameters’ often tell us very little!

To randomize ideally means that we achieve orthogonality (independence) in our models. But it does not mean that when we randomize in real experiments we actually achieve this ideal. The ‘balance’ that randomization should ideally result in cannot be taken for granted when the ideal is translated into reality. Here, one must argue and verify that the ‘assignment mechanism’ is truly stochastic and that ‘balance’ has indeed been achieved!
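A small sketch of why ‘balance’ cannot be taken for granted (Python, invented covariate): even under a perfectly fair 50/50 randomization of 100 units, chance imbalances in a background variable large enough to matter occur in a sizable share of assignments — which is why the assignment mechanism and the achieved balance have to be checked and argued for, not merely assumed.

```python
import numpy as np

rng = np.random.default_rng(9)
n, reps = 100, 10_000
big_imbalance = 0

for _ in range(reps):
    covariate = rng.normal(size=n)               # e.g. a standardized background variable
    treated = rng.permutation(n) < n // 2        # a perfectly fair 50/50 random assignment
    smd = (covariate[treated].mean() - covariate[~treated].mean()) / covariate.std()
    big_imbalance += abs(smd) > 0.2              # a common rule-of-thumb threshold

print(f"share of fair randomizations with a notable imbalance: {big_imbalance / reps:.0%}")
```

With 100 units the share comes out at roughly a third — randomization guarantees balance only in expectation, not in the one sample we actually get.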

Even if we accept the limitation of only being able to say something about average treatment effects, there is another theoretical problem. An ideal randomized experiment assumes that a number of individuals are first chosen from a randomly selected population and then randomly assigned to a treatment group or a control group. Given that both selection and assignment are successfully carried out randomly, it can be shown that the expected outcome difference between the two groups is the average causal effect in the population. The snag is that the experiments conducted almost never involve participants selected from a random population! In most cases, experiments are started because there is a problem of some kind in a given population (e.g., schoolchildren or job seekers in country X) that one wants to address.

Since an ideal randomized experiment assumes that both selection and assignment are randomized, this means that virtually none of the empirical results that randomization advocates so eagerly tout hold up in a strict mathematical-statistical sense. The fact that only assignment is talked about when it comes to ‘as if’ randomization in natural experiments is hardly a coincidence. Moreover, in such natural experiments the sad but inevitable fact is that there can always be a dependency between the variables being studied and unobservable factors in the error term — a dependency that can never be tested!

Another significant problem is that researchers who use these randomization-based research strategies often set up problem formulations that are not at all the ones we really want answers to, in order to achieve ‘exact’ and ‘precise’ results. Design becomes the main thing, and as long as researchers can get more or less clever experiments in place, they believe they can draw far-reaching conclusions about both causality and the ability to generalize experimental outcomes to larger populations. Unfortunately, this often means that this type of research is biased away from interesting and important problems and towards prioritizing the choice of method. Design and research planning are important, but the credibility of research ultimately lies in being able to provide answers to relevant questions that both citizens and researchers want answers to.

Believing there is only one really good evidence-based method on the market — and that randomization is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. Insisting on using only one tool often means using the wrong tool.

He ain’t heavy, he’s my brother (personal)

18 Feb, 2023 at 15:06 | Posted in Varia | Comments Off on He ain’t heavy, he’s my brother (personal)

.

In loving memory of my older brother Peter Pålsson (1955-2001)

But in dreams,
I can hear your name.
And in dreams,
We will meet again.

When the seas and mountains fall
And we come to end of days,
In the dark I hear a call
Calling me there
I will go there
And back again.

Economics as religion

18 Feb, 2023 at 14:49 | Posted in Economics | Comments Off on Economics as religion

Contrary to the tenets of orthodox economists, contemporary research suggests that, rather than seeking always to maximise our personal gain, humans still remain reasonably altruistic and selfless. Nor is it clear that the endless accumulation of wealth always makes us happier. And when we do make decisions, especially those to do with matters of principle, we seem not to engage in the sort of rational “utility-maximizing” calculus that orthodox economic models take as a given. The truth is, in much of our daily life we don’t fit the model all that well.

For decades, neoliberal evangelists replied to such objections by saying it was incumbent on us all to adapt to the model, which was held to be immutable – one recalls Bill Clinton’s depiction of neoliberal globalisation, for instance, as a “force of nature”. And yet, in the wake of the 2008 financial crisis and the consequent recession, there has been a turn against globalisation across much of the west …

It would be tempting for anyone who belongs to the “expert” class, and to the priesthood of economics, to dismiss such behaviour as a clash between faith and facts, in which the facts are bound to win in the end. In truth, the clash was between two rival faiths – in effect, two distinct moral tales. So enamoured had the so-called experts become with their scientific authority that they blinded themselves to the fact that their own narrative of scientific progress was embedded in a moral tale. It happened to be a narrative that had a happy ending for those who told it, for it perpetuated the story of their own relatively comfortable position as the reward of life in a meritocratic society that blessed people for their skills and flexibility. That narrative made no room for the losers of this order, whose resentments were derided as being a reflection of their boorish and retrograde character – which is to say, their fundamental vice. The best this moral tale could offer everyone else was incremental adaptation to an order whose caste system had become calcified. For an audience yearning for a happy ending, this was bound to be a tale of woe.

The failure of this grand narrative is not, however, a reason for students of economics to dispense with narratives altogether. Narratives will remain an inescapable part of the human sciences for the simple reason that they are inescapable for humans. It’s funny that so few economists get this, because businesses do.

Yes indeed, one would think it self-evident that “the facts are bound to win in the end.” But still, mainstream economists seem to be impressed by the ‘rigor’ they bring to macroeconomics with their totally unreal New-Classical-New-Keynesian DSGE models and their rational expectations and microfoundations!

It is difficult to see why.

Take the rational expectations assumption. Rational expectations in the mainstream economists’ world imply that relevant distributions have to be time-independent. This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality, it is straining one’s beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time.

It is — to say the least — very difficult to see any similarity between these modeling assumptions and the expectations of real persons. In the world of the rational expectations hypothesis, we are never disappointed in any other way than when we lose at the roulette wheel. But real life is not an urn or a roulette wheel. And that is also the reason why allowing for cases where agents make ‘predictable errors’ in DSGE models doesn’t take us any closer to a relevant and realist depiction of actual economic decisions and behaviors. If we really want to have anything of interest to say about real economies, financial crises, and the decisions and choices real people make, we have to replace the rational expectations hypothesis with more relevant and realistic assumptions concerning economic agents and their expectations than childish roulette and urn analogies.
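To see what is at stake in treating an economy as ‘one realization out of an ensemble of economy-worlds,’ consider a toy sketch (Python, purely illustrative — no claim that real economies follow this particular process). For a non-stationary process the probability distribution is anything but time-independent, and the ensemble average across thousands of imagined ‘worlds’ tells us very little about the single path we actually get to live through.

```python
import numpy as np

rng = np.random.default_rng(2023)
worlds, T = 10_000, 400

# A deliberately simple non-stationary process: a random walk. Its distribution
# changes over time, so there is no fixed 'known probability distribution for all events.'
shocks = rng.normal(size=(worlds, T))
paths = shocks.cumsum(axis=1)

print("ensemble mean at t=100 and t=400:", paths[:, 99].mean().round(2), paths[:, 399].mean().round(2))
print("ensemble std  at t=100 and t=400:", paths[:, 99].std().round(2), paths[:, 399].std().round(2))

# The one 'world' we actually observe:
single = paths[0]
print("where the single realized path ends up:", single[-1].round(2))
print("its own time average along the way:    ", single.mean().round(2))
```

The ensemble mean sits at zero throughout while its spread keeps growing, and the single realized path — the only one history provides — bears no stable relation to either.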

‘Rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no empirical evidence that is in any way decisive has been presented.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward one single millimeter if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous, or certain the inferences delivered inside these models are, they say nothing about real-world economies.

Proving things ‘rigorously’ in DSGE models is at most a starting point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.


Mainstream economists think there is a gain from the DSGE style of modeling in its capacity to offer some kind of structure around which to organize discussions. To me, that sounds more like religious theoretical-methodological dogma, where one paradigm rules in divine hegemony. That’s not progress. That’s the death of economics as a science.
