Postmodern thinking

1 Jul, 2020 at 15:08 | Posted in Theory of Science & Methodology | 1 Comment

The compulsive types there correspond to the paranoids here. The wistful opposition to factual research, the legitimate consciousness that scientism forgets what is best, exacerbates through its naïveté the split from which it suffers. Instead of comprehending the facts, behind which others are barricaded, it hurriedly throws together whatever it can grab from them, rushing off to play so uncritically with apocryphal cognitions, with a couple of isolated and hypostatized categories, and with itself, that it is easily disposed of by referring to the unyielding facts. It is precisely the critical element which is lost in the apparently independent thought. The insistence on the secret of the world hidden beneath the shell, which dares not explain how it relates to the shell, only reconfirms through such abstemiousness the thought that there must be good reasons for that shell, which one ought to accept without question. Between the pleasure of emptiness and the lie of plenitude, the ruling condition of the spirit [Geistes: mind] permits no third option.

Long before ‘postmodernism’ became fashionable among a certain kind of ‘intellectuals’, Adorno wrote searching critiques of this kind of thinking.

When listening to — or reading — the postmodern mumbo-jumbo that surrounds us today in social sciences and humanities, I often find myself wishing for that special Annie Hall moment of truth:

Cultures of expertise

21 Jun, 2020 at 23:32 | Posted in Theory of Science & Methodology | Leave a comment

 

Social science — a plaidoyer

16 Jun, 2020 at 08:37 | Posted in Theory of Science & Methodology | 4 Comments

One of the most important tasks of social sciences is to explain the events, processes, and structures that take place and act in society. But the researcher cannot stop at this. As a consequence of the relations and connections that the researcher finds, a will and demand arise for critical reflection on the findings. To show that unemployment depends on rigid social institutions or adaptations to European economic aspirations to integration, for instance, constitutes at the same time a critique of these conditions. It also entails an implicit critique of other explanations that one can show to be built on false beliefs. The researcher can never be satisfied with establishing that false beliefs exist but must go on to seek an explanation for why they exist. What is it that maintains and reproduces them? To show that something causes false beliefs – and to explain why – constitutes at the same time a critique.

This I think is something particular to the humanities and social sciences. There is no full equivalent in the natural sciences since the objects of their study are not fundamentally created by human beings in the same sense as the objects of study in social sciences. We do not criticize apples for falling to earth in accordance with the law of gravity.

The explanatory critique that constitutes all good social science thus has repercussions on the reflective person in society. To digest the explanations and understandings that social sciences can provide means a simultaneous questioning and critique of one’s self-understanding and the actions and attitudes it gives rise to. Science can play an important emancipating role in this way. Human beings can fulfill and develop themselves only if they do not base their thoughts and actions on false beliefs about reality. Fulfillment may also require changing fundamental structures of society. Understanding of the need for this change may issue from various sources like everyday praxis and reflection as well as from science.

Explanations of social phenomena must be subject to criticism, and this criticism must be an essential part of the task of social science. Social science has to be an explanatory critique. The researcher’s explanations have to constitute a critical attitude toward the very object of research, society. Hopefully, the critique may result in proposals for how the institutions and structures of society can be constructed. The social scientist has a responsibility to try to elucidate possible alternatives to existing institutions and structures.

In a time when scientific relativism is on the march, it is important to keep up the claim for not reducing science to a pure discursive level. Against all kinds of social constructivism we have to maintain the Enlightenment tradition of thinking of reality as something that is not created by our views of it and of the main task of science as studying the structure of this reality. Ontology is important. It is the foundation for all sustainable epistemologies.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is closed, and since social reality is fundamentally open, models of that kind do not explain anything of what happens in such a universe.

Crazy econometricians

12 May, 2020 at 17:47 | Posted in Theory of Science & Methodology | Comments Off on Crazy econometricians

With a few notable exceptions, such as the planetary systems, our most beautiful and exact applications of the laws of physics are all within the entirely artificial and precisely constrained environment of the modern laboratory … Haavelmo remarks that physicists are very clever. They confine their predictions to the outcomes of their experiments. They do not try to predict the course of a rock in the mountains and trace the development of the avalanche. It is only the crazy econometrician who tries to do that, he says.

Nancy Cartwright

John von Neumann on mathematics

12 May, 2020 at 15:53 | Posted in Theory of Science & Methodology | 2 Comments


My ten favourite science books

2 May, 2020 at 18:43 | Posted in Theory of Science & Methodology | 1 Comment


• Bhaskar, Roy (1978). A realist theory of science

• Cartwright, Nancy (2007). Hunting causes and using them

• Freedman, David (2010). Statistical models and causal inference

• Georgescu-Roegen, Nicholas (1971). The Entropy Law and the Economic Process

• Harré, Rom (1960). An introduction to the logic of the sciences

• Keynes, John Maynard (1936). The General Theory

• Lawson, Tony (1997). Economics and reality

• Lipton, Peter (2004). Inference to the best explanation 

• Marx, Karl (1867). Das Kapital

• Polanyi, Karl (1944). The Great Transformation

The relationship between logic and truth

26 Apr, 2020 at 14:20 | Posted in Theory of Science & Methodology | 13 Comments

 

To be ‘analytical’ and ‘logical’ is something most people find recommendable. These words have a positive connotation. Scientists are supposed to think more deeply than most other people because they use ‘logical’ and ‘analytical’ methods. In dictionaries, logic is often defined as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific. It means to separate a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how these parts operate and work. Building on that more or less ‘atomistic’ knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together.

The ‘analytical’ approach is typically used in economic modelling, where you start with a simple model with few isolated and idealized variables. By ‘successive approximations,’ you then add more and more variables and finally get a ‘true’ model of the whole.

This may sound like a convincing and good scientific approach.

But there is a snag!

The procedure only really works when you have a machine-like whole/system/economy where the parts appear in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we live in is not a ‘closed’ system. On the contrary. It is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy you try to analyze remains stable/invariant/constant, there is no chance the equations of the model will remain constant. That is the very rationale for economists’ (often only implicit) use of the assumption of ceteris paribus. But — nota bene — this can only be a hypothesis. You have to argue the case. If you cannot supply any sustainable justifications or warrants for the adequacy of making that assumption, the whole analytical economic project becomes pointless, non-informative nonsense. Not only do we have to assume that we can shield off variables from each other analytically (external closure). We also have to assume that each variable is itself amenable to being understood as a stable, regularity-producing machine (internal closure). And this, of course, we know is as a rule not possible.

Some things, relations, and structures are not analytically graspable. Trying to analyse parenthood, marriage, employment, etc., piece by piece doesn’t make sense. To be a chieftain, a capital-owner, or a slave is not an individual property of an individual. It can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system, and being a tribe-member presupposes a tribe. Since the ‘analytical’ approach does not take account of this, economic ‘analysis’ becomes uninformative nonsense.

Using ‘logical’ and ‘analytical’ methods in social sciences means that economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case. An adequate analysis of society and economy a fortiori cannot proceed by just adding up the acts and decisions of individuals. The whole is more than the sum of its parts.

Mainstream economics is built on using the ‘analytical’ method. The models built with this method presuppose that social reality is ‘closed.’ Since social reality is known to be fundamentally ‘open,’ it is difficult to see how models of that kind can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then imputing these closed conditions to society’s real structure is an unwarranted procedure that does not take the necessary ontological considerations seriously.

In the face of the kind of methodological individualism and rational choice theory that dominates mainstream economics, we have to admit that even if knowledge of the aspirations and intentions of individuals is a necessary prerequisite for explaining social events, it is far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that cannot be reduced to the intentions of individuals. Here, the ‘analytical’ method fails again.

The overarching flaw of the ‘analytical’ economic approach, with its methodological individualism and rational choice theory, is that it reduces social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not a Wittgensteinian ‘Tractatus-world’ characterized by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s, it has been an essential feature of economics to treat individuals ‘analytically’ as essentially independent and separate entities of action and decision. But in such a complex, organic and evolutionary system as an economy, that kind of independence is a deeply unrealistic assumption to make. Simply assuming strict independence between the variables we try to analyze doesn’t help us in the least if that hypothesis turns out to be unwarranted.

To be able to apply the ‘analytical’ approach, economists basically have to assume that the universe consists of ‘atoms’ that exercise their own separate and invariable effects in such a way that the whole is nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that ever-changing contexts make it futile to search for knowledge by making such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and so are not susceptible to atomistic analysis. The world is not reducible to a set of atomistic ‘individuals’ and ‘states.’ How variable X works and influences real-world economies in situation A cannot simply be assumed to be understood or explained by looking at how X works in situation B. Knowledge of X probably does not tell us much if we do not take into consideration how it depends on Y and Z. It can never be legitimate just to assume that the world is ‘atomistic.’ Assuming real-world additivity cannot be the right thing to do if the things we have around us, rather than being ‘atoms,’ are ‘organic’ entities.

If we want to develop a new and better economics, we have to give up the single-minded insistence on a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours on proving things in models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economies.

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ sets the aspiration level of economics too low for developing a realist and relevant science.

Economics is not mathematics or logic. It’s about society. The real world.

Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy with mainstream economic theory is that it thinks that the logic and mathematics used are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

‘Human logic’ has to supplant the classical — formal — logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

Mathematics and logic cannot establish the truth value of facts. Never have. Never will.

Why we need Big Theories

4 Apr, 2020 at 18:13 | Posted in Theory of Science & Methodology | Comments Off on Why we need Big Theories

 

Judea Pearl and interventionist causal models (wonkish)

11 Mar, 2020 at 10:53 | Posted in Theory of Science & Methodology | Comments Off on Judea Pearl and interventionist causal models (wonkish)

As X’s effect on some other variable in the system S depends on there being a possible intervention on X, and the possibility of an intervention in turn depends on the modularity of S, it is a necessary condition for something to be a cause that the system in which it is a cause is modular with respect to that factor. The requirement that all systems are modular with respect to their causes can, in a way, be regarded as an interventionist addition to the unmanipulable causes problem … This implication has also been criticized in particular by Nancy Cartwright. She has proposed that many causal systems are not modular … Pearl has responded to this in 2009 (sect. 11.4.7), where he proposes, on the one hand, that it is in general sufficient that a symbolic intervention can be performed on the causal model, for the determination of causal effects, and on the other hand that we nevertheless could isolate the individual causal contributions …

It is tempting—to philosophers at least—to equate claims in this literature, about the meaning of causal claims being given by claims about what would happen under a hypothetical intervention—or an explicit definition of causation to the same effect—with that same claim as it would be interpreted in a philosophical context. That is to say, such a claim would normally be understood there as giving the truth conditions of said causal claims. It is generally hard to know whether any such beliefs are involved in the scientific context. However, Pearl in particular has denied, in increasingly explicit terms, that this is what is intended … He has recently liked to describe a factor Y, that is causally dependent on another factor X, as “listening” to X and determining “its value in response to what it hears” … This formulation suggests to me that it is the fact that Y is “listening” to X that explains why and how Y changes under an intervention on X. That is, what a possible intervention does is to isolate the influence that X has on Y, in virtue of Y’s “listening” to X. Thus, Pearl’s theory does not imply an interventionist theory of causation, as we understand that concept in this monograph. This, moreover, suggests that the intervention that is always available, for any cause that is represented by a variable in a causal model, is a formal operation. I take this to be supported by the way he responds to Nancy Cartwright’s objection that modularity does not hold of all causal systems: it is sufficient that a symbolic intervention can be performed. Thus, the operation alluded to in Pearl’s operationalization of causation is a formal operation, always available, regardless of whether it corresponds to any possible intervention event or not.

An interesting dissertation, well worth reading for anyone interested in the ongoing debate on the reach of interventionist causal theories.
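To make the idea of a purely ‘symbolic’ intervention concrete, here is a minimal sketch in Python (the variable names and structural equations are hypothetical illustrations, not taken from Pearl or the dissertation). The do-operation replaces only the ‘listening’ equation of the intervened variable and leaves every other equation intact; that single-equation surgery is precisely the modularity assumption Cartwright questions.

```python
# Toy structural causal model (SCM): Z -> X -> Y, with Z also a direct
# cause of Y. Each endogenous variable "listens" to its parents:
#   X := Z          (X listens to Z)
#   Y := 2*X + Z    (Y listens to X and Z)

def simulate(z, do_x=None):
    """Evaluate the SCM for exogenous input z.

    do_x implements a symbolic intervention do(X = do_x): X's own
    structural equation is replaced by the constant do_x, while the
    equation for Y is left untouched (the modularity assumption)."""
    x = z if do_x is None else do_x
    y = 2 * x + z
    return x, y

# Observationally X tracks Z, so Y = 3*Z. Under do(X = 0) the path
# from Z to Y remains, but X is held fixed, so Y = Z.
```

The point the quoted passage makes is visible here: nothing in this operation requires that fixing X be physically possible in the modelled system, which is the sense in which Pearl can answer Cartwright with a merely formal, always-available intervention.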

Critical realism

15 Feb, 2020 at 11:26 | Posted in Theory of Science & Methodology | 7 Comments

What properties do societies possess that might make them possible objects of knowledge for us? My strategy in developing an answer to this question will be effectively based on a pincer movement. But in deploying the pincer I shall concentrate first on the ontological question of the properties that societies possess, before shifting to the epistemological question of how these properties make them possible objects of knowledge for us. This is not an arbitrary order of development. It reflects the condition that, for transcendental realism, it is the nature of objects that determines their cognitive possibilities for us; that, in nature, it is humanity that is contingent and knowledge, so to speak, accidental. Thus it is because sticks and stones are solid that they can be picked up and thrown, not because they can be picked up and thrown that they are solid (though that they can be handled in this sort of way may be a contingently necessary condition for our knowledge of their solidity).

No philosopher of science has influenced yours truly’s thinking more than Roy did, and in a time when scientific relativism is still on the march, it is important to keep up his claim for not reducing science to a pure discursive level.

Science is made possible by the fact that there exists a reality beyond our theories and concepts of it. It is this reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is ‘closed,’ and since social reality is fundamentally ‘open,’ models of that kind cannot explain anything about what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.

What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the ‘deep structure’ of society.

Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.

Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.

The basic question one has to pose when studying social relations and events is: what are the fundamental relations without which they would cease to exist? The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they will have if they are, is not possible to predict, since this depends on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, through learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.

The world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realist knowledge if it acknowledges its dependence on the world out there. Ultimately, that also means that the critique yours truly wages against mainstream economics is that it does not take this ontological requirement seriously.

The logic of scientific discovery

3 Feb, 2020 at 15:53 | Posted in Theory of Science & Methodology | 1 Comment

It is because we are material things, possessed of the senses of sight and touch, that we accord priority in verifying existential claims to changes in material things. But scientists posit for these changes both continuants and causes, some of which are necessarily unperceivable. It is true that that a flash or a bang occurs does not entail that anything flashes or bangs. ‘Let there be light’ does not mean ‘let something shine.’ But a scientist can never rest content with effects: he must search for causes; and causes reside in or constitute things. Charged clouds, magnetic fields and radio stars can only be detected through their effects. But this does not lead us to deny their existence, any more than we can rationally doubt the existence of society or of language as a structure irreducible to its effects. There could be a world of electrons without material objects; and there could be a world of material objects without men. It is contingent that we exist (and so know this). But given that we do, no other position is rationally defensible. It is the nature of the world that determines which aspects of reality can be possible objects of knowledge for us.

Roy Bhaskar

Postmodern mumbo jumbo

29 Jan, 2020 at 13:09 | Posted in Theory of Science & Methodology | 3 Comments

Four important traits are common to the different movements:

    1. Central ideas are not explained.
    2. The grounds for a conviction are not stated.
    3. The presentation of the doctrine is marked by linguistic stereotypy …
    4. The same stereotypy prevails in the invocation of intellectual authorities: a limited number of names recur. Heidegger, Foucault, and Derrida come back, again and again …

To these four points, however, I would … add a fifth:

5. The person in question has nothing essentially new to say.

Exaggerated? Mean-spirited? Well, tastes differ. But taste this soup and then try to say that there is nothing to the old Lund professor’s characterization …

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

Judith Butler

RCTs — the danger of imposing a hierarchy of evidence

28 Dec, 2019 at 14:37 | Posted in Theory of Science & Methodology | Comments Off on RCTs — the danger of imposing a hierarchy of evidence

The imposition of a hierarchy of evidence is both dangerous and unscientific. Dangerous because it automatically discards evidence that may need to be considered, evidence that might be critical. Evidence from an RCT gets counted even when the population it covers is very different from the population where it is to be used, if it has only a handful of observations, if many subjects dropped out or refused to accept their assignments, or if there is no blinding and knowing you are in the experiment can be expected to change the outcome. Discounting trials for these flaws makes sense, but doesn’t help if it excludes more informative non-randomized evidence. By the hierarchy, evidence without randomization is no evidence at all, or at least is not “rigorous” evidence. An observational study is discarded even if it is well-designed, has no clear source of bias, and uses a very large sample of relevant people.

Angus Deaton

Understanding and misunderstanding RCTs

20 Dec, 2019 at 16:24 | Posted in Theory of Science & Methodology | Comments Off on Understanding and misunderstanding RCTs

 

Great lecture by one of my favourite philosophers of science.

Among other things, Nancy Cartwright underscores that the problem many ‘randomistas’ run into when they underestimate heterogeneity and interaction is not only an external validity problem that arises when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem for the millions of regression estimates that economists produce every year.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. And since trials usually are not repeated, unbiasedness and balance on average over repeated trials says nothing about any one trial. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes they are as a rule of limited value since they cannot tell us what background factors were held constant when the trial intervention was being made.

RCTs usually do not provide evidence that the results are exportable to other target systems. RCTs cannot be taken for granted to give generalizable results. That something works somewhere for someone is no warranty for us to believe it to work for us here or even that it works generally.

RCTs — assumptions, biases and limitations

25 Nov, 2019 at 15:01 | Posted in Theory of Science & Methodology | Comments Off on RCTs — assumptions, biases and limitations

Randomised experiments require much more than just randomising an experiment to identify a treatment’s effectiveness. They involve many decisions and complex steps that bring their own assumptions and degree of bias before, during and after randomisation …

Some researchers may respond, “are RCTs not still more credible than these other methods even if they may have biases?” For most questions we are interested in, RCTs cannot be more credible because they cannot be applied (as outlined above). Other methods (such as observational studies) are needed for many questions not amenable to randomisation, but also at times to help design trials, interpret and validate their results, and provide further insight on the broader conditions under which treatments may work, among other reasons discussed earlier. Different methods are thus complements (not rivals) in improving understanding.

Finally, randomisation does not always even out everything well at the baseline and it cannot control for endline imbalances in background influencers. No researcher should thus just generate a single randomisation schedule and then use it to run an experiment. Instead researchers need to run a set of randomisation iterations before conducting a trial and select the one with the most balanced distribution of background influencers between trial groups, and then also control for changes in those background influencers during the trial by collecting endline data. Though if researchers hold onto the belief that flipping a coin brings us closer to scientific rigour and understanding than for example systematically ensuring participants are distributed well at baseline and endline, then scientific understanding will be undermined in the name of computer-based randomisation.

Alexander Krauss
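Krauss’s re-randomisation suggestion (run many candidate randomisation schedules and keep the one with the most balanced distribution of background influencers between trial groups) can be sketched in a few lines. This is a toy version resting on simplifications of my own: a single covariate, equal-sized groups, and an illustrative function name and iteration count that are not taken from Krauss.

```python
import random
from statistics import mean

def most_balanced_assignment(covariate, n_iter=1000, seed=0):
    """Try n_iter candidate randomisations and keep the one with the
    smallest baseline imbalance (absolute difference in group means
    of a single covariate)."""
    rng = random.Random(seed)
    n = len(covariate)
    best, best_gap = None, float("inf")
    for _ in range(n_iter):
        idx = list(range(n))
        rng.shuffle(idx)
        treat, control = idx[: n // 2], idx[n // 2:]
        gap = abs(mean(covariate[i] for i in treat)
                  - mean(covariate[i] for i in control))
        if gap < best_gap:
            best, best_gap = (treat, control), gap
    return best, best_gap

# Made-up baseline ages for ten trial participants:
ages = [23, 54, 31, 67, 45, 29, 61, 38, 50, 42]
(treat, control), gap = most_balanced_assignment(ages)
```

In practice one would balance on several background influencers jointly and, as the passage stresses, also collect endline data to check for imbalances arising during the trial.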

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view of randomization is that the claims made are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not necessarily work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to −100 and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often an acceptable price to pay for greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on a single randomization, contemplating what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
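The heterogeneity point above is simple arithmetic. With hypothetical individual effects of −100 for one half of a sample and +100 for the other (numbers invented purely for illustration), the average causal effect is exactly 0 while describing no one:

```python
# Invented individual treatment effects: half harmed (-100), half helped (+100).
effects = [-100] * 50 + [100] * 50

# The average treatment effect an ideal RCT would estimate:
ate = sum(effects) / len(effects)  # 0.0

# Yet every individual's effect is maximally far from that average:
distance_from_ate = min(abs(e - ate) for e in effects)  # 100.0
```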

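The single-randomization point can also be put to a small simulation: over many hypothetical re-randomisations a background influencer balances out on average, but any single draw, i.e. the one trial you actually run, can still be noticeably imbalanced. The confounder values, group sizes and seeds below are made up for the illustration.

```python
import random
from statistics import mean

rng = random.Random(42)
# A made-up background influencer (say, a standardised health score) for 20 units:
confounder = [rng.gauss(0, 1) for _ in range(20)]

def group_gap(seed):
    """Difference in the confounder's group means under one randomisation."""
    r = random.Random(seed)
    idx = list(range(20))
    r.shuffle(idx)
    treat, control = idx[:10], idx[10:]
    return mean(confounder[i] for i in treat) - mean(confounder[i] for i in control)

gaps = [group_gap(s) for s in range(5000)]
average_gap = mean(gaps)               # close to 0: balance 'on average over repeated trials'
worst_gap = max(abs(g) for g in gaps)  # but a single trial can be far from balanced
```

Unbiasedness here is a property of the ensemble of 5,000 hypothetical trials, not of the one trial that gets run.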
Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.
