Randomization — a philosophical device gone astray

23 Nov, 2017 at 10:30 | Posted in Theory of Science & Methodology | 1 Comment

When giving courses in the philosophy of science, yours truly has often had David Papineau’s book Philosophical Devices (OUP 2012) on the reading list. Overall it is a good introduction to many of the instruments used in methodological and science-theoretical analyses of economics and the other social sciences.

Unfortunately, the book has also fallen prey to the randomization hype that scourges the sciences nowadays.

The hard way to show that alcohol really is a cause of heart disease is to survey the population … But there is an easier way … Suppose we are able to perform a ‘randomized experiment.’ The idea here is not to look at correlations in the population at large, but rather to pick out a sample of individuals, and arrange randomly for some to have the putative cause and some not.

The point of such a randomized experiment is to ensure that any correlation between the putative cause and effect does indicate a causal connection. This works because the randomization ensures that the putative cause is no longer itself systematically correlated with any other properties that exert a causal influence on the putative effect … So a remaining correlation between the putative cause and effect must mean that they really are causally connected.
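
Papineau’s mechanism is easy to make concrete. Below is a minimal sketch in Python of the idealized situation he describes; the variable names, effect sizes and seeds are all invented for illustration. A hypothetical confounder drives both drinking and heart disease, drinking itself has no effect at all by construction, and random assignment removes the spurious correlation that the observational comparison picks up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder (say, lifestyle) that raises both the
# propensity to drink and the risk score for heart disease.
confounder = rng.normal(size=n)

# Observational world: drinking tracks the confounder, and drinking
# itself has NO effect on the outcome by construction.
drinks_obs = (confounder + rng.normal(size=n)) > 0
disease_obs = 2 * confounder + rng.normal(size=n)
print(disease_obs[drinks_obs].mean() - disease_obs[~drinks_obs].mean())

# Randomized world: assignment is a coin flip, independent of the confounder.
drinks_rct = rng.random(n) < 0.5
disease_rct = 2 * confounder + rng.normal(size=n)
print(disease_rct[drinks_rct].mean() - disease_rct[~drinks_rct].mean())
```

The first printed contrast is clearly nonzero although there is no causal effect, while the randomized contrast hovers around zero. That is the ideal case. The objections below concern what this ideal does, and does not, deliver in practice.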

The problem with this simplistic view of randomization is that the claims Papineau makes on its behalf are both exaggerated and invalid:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection, except in extremely rare cases, certainly is not. Even if we make a proper randomized assignment, applying the results to a biased sample always carries the risk that the experimental findings will not apply: what works ‘there’ does not work ‘here.’ Randomization a fortiori does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything! (The first sketch after this list illustrates the point.)

• Even if both the sampling and the assignment are made in an ideally random way, standard randomized experiments only give you averages. The problem is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous causal effects. Even when we get the right answer, an average causal effect of 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably want to know about this underlying heterogeneity and would not consider the average effect particularly enlightening. (The second sketch below makes this concrete.)

• Since most real-world experiments and trials build on a single randomization, contemplating what would happen if you kept on randomizing forever does not ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually perform. It is indeed difficult to see why thinking about what you know you will never do should reassure you about what you actually do. (The third sketch below shows how a single draw can mislead.)
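
The first sketch, with invented numbers throughout, illustrates the sample-selection point: a perfectly randomized experiment run on a self-selected sample of ‘responders’ yields an estimate that is right ‘there’ but misleading ‘here,’ in the population one actually cares about.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def rct_estimate(share_responders):
    """Ideal RCT on a group where the treatment only helps 'responders'."""
    responder = rng.random(n) < share_responders
    treat = rng.random(n) < 0.5                 # perfectly random assignment
    effect = np.where(responder, 2.0, 0.0)      # individual causal effect
    outcome = effect * treat + rng.normal(size=n)
    return outcome[treat].mean() - outcome[~treat].mean()

print(rct_estimate(0.9))  # ~1.8 in a biased sample of volunteers ('there')
print(rct_estimate(0.2))  # ~0.4 in the population at large ('here')
```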
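
The second sketch reproduces the heterogeneity point with the numbers used above: individual causal effects of -100 and +100 that a standard randomized experiment faithfully averages to roughly zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Half the population is harmed (-100), half is helped (+100).
individual_effect = np.where(rng.random(n) < 0.5, -100.0, 100.0)

treat = rng.random(n) < 0.5                     # ideal random assignment
outcome = individual_effect * treat + rng.normal(size=n)

print(outcome[treat].mean() - outcome[~treat].mean())  # ~0, masking the ±100
```

The experiment gets the average exactly right and says nothing about the half of the treated for whom the treatment is, by construction, catastrophic.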
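
The third sketch addresses the single-randomization point: across many hypothetical re-randomizations a background variable balances out on average, but a non-trivial share of individual draws, possibly including the one you actually run, are badly imbalanced. Trial size and the imbalance threshold are, again, invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50                            # a realistically small trial
confounder = rng.normal(size=n)   # some background variable

diffs = []
for _ in range(10_000):           # the re-randomizations you never perform
    treat = rng.permutation(n) < n // 2         # one random allocation
    diffs.append(confounder[treat].mean() - confounder[~treat].mean())

diffs = np.array(diffs)
print(diffs.mean())                  # ~0: balanced on average, across draws
print((np.abs(diffs) > 0.5).mean())  # share of single draws that are
                                     # badly imbalanced anyway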

Randomization is not a panacea — it is not the best method for all questions and circumstances. Papineau and other proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.

1 Comment

  1. There is something about “random” as a concept that seems to completely befuddle shared human intuitions about the way the world works. Is it entropy? Ignorance? Is entropy ignorance? Is distinguishing the signal from the noise a matter of finding the meaning of the noise? Or the meaning of the signal? Is the signal a product of the design of a voice or an ear?

    Probability concepts were developed in large part by examining gambling problems, and the casino became a source of foundational metaphors, but with only rare references to concepts of control. A casino is a minutely controlled environment, engineered to deliver precisely regular events from nearly perfectly controlled processes, events ready-made to be measured and aggregated as statistics without any loss of information. How someone as intelligent as Papineau obviously is could dupe himself into imagining complex, uncontrolled (or very imperfectly and irregularly controlled) nature or society as presenting itself as a casino, and a conveniently willy-nilly casino at that, stuffing our near-total ignorance into dark corners of linearly balanced irrelevancy? Just asking the question is exasperating.

    It seems to me that we need to start again: let loose our grip on our “solutions” and try to grasp the problem anew; see the problem as identifying how the complex systems of nature or society (as our curiosity may lead us) come to create emergent control of processes; and build a theory of knowledge that acknowledges how we ourselves come to recognize our own power to intervene and grasp the levers of control, and to call that recognition knowledge.

    We are in the society we study, and our uncertainty is inextricably married to our emergent knowledge. The experimenter is part of the experiment; the observer is part of the observation. This business where otherwise intelligent people wipe intricacy away and pretend that some metaphoric “treatment” can be isolated in its effects by assumption, in lieu of imposing wholly impractical or impossible control, without even asking what that impossibility or impracticality implies ontologically for the general subject of study: we need to put this “method” aside as being what it is, no method at all.


