Journal of Failed Experiments

18 Feb, 2016 at 10:25 | Posted in Theory of Science & Methodology | 2 Comments

The advantage of randomised experiments in describing populations creates an illusion of knowledge … This happens because of the propensity of scientific journals to value so-called causal findings and not to value findings where no (so-called) causality is found. In brief, it is arguable that we know less than we think we do.

To see this, suppose—as is indeed the case in reality—that thousands of researchers in thousands of places are conducting experiments to reveal some causal link. Let us in particular suppose that there are numerous researchers in numerous villages carrying out randomised experiments to see whether M causes P. Words being more transparent than symbols, let us assume they want to see whether medicine (M) improves the school participation (P) of school-going children. In each village, 10 randomly selected children are administered M, and the school participation rates of those children, as well as of children who were not given M, are monitored. Suppose children without M go to school half the time and are out of school the other half. The question is: is there a systematic difference in the behaviour of children given M?

I shall now deliberately construct an underlying model whereby there will be no causal link between M and P. Suppose Nature does the following. For each child, whether or not the child has had M, Nature tosses a coin. If it comes out tails the child does not go to school and if it comes out heads, the child goes to school regularly.

Consider a village and an RCT researcher in the village. What is the probability, p, that she will find that all 10 children given M will go to school regularly? The answer is clearly

p = (1/2)^10 = 1/1024 ≈ 0.001

because we have to get heads for each of the 10 tosses for the 10 children.
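As a quick check, this arithmetic can be confirmed by simulation. Here is a minimal Python sketch (the setup and function names are illustrative, not Basu's) of Nature's coin toss for the 10 treated children in one village:

    import random

    # Nature's null model: each child attends school iff a fair coin
    # lands heads, whether or not the child received M.
    def all_ten_attend(n_children=10):
        return all(random.random() < 0.5 for _ in range(n_children))

    trials = 1_000_000
    hits = sum(all_ten_attend() for _ in range(trials))
    print(hits / trials)   # close to (1/2)**10 = 1/1024, about 0.00098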

Now consider n researchers in n villages. What is the probability that in none of these villages will a researcher find that all 10 children given M go to school regularly? Clearly, the answer is (1 - p)^n.

Hence, if w(n) is used to denote the probability that among the n villages where the experiment is done, there is at least one village where all 10 tosses come out heads, we have:

w(n) = 1 - (1 - p)^n.

Check now that if n = 1, that is, if the experiment is done in only one village, the probability that all 10 children administered M will participate in school regularly is w(1) = p ≈ 0.001. In other words, the likelihood is negligible.
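In code, the formula is a one-liner; a minimal sketch, reusing the p computed above:

    # Probability that at least one of n villages finds all 10 treated
    # children attending regularly, under the null model above.
    p = 0.5 ** 10                 # = 1/1024

    def w(n):
        return 1 - (1 - p) ** n

    print(round(w(1), 4))         # 0.001: negligible in a single village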

It is easy to check that the following are true:

w(100) = 0.0931,
w(1000) = 0.6236,
w(10 000) = 0.9999.

Therein lies the catch. If the experiment is done in 100 villages, the probability that there exists at least one village in which all tosses result in heads is still very small, less than 0.1. But if there are 1000 experimenters in 1000 villages doing this, the probability that there will exist one village where it will be found that all 10 children administered M will participate regularly in school is 0.6236. That is, it is more likely that such a village will exist than not. If the experiment is done in 10 000 villages, the probability of there being one village where M always leads to P is a virtual certainty (0.9999).
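These values follow directly from the formula, and a brute-force simulation of the n = 1000 case lands on the same answer. A rough sketch (illustrative, and deliberately naive for transparency):

    import random

    p = 0.5 ** 10
    for n in (100, 1_000, 10_000):
        print(n, round(1 - (1 - p) ** n, 4))    # 0.0931, 0.6236, 0.9999

    # Monte Carlo check of w(1000): in one draw, do any of 1000 villages
    # come up all heads for their 10 treated children?
    def one_perfect_village(n_villages=1_000, n_children=10):
        return any(all(random.random() < 0.5 for _ in range(n_children))
                   for _ in range(n_villages))

    runs = 10_000
    print(sum(one_perfect_village() for _ in range(runs)) / runs)   # about 0.62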

This is, of course, a specific example. But that this problem will invariably arise follows from the fact that

lim(n → ∞) w(n) = lim(n → ∞) [1 - (1 - p)^n] = 1.

Given that those who find such a compelling link between M and P will be able to publish their paper and others will not, we will get the impression that a true causal link has been found, though in this case (since we know the underlying process) we know that that is not the case. With 10 000 experiments, it is close to certainty that someone will find a firm link between M and P. Hence, the finding of such a link shows nothing but the laws of probability being intact. Yet, thanks to the propensity of journals to publish the presence rather than the absence of “causal” links, we get an illusion of knowledge and discovery where there are none.

One practical implication of this observation is that it spells out the urgent need for a Journal of Failed Experiments … Such a journal, there can be little doubt, will have a sobering effect on economics, making evident where the presence of a result is likely to be a pure statistical artefact.

Kaushik Basu

2 Comments

  1. Economists routinely publish hocus-pocus that can’t be internally replicated, let alone externally validated. As for meta-analysis (a sketch of the calculation follows these comments):

    “Meta-analysts deal with publication bias by making the ‘file-drawer’ calculation: how many studies would have to be withheld from publication to change the outcome of the meta-analysis from significant to insignificant? Typically, the number is astronomical. This is because of a crucial assumption in the procedure — that the missing estimates are centred on zero. The calculation ignores the possibility that studies with contrarian findings — significant or insignificant — are the ones that have been withheld. There is still another possibility, which is ignored by the calculation: study designs may get changed in midstream if results are going the wrong way.”

    — David Freedman, Statistical Models and Causal Inference, p. 42 n. 10

  2. Remedies exist for the file-drawer problem: replication and meta-analysis. I would be interested to hear how often these methods are used in economics.
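To make Freedman's point concrete, here is a rough sketch of the standard version of the file-drawer calculation he describes, Rosenthal's fail-safe N; the study numbers below are invented for illustration:

    # Rosenthal's fail-safe N: how many unpublished studies averaging
    # z = 0 would be needed to pull the combined (Stouffer) z-score
    # below the one-tailed 5% threshold?
    def failsafe_n(z_scores, z_crit=1.645):
        total = sum(z_scores)
        # With x extra null studies the combined z is total / sqrt(k + x);
        # set that equal to z_crit and solve for x.
        return max(0.0, (total / z_crit) ** 2 - len(z_scores))

    # e.g. 20 published studies, each just significant at z = 2.0
    print(failsafe_n([2.0] * 20))   # about 571 hidden null studies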

