What kind of evidence do RCTs provide?

6 May, 2021 at 14:01 | Posted in Theory of Science & Methodology | 3 Comments

[Cartoon: “Randomized Control Trials Feed Our Fetish for Single-Focus Interventions” (ICTworks)]

Perhaps it is supposed that the assumptions for an RCT are generally more often met (or meetable) than those for other methods. What justifies that? Especially given that the easiest assumption to feel secure about for RCTs—that the assignment is done “randomly”—is far from enough to support orthogonality, which is itself only one among the assumptions that need support. I sometimes hear, “Only the RCT can control for unknown unknowns.” But nothing can control for unknowns that we know nothing about. There is no reason to suppose that, for a given conclusion, the causal knowledge that it takes to stop post-randomization correlations in an RCT is always, or generally, more available or more reliable than the knowledge required for one or another of the other methods to be reliable.

It is also essential to be clear what the conclusion is. As with any study method, RCTs can only draw conclusions about the objects studied—for the RCT, the population enrolled in the trial, which is seldom the one we are interested in. The RCT method can of course be expanded to include among its assumptions that the trial population is a representative sample of the target. Then it follows deductively that the difference in mean outcomes between treatment and control groups is an unbiased estimate of the average treatment effect (ATE) in the target population. How often are we warranted in assuming that, though, and on what grounds? Without this assumption, an RCT is just a voucher for claims about any population except the trial population. What then justifies placing it above methods that are clinchers for claims we are really interested in—about target populations?

Nancy Cartwright
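
A minimal simulation can make the trial-vs.-target point concrete. The sketch below is illustrative only, with invented numbers and a single hypothetical binary effect modifier: random assignment makes the difference in means unbiased for the enrolled population’s ATE, yet that estimate can be far from the ATE of a target population with a different mix of modifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_estimate(p_modifier, n=100_000):
    """Difference in means from one randomized trial enrolled from a
    population where the treatment effect is 2.0 if M=1 and 0.0 if M=0."""
    m = rng.binomial(1, p_modifier, n)        # effect modifier M
    y0 = rng.normal(0.0, 1.0, n)              # potential outcome, control
    y1 = y0 + 2.0 * m                         # potential outcome, treated
    t = rng.binomial(1, 0.5, n).astype(bool)  # randomized assignment
    y = np.where(t, y1, y0)                   # observed outcome
    return y[t].mean() - y[~t].mean()

# Suppose the trial enrolls 80% modifier-positive patients,
# while the target population is only 20% modifier-positive.
print(f"trial estimate: {trial_estimate(0.8):.2f}")  # about 0.8 * 2.0 = 1.6
print(f"target ATE:     {0.2 * 2.0:.2f}")            # 0.4
```

Nothing in the randomization repairs that gap; only the added representativeness assumption, or explicit reweighting of the kind mentioned in the comments below, connects the trial estimate to the target.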

3 Comments

  1. What I find especially refreshing in Nancy’s JECL paper is her advocacy for causal pluralism. There is nothing wrong with using RCTs, quantitative counterfactual do-calculus, or potential outcome models per se — but they only give valid and ‘transportable’ results when the (often both epistemologically and ontologically) demanding causal assumptions underlying them are (more or less uncontroversially) justified and warranted. In some contexts — perhaps more in clinical than in economic — they may often be applicable. In others, they are not, and in those cases, we would probably profit from applying some of the other models and approaches that Nancy discusses in her paper.

    • For what it’s worth, I think your point about clinical vs economic is key, and often missed in these discussions. Randomized allocation of biological interventions in patients is a very different animal from randomized allocation of economic interventions in communities, certainly with respect to the degree of heroism needed to make assumptions about transportability.

  2. The first of the two quoted paragraphs is pretty much mistaken, insofar as it reflects a common failure to understand what treatment randomization is supposed to do (and often does): provide an uncertainty (“statistical”) distribution for residual confounding effects in the trial; for details see Senn “Seven Myths of Randomisation in Clinical Trials” https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.5713
    A more general way of putting it is that randomized treatment assignment is designed to create a perfect instrumental variable for treatment effects in the trial; the first sketch after this comment illustrates what that does and does not buy within a single trial.

    Interestingly, misunderstanding about the purpose and effect (and thus the justification) of randomization seems common among philosophers but not among clinicians, who more often evince the opposite mistake, which Cartwright rightly criticizes: the belief that randomization creates balance (“orthogonality”) on unmeasured covariates. And many statisticians foster this misconception when they refer to randomization as justifying an assumption of “no unmeasured confounding” – a locution based on ignoring the ordinary-language, common-sense meaning of confounding as mixing of effects (just as they similarly abuse words like “significance” and “confidence”). This is very much a case where the truth lies somewhere between the mistakes, and is more subtle than any of them.

    The second quoted paragraph is however on target: the Achilles heel of real RCTs is that they are done on carefully screened patient groups that rarely begin to approach what clinical-practice populations look like. And, all the worse for real practice, those randomized groups are usually selected (with ethical justification, BTW) based on expectations about who will be helped and not harmed by the treatment, much more intensely so than in real practice. That trial-vs.-practice discrepancy leads to the oft-observed overoptimism (relative to real practice) of RCT estimates of benefit and harm frequencies. So Cartwright’s RCT skepticism and the cartoon are justified after all! There is however a growing body of methodology on how best to project from trials to practice, much of it under the heading of “transportability” of causal effects; the second sketch below gives a toy version of one such device.
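
One way to see the point about what randomization actually delivers (a toy sketch in the spirit of Senn’s paper, not code from it): in any single trial an unmeasured covariate can be noticeably imbalanced, but over hypothetical repetitions of the assignment the imbalance has a known distribution, which is what the trial’s statistical uncertainty quantifies.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 10_000

u = rng.normal(size=n)               # an unmeasured covariate, held fixed
imbalance = np.empty(reps)
for i in range(reps):
    t = rng.permutation(n) < n // 2  # random half-and-half assignment
    imbalance[i] = u[t].mean() - u[~t].mean()

# A single trial can be noticeably imbalanced on U...
print("largest single-trial imbalance:", round(np.abs(imbalance).max(), 2))
# ...but the distribution of imbalances is known in advance:
print("empirical SD:", round(imbalance.std(), 3))
print("theoretical SD (about 2*sd(U)/sqrt(n)):",
      round(2 * u.std() / np.sqrt(n), 3))
```

So randomization does not create balance on unmeasured covariates; it licenses a probability statement about how unbalanced they can be.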
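
And for the projection problem in the last paragraph, here is a deliberately simplified sketch of one standard transportability device, inverse-probability-of-selection weighting. All names and numbers are invented, and real applications must estimate the selection model and work from observed outcomes rather than known effects.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

x = rng.normal(size=N)                # covariate, e.g. baseline severity
tau = 1.0 + 0.8 * x                   # individual treatment effect
p_sel = 1 / (1 + np.exp(-(x - 1.0)))  # P(enrolled | X): screens for high X
in_trial = rng.random(N) < p_sel      # the screened trial sample

trial_ate = tau[in_trial].mean()      # what the RCT estimates (optimistic)
target_ate = tau.mean()               # what real practice faces

# Reweight trial members by 1 / P(enrolled | X) to recover the target ATE.
# (Known here by construction; in practice it must be modeled.)
w = 1.0 / p_sel[in_trial]
transported = np.average(tau[in_trial], weights=w)

print(f"trial: {trial_ate:.2f}  target: {target_ate:.2f}  "
      f"transported: {transported:.2f}")
```

The gap between the first two numbers is exactly the trial-vs.-practice overoptimism described above; the reweighted estimate closes it, at the price of needing the selection model to be right.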

