The above footnote in Nancy Cartwright’s Hunting Causes and Using Them: Approaches in Philosophy and Economics* piqued my interest. She refers to John Worrall’s paper “What evidence in evidence-based medicine?”** Much of my work involves randomized trials of international development projects, so the argument interested me.
Ultimately, Worrall makes some very good points (yes, other evidence has validity as well) but I don’t find his critiques as convincing as he does.
Greatest hits below the fold…
It is widely believed in the medical profession that the only really scientifically “valid” evidence to be obtained from clinical trials is that obtained from trials employing randomized controls.
He gives a bunch of supporting quotes.
Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the “gold standard” for judging whether a treatment does more good than harm.
He then summarizes the four reasons usually given for randomizing and seeks to poke some holes in each:
1. “Fisher argued that the logic of the classical statistical significance test requires randomization.”
I just report first that it is not in fact clear that the argument is convincing even on its own terms [gives some cites in the footnote]; and secondly that there are, of course, many – not all of them card-carrying Bayesians – who regard the whole of classical significance-testing as a broken-backed enterprise and hence who would not be persuaded of the need for randomization even if it had been convincingly shown that the justification for a significance test presupposes randomization.
I have no response to that.
2. “Randomization ‘controls for all variables, known and unknown.'”
The critiques here are (a) that randomization actually just makes it highly likely that known and unknown variables will be balanced – to which I say: fine, most sensible researchers admit that from the outset; and (b)
Even if there is only a small probability that an individual factor is unbalanced, given that there are indefinitely many possible confounding factors, then it would seem to follow that the probability that there is some factor on which the two groups are unbalanced (when remember randomly constructed) might for all we know be high.
Okay: neither of these seems a particular reason to toss the baby out with the bathwater, despite Worrall’s later claim that “the idea that randomization controls all at once for known and unknown factors (or even that it “tends” to do so) is a will-o’-the-wisp.”
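Critique (b) is easy to see in a quick simulation – my own sketch, not anything from Worrall’s paper, with all the numbers made up for illustration. Give every subject hundreds of independent coin-flip traits and split the subjects randomly into two arms: each individual trait is probably balanced, but the chance that *some* trait is noticeably unbalanced is essentially certain.

```python
import random

random.seed(0)

N = 200   # subjects, split 100/100 (hypothetical numbers)
K = 500   # hypothetical independent binary covariates ("known and unknown factors")

# Randomize: shuffle the subjects and take the first half as the treatment group.
subjects = list(range(N))
random.shuffle(subjects)
treat = set(subjects[:N // 2])

# Standard error of a difference in proportions (p = 0.5, 100 per arm) is
# sqrt(0.25/100 + 0.25/100) ≈ 0.07, so 0.14 is roughly a two-SE imbalance.
THRESHOLD = 0.14

unbalanced = 0
for _ in range(K):
    cov = [random.random() < 0.5 for _ in range(N)]  # one independent coin-flip trait
    t_mean = sum(cov[i] for i in treat) / (N // 2)
    c_mean = sum(cov[i] for i in range(N) if i not in treat) / (N // 2)
    if abs(t_mean - c_mean) > THRESHOLD:
        unbalanced += 1

print(f"{unbalanced} of {K} covariates differ between arms by more than two SEs")
```

With these numbers you should see a couple dozen “unbalanced” covariates – about the 5% you’d expect by chance – which is exactly Worrall’s point that some factor is very likely unbalanced, and exactly why sensible researchers only claim balance in expectation.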
3. Selection bias
My formulation: If doctors choose which treatment to give, then they could – for example – assign all the most desperate patients to the experimental treatment, making a poor comparison indeed.
Notice however that randomization as a way of controlling for selection bias is very much a means to an end, rather than an end in itself. What does the methodological work here is really the blinding [ME: not letting doctors choose where to assign patients] – randomization is simply one method of achieving this.
Okay… but randomization is a good way.
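To show what’s at stake, here’s a toy simulation (again mine, not from the paper, with invented numbers): a treatment whose true effect is exactly zero looks harmful when doctors assign the sickest patients to it, while a coin flip recovers the truth.

```python
import random

random.seed(1)

N = 10_000
# Baseline severity in [0, 1]; higher = sicker. Outcome tracks severity plus
# noise, and the treatment has a true effect of exactly zero by construction.
severity = [random.random() for _ in range(N)]
outcome = [s + random.gauss(0, 0.1) for s in severity]

def mean(xs):
    return sum(xs) / len(xs)

# Doctor-driven assignment: the sickest half get the experimental treatment.
doc_treated = [outcome[i] for i in range(N) if severity[i] > 0.5]
doc_control = [outcome[i] for i in range(N) if severity[i] <= 0.5]
print("doctor-assigned effect:", mean(doc_treated) - mean(doc_control))

# Randomized assignment: a coin flip per patient.
flips = [random.random() < 0.5 for _ in range(N)]
rand_treated = [outcome[i] for i in range(N) if flips[i]]
rand_control = [outcome[i] for i in range(N) if not flips[i]]
print("randomized effect:", mean(rand_treated) - mean(rand_control))
```

The doctor-assigned comparison shows a large spurious “effect” (around 0.5 here) and the randomized one is near zero. Worrall’s point stands – any allocation rule the doctors can’t game would do the same work – but the coin flip is an awfully convenient one.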
4. “Observational studies are ‘known’ to exaggerate treatment effects.”
He points out many studies which overturn earlier studies, suggesting that observational studies are not biased in any predictable way relative to randomized studies. (And that in some contexts, randomized studies of the same treatment have shown less consistency than observational studies: he interprets this as evidence against randomized trials. I see other possible explanations, including publication bias in observational studies.)
He then goes on to tell a compelling true story of a new treatment for neonatal hypertension: observational evidence made it clear that it worked oodles better than conventional treatment, but researchers ran multiple randomized trials anyway – arguably unethically, because only randomized trials were granted a claim to truth.
Point taken. His conclusion:
No solid grounds seem to have been provided for the automatic downgrading of “observational studies” – the undoubted fact that there have been such studies that are significantly methodologically flawed does not, of course, imply that all such studies are methodologically flawed (and, in any case, no solid reason has been given for thinking of RCTs as miracle methodological flaw-removers).
I still suspect that randomized controlled trials are less skill-dependent than observational studies (not non-skill-dependent, just less so): i.e., they’re harder to turn into complete nonsense. But I’ll get back to you if I ever get through Cartwright.
* Cartwright’s back-page blurb claims the book is “for anyone who wants to understand what causality is and what it is good for,” but that anyone had best already have a working knowledge of phrases like “Bayes-nets methods” and “modularity accounts,” for Nancy will not offer any help.
** Philosophy of Science, 69 (September 2002), pp S316-330. Free copy here.