Nature Clinical Practice Cardiovascular Medicine has recently published an interesting review article on clinical trial design – ‘From randomized trials to registry studies: translating data into clinical information’.
This isn’t a guide on how to read a clinical paper – if you need tips on that front, have a look at Prof Trisha Greenhalgh’s book ‘How to read a paper’ or the extracts published in the BMJ way back in 1997. Rather, the NCP Cardiovascular Medicine review examines different study designs and, interestingly, puts forward a case for observational studies as compared with randomized controlled trials.
Randomization – allocating patients to treatment or no treatment (or placebo) in an entirely indiscriminate manner, so that both known and unknown confounders are distributed evenly between groups – and control or placebo groups – patients who do not receive the intervention – are the benchmarks of a good clinical study: together they allow an investigator to isolate the effect of a treatment from confounding factors.
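To make the mechanism concrete, here is a minimal sketch of simple randomization in Python. The patient identifiers and the 1:1 allocation ratio are my own illustrative assumptions, not anything from the review; real trials typically use dedicated randomization services, stratification, or blocking rather than a bare shuffle.

```python
import random

def randomize(patient_ids, seed=None):
    """Allocate patients to treatment and control groups at random.

    Because the allocation ignores every patient characteristic, known
    and unknown confounders are expected (in a large enough sample) to
    be distributed evenly between the two groups.
    """
    rng = random.Random(seed)  # seed only for reproducibility in this sketch
    ids = list(patient_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]  # (treatment group, control group)

# Hypothetical example: eight patient identifiers, 1:1 allocation
treatment, control = randomize(range(8), seed=42)
# Every patient lands in exactly one group, and group sizes are equal
```

The point of the shuffle is that no investigator, patient, or baseline characteristic influences who gets the intervention – which is precisely what observational studies cannot guarantee.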
However, the NCP review argues:
“The results of observational studies are often dismissed in favor of prospective randomized studies because of the widely recognized biases inherent in observational studies. Yet such studies form the basis of much of the medical knowledge we have today. Accordingly, rather than dismiss information gained from observational studies, it is more appropriate to recognize these biases and their effect on results, and to modify interpretation appropriately. Indeed, from a practical standpoint, all studies sustain some form of bias, either implicitly or explicitly.”
In addition, the authors state:
“Strict inclusion and exclusion criteria mean that the results of randomized studies might not be as applicable to general populations as are findings from observational studies, including both clinical registries and retrospective reviews.”
The take-home message of the article is that practicing clinicians should analyze the patient population of a trial carefully before applying the findings to their own patients.
This paper also discusses statistical power and the use of surrogate and composite end points, the validity (or not) of post hoc analysis, and the utility of peer review for spotting trial-design pitfalls. But obviously I’m more interested in the iconoclastic view of randomized controlled trials…