Not really, say two studies recently published in JAMA and PLoS Medicine that scrutinized studies in various clinical trial registries.
The idea of trial registries is that researchers provide all the details of their study – such as the number of patients they need to recruit and the primary outcome (e.g. death or heart attack) – before they start the study. Then, once the study has been completed and published, other people can refer to the record in the registry to see whether the pre-specified protocol was followed – whether the researchers really did what they said they were going to do, the way they said they were going to do it.
The JAMA study compared published articles with their entries in clinical trial databases and found that less than half of the 323 published articles in their sample were adequately registered and nearly a third weren’t registered at all. Shockingly, 31% of adequately registered trials published different outcomes from the outcomes registered on the clinical trial databases – that is, the authors “changed their mind” about what they were researching at some point in the process.
The authors of the PLoS paper did their research the other way around and looked at whether trials in ClinicalTrials.gov, the registry set up by the US National Institutes of Health and the Food and Drug Administration, went on to be published. Fewer than half of the 677 trials they studied were eventually published. Trials sponsored by industry and, interestingly, trials funded by the government were less likely to be published than those funded by nonindustry or nongovernmental sources.
Why is this important? Imagine a group of researchers are looking into whether a new drug reduces the incidence of heart attack (the primary outcome). They spend thousands of pounds recruiting patients and studying them for years and years, then find out that their drug doesn’t prevent heart attack at all but is very helpful in patients with migraine. The researchers could then decide to change the primary outcome of their study from “incidence of heart attack” to “incidence of migraine”, even though the trial had been intricately designed to look at the former, not the latter. Their statistics and results would be all out of whack and frankly unreliable, but they could still go ahead and market their blockbuster drug. Researchers could get away with this if they hadn’t put their trial details in a registry before they started.
Now imagine that the wonder cardiovascular drug our shady researchers are investigating has no effect whatsoever on heart attack, so the researchers simply decide not to publish their results. Other researchers looking at very similar drugs could plod on for ages with their own experiments to no avail, because they had no idea that someone else had already shown the drug was useless. A more disconcerting possibility is that a drugs company could find out its key money spinner has a nasty side effect but decide to bury the research showing this and never publish it. This is known as publication bias. Researchers, funding bodies and editors are all more interested in publishing studies that find something interesting than in ones that show nothing at all – where’s the headline in “candidate drug doesn’t do much”? When it comes to drugs that are already on the market, though, knowing the situations in which a drug doesn’t work is just as important as knowing when it does.
And from a patient perspective? Imagine your doctor has been prescribing you a drug that should help your stomach ulcers. What he or she doesn’t know is that an unpublished trial somewhere says the drug will also dangerously thin your blood. If the doctor has no idea this trial ever existed – it wasn’t registered when the trial kicked off and was never published – or if the negative results were masked by the paper reporting a different primary outcome to that in a registry, he or she will continue prescribing you a drug with risky side effects. The example of the arthritis drug Vioxx springs to mind…
These are all quite dramatic examples, but they illustrate why we need to know about trials at their inception rather than only once they’re published – and, therefore, why full use of clinical trial registries is important.