Nearly a third of clinical trials don’t adequately report adverse events

A study published in Archives of Internal Medicine has found that almost a third of clinical trials reported in top medical journals don’t adequately report the side effects of the intervention being tested.

Pitrou et al. assessed the reporting of safety data in 133 randomised controlled trials published between January 2006 and January 2007 in five high-impact-factor medical journals: New England Journal of Medicine, Lancet, Journal of the American Medical Association, British Medical Journal and Annals of Internal Medicine. PLoS Medicine was included in the search for relevant papers, but no trials from this journal were assessed.

Although 88.7% of published trials mentioned the adverse effects of the study intervention – that is, the drug or non-pharmacological treatment being investigated – somewhere in the paper, 32.6% of all trials didn’t properly report the adverse event data. For example, 17 articles described only the most common adverse events, whereas 16 reported only severe adverse events.

Thirty-six articles (27.1%) did not give any information on the severity of the adverse events reported. In addition, 63 reports (47.4%) did not give any data on the number of patients who withdrew from the trial owing to adverse events.
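
As a quick sanity check – back-of-the-envelope arithmetic, not anything from the paper itself – those counts are consistent with the quoted percentages, given the 133-trial denominator:

```python
# Back-of-the-envelope check: do the counts quoted above match the
# percentages, given 133 trials in total? (Illustrative only.)
total_trials = 133

findings = {
    "no information on severity of adverse events": 36,
    "no data on withdrawals owing to adverse events": 63,
}

for label, count in findings.items():
    print(f"{count}/{total_trials} = {count / total_trials:.1%}  ({label})")
# 36/133 = 27.1% and 63/133 = 47.4%, matching the figures above
```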

So why is this research important?  As the authors say, “the reporting of harm is as important as the reporting of efficacy in publications of clinical trials.”  Insufficient reporting of side effects affects the interpretation of the trial results and distorts the picture of the drug for both clinicians and patients – the drug seems effective but without full adverse effect data no-one can properly assess the risks and benefits of using it.

Writing in an Editorial in the same issue of Arch Intern Med, John PA Ioannidis discusses this issue further. “Accurate information on harms of medical interventions is essential for evidence-based practice”, he says. “Most newly introduced treatments usually have small, incremental benefits, if any, against already available interventions, and differences in the profile of harms should play a key role on treatment choice.”

In addition, this research raises the issue of published reports focusing on the benefits of the intervention being investigated and playing down the negative aspects – selective reporting, a close cousin of the dreaded publication bias.

Guidelines like the Consolidated Standards of Reporting Trials (CONSORT) statement have been put together to try to make sure that researchers report their trials in a complete and transparent way. The CONSORT Statement is a set of 22 recommendations for reporting randomised controlled trials that provides a standard way for authors to prepare reports of trial findings, thus aiding critical appraisal and interpretation of the results.

Granted, the revised CONSORT statement was only published in 2001, and so it’s not entirely surprising that trial reporting wasn’t completely up to scratch in the 2006 papers analysed by Pitrou et al.

However, several journals, including BMJ, currently insist that authors fill in the CONSORT checklist and provide a flow chart before the paper can be accepted.  Let’s hope that researchers and publishers are now taking seriously the issue of thoroughly reporting adverse effects.

————————————————————————————
Pitrou I, Boutron I, Ahmad N & Ravaud P (2009) Reporting of safety results in published reports of randomized controlled trials. Archives of Internal Medicine 169 (19): 1756-61. PMID: 19858432

Do clinical trial registries reduce selective reporting of medical research?

Not really, say two recently published studies in JAMA and PLoS Medicine that scrutinized trials in various clinical trial registries.

The idea of trial registries is that researchers provide all the details of their study – such as the number of patients they need to recruit and the primary outcome (e.g. death or heart attack) – before they start the study. Once the study has been completed and published, anyone can refer to the record in the registry to see whether the pre-specified protocol was followed – whether the researchers really did what they said they were going to do, in the way they said they were going to do it.
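
To make that cross-check concrete, here’s a minimal sketch in Python – the field names, trial ID and values are entirely hypothetical, not drawn from any real registry entry:

```python
from dataclasses import dataclass

@dataclass
class RegistryRecord:
    """What the researchers said they would do, registered before the trial."""
    trial_id: str
    planned_enrolment: int
    primary_outcome: str

@dataclass
class PublishedReport:
    """What the published paper actually reports."""
    trial_id: str
    enrolled: int
    primary_outcome: str

def outcome_switched(record: RegistryRecord, report: PublishedReport) -> bool:
    """True if the published primary outcome differs from the registered one."""
    return record.primary_outcome.strip().lower() != report.primary_outcome.strip().lower()

# Hypothetical example of the kind of mismatch described below:
registered = RegistryRecord("NCT00000000", 500, "incidence of heart attack")
published = PublishedReport("NCT00000000", 480, "incidence of migraine")

if outcome_switched(registered, published):
    print("Published primary outcome differs from the registered protocol.")
```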

The JAMA study compared published articles with their entries in clinical trial databases and found that less than half of the 323 published articles in their sample were adequately registered and nearly a third weren’t registered at all. Shockingly, 31% of adequately registered trials published different outcomes from those registered on the clinical trial databases – that is, the authors “changed their mind” about what they were researching at some point in the process.

The authors of the PLoS paper did their research the other way around and looked at whether trials in ClinicalTrials.gov, the registry set up by the US National Institutes of Health and the Food and Drug Administration, went on to be published.  Less than half of the 677 trials they studied were eventually published.  Trials sponsored by industry and, interestingly, trials funded by the government were less likely to be published than those funded by nonindustry or nongovernmental sources.

Why is this important? Imagine a group of researchers are looking into whether a new drug reduces the incidence of heart attack (the primary outcome). They spend thousands of pounds recruiting patients and studying them for years and years, then find out that their drug doesn’t prevent heart attack at all but is very helpful in patients with migraine. The researchers could then decide to change the primary outcome of their study from “incidence of heart attack” to “incidence of migraine”, even though the trial had been intricately designed to look at the former, not the latter. Their statistics and results will be all out of whack and frankly unreliable, but they could still go ahead and market their blockbuster drug. Researchers can get away with this if they haven’t put their trial details in a registry before starting.

Imagine that the wonder cardiovascular drug our shady researchers are investigating has no effect whatsoever on heart attack, so the researchers just decide not to publish their results. Other researchers looking at very similar drugs could plod on for ages with their own experiments to no avail because they had no idea that someone else had already shown the drug was useless. A more disconcerting possibility is that a drugs company could find out their key money spinner has a nasty side effect but decide to bury the research showing this and never publish it.  This is known as publication bias. Researchers, funding bodies and editors are all more interested in publishing studies that find something interesting rather than ones that show nothing at all – where’s the headline in “candidate drug doesn’t do much”? When it comes to drugs that are already on the market though, knowing the situations in which the drug doesn’t work is just as important as knowing when it does work.
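
To see how that skews the evidence base, here’s a toy simulation – my own illustration, not from either paper. We simulate many small trials of a drug that truly does nothing, “publish” only the flattering results, and compare the averages:

```python
import random
import statistics

random.seed(42)

def run_trial(n_patients=100, true_effect=0.0):
    """Mean observed effect in one simulated trial of a drug that does nothing."""
    return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n_patients))

all_trials = [run_trial() for _ in range(1000)]

# Publication bias: only trials that look impressive get written up.
published = [effect for effect in all_trials if effect > 0.1]

print("true effect of the drug:     0.0")
print(f"average over all trials:     {statistics.mean(all_trials):+.3f}")
print(f"average over published only: {statistics.mean(published):+.3f} "
      f"({len(published)} of {len(all_trials)} trials)")
```

The “published” subset shows an apparent benefit even though the drug does nothing at all – exactly the distortion that registries, by recording every trial at its start, are meant to expose.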

And from a patient perspective? Imagine your doctor has been prescribing you a drug that should help your stomach ulcers. What your doctor doesn’t know is that an unpublished trial somewhere says the drug will also dangerously thin your blood. If your doctor has no idea this trial ever existed – it wasn’t registered when the trial kicked off and was never published – or the negative results were masked by the paper reporting a different primary outcome to that in a registry, he or she will continue prescribing you a drug with risky side effects. The example of arthritis drug Vioxx springs to mind…

These are all quite dramatic examples but illustrate why we need to know about trials at their inception rather than once they’re published and, therefore, why full use of clinical trial registries is important.
