What’s in placebos? No one’s telling…

Placebos – the inert substances taken by control groups in clinical trials – are often assumed to be harmless sugar pills or something along those lines. New research has found that it's often impossible to know what's in placebos, because there's precious little documentation of exactly what is used in clinical trials.

Out of 176 research studies published in four of the biggest international medical journals, only about one in five fully disclosed the composition of the placebo treatment. This lack of transparency means all sorts of substances could be in use, some of which might have physiological effects of their own that compromise the validity of findings on the study drug.

Placebo controlled clinical trials investigate the effects of a particular drug on a disease by comparing people who receive the treatment against patients receiving a placebo, which looks, smells, and tastes the same as the study drug but has no active ingredients. This design accounts for the placebo effect – the possibility that people in a trial experience a health benefit simply because they are taking something they believe to be a drug, rather than because the study drug itself is effective.

In this study, the authors searched for randomised, placebo controlled trials published from January 2008 to December 2009 in four top medical journals – New England Journal of Medicine, JAMA, The Lancet, and Annals of Internal Medicine. A total of 176 trials were eligible for inclusion in the study – 86 studies of placebo pills, 65 studies of placebo injections, and 25 studies of other treatment methods (for example, nasal spray).

Only 40 (23%) of the 176 trials studied fully disclosed the composition of the placebo treatment, and 120 (68%) did not disclose any information on the placebo at all. The remainder partially disclosed what was in the placebo treatment.

Less than one in 10 (9.3%) studies that used pills disclosed the placebo, compared with 33.8% of studies that used injections and 40.0% that assessed other treatments. When papers that referred to previous publications for their primary findings or study design were excluded, these figures fell to 8.24%, 26.3%, and 27.8%, respectively.

By not paying attention to what’s in the placebo, researchers could be burying cases where the placebo has some sort of effect that’s similar to the effect of the study drug. The authors cite the example of trials of cholesterol lowering drugs that use olive oil and corn oil as the placebo. The monounsaturated and polyunsaturated fatty acids of these “placebos,” and their antioxidant and anti-inflammatory effects, could potentially reduce lipid levels and heart disease, just like the study drug, causing researchers to underestimate the effect of the cholesterol lowering drug.

“A positive or negative effect of the placebo can lead to the misleading appearance of a negative or positive effect of the drug,” author Beatrice Golomb, associate professor of medicine at the University of California, San Diego School of Medicine, told Science Daily. “And an effect in the same direction as the drug can lead a true effect of the drug to be lost. These concerns aren’t just theoretical. Where the composition has been disclosed, the ingredients of the placebo have in some instances had a likely impact on the result of the study – in either direction (obscuring a real effect, or creating a spurious one). In the cases we know about, this is not because of any willful manipulation, but because it can in fact be difficult to come up with a placebo that does not have some kind of problem.”
The authors highlight what a huge effect this lack of transparency regarding placebos could have on medical research. “Because inferences from clinical trials propagate to clinical practice, failure to report placebo composition compromises the foundation on which medical decisions are based, and on which the fate of lives may rest,” they write.

————————————————————————————————-
Golomb BA et al. (2010) What’s in placebos: who knows? Analysis of randomized, controlled trials. Annals of Internal Medicine 153 (8): 532-5 PMID: 20956710


Arch Intern Med roundup: diets, delays and disclosure

The journal Archives of Internal Medicine has several cracking research papers this week.

Low carb dieters are grumpier than those on low fat diets

First up is Brinkworth et al.‘s research on the long-term psychological effects of low carbohydrate diets compared with low fat diets.

In this study, 106 overweight and obese individuals were randomly assigned to receive a low carbohydrate, high fat diet or a high carbohydrate, low fat plan. After one year, those participants on the low carbohydrate diet were more likely to be anxious, depressed, angry or confused than were those on the low fat diet. Both diets had the same number of calories and produced a similar amount of weight loss (13.7kg).

The authors suggest that the social difficulty of adhering to a low carbohydrate plan, which runs counter to the typical Western diet full of pasta and bread, may be partly responsible for the mood deterioration in the low carb group. Alternatively, protein and fat intake may differently affect brain levels of serotonin, the so-called "happy hormone" (NB: it's a neurotransmitter, not a hormone).

The Daily Telegraph points out that the infamous meat-heavy Atkins diet is essentially a low carb, high fat plan – bad news for all the celebrity fans.  Suddenly the term “hangry” makes more sense…

——————-
Brinkworth GD, Buckley JD, Noakes M, Clifton PM, & Wilson CJ (2009) Long-term Effects of a Very Low-Carbohydrate Diet and a Low-Fat Diet on Mood and Cognitive Function. Arch Intern Med 169 (20): 1873-1880.

Fewer emergency department patients than ever are being seen on time

Next is Horwitz and Bradley's paper on wait times to see a doctor in US emergency departments. The authors assessed more than 150,000 visits and found that roughly one in four patients were not seen within the target triage time in 2006, compared with about one in five in 1997. By 2006, the odds of being seen on time were 30% lower than in 1997.

Interestingly, the proportion of patients seen on time did not differ on the basis of insurance status or race/ethnicity.  As the LA Times put it, “The conventional wisdom that throngs of low-income, uninsured people who use the ER as a substitute for primary care visits are to blame is wrong.”

Instead, the change in wait times was driven by delays in attending to emergency cases, who were 87% less likely to be seen within the target time than nonurgent cases.

As the authors say, "The percentage of patients in the emergency department who are seen by a physician within the time recommended … is at its lowest point in at least 10 years."

——————-
Horwitz LI & Bradley EH (2009) Percentage of US Emergency Department Patients Seen Within the Recommended Triage Time: 1997 to 2006. Arch Intern Med 169 (20): 1857-1865.

GP visits are getting longer and better

Times are also getting longer in primary care, but here it's not the wait that's growing – it's the time patients spend with their doctor, according to Chen and colleagues.

Visits by adults to primary care physicians in the US increased from 273 million annually in 1997 to 338 million in 2005. During this period, the mean duration of a visit increased from 18.0 minutes to 20.8 minutes. Visit duration increased the most – by 5.9 minutes – for people with any form of arthritis.

The increase in time spent with physicians seemed to be down to doctors spending longer counselling their patients. Visits for counselling or screening generally took 2.6-4.2 minutes longer than visits in which patients did not receive these services, whereas there was no change in the duration of visits in which doctors simply provided medication.

“Although it is possible that physicians have become less efficient over time, it is far more likely that visit duration has increased because it takes more resources or time to care for an older and sicker population,” the authors conclude. These findings thus contradict the belief that doctors are shaving time off consultations to meet efficiency goals, says the Wall Street Journal.

————–
Chen LM, Farwell WR, & Jha AK (2009) Primary Care Visit Duration and Quality: Does Good Care Take Longer? Arch Intern Med 169 (20): 1866-1872.

Patients rate care better if doctors disclose mistakes

Finally, López et al. looked at how health professionals' disclosure of adverse events – injuries caused by some aspect of medical care rather than by the underlying medical condition – affects patient perceptions of care. They found that, among patients who experienced an adverse event in hospital, those whose doctor told them about the event were more likely to rate their care highly than patients whose caregivers did not address the problem.

A total of 845 adverse events were reported in this sample of almost 2,600 acute care adult patients, but only 40% of these were disclosed. However, disclosure of preventable and nonpreventable events was associated with high ratings of quality of care. In addition, patients who felt that they were able to protect themselves from adverse events were likely to rate their care favourably.

On the other hand, patients who experienced medical accidents that were preventable, caused increased discomfort, or continued to negatively affect the patient for some time after the event tended to rate their care poorly.

“Although rates of disclosure remain disappointingly low, our findings should encourage more disclosure and allay fears of malpractice lawsuits,” say the authors. “Patients want to be told the truth, and they perceive disclosure as integral to high-quality medical care.”

———————-
López L, Weissman JS, Schneider EC, Weingart SN, Cohen AP, & Epstein AM (2009) Disclosure of Hospital Adverse Events and Its Association With Patients' Ratings of the Quality of Care. Arch Intern Med 169 (20): 1888-1894.


Nearly a third of clinical trials don’t adequately report adverse events

A study published in Archives of Internal Medicine has found that almost a third of clinical trials reported in top medical journals don't adequately report the side effects of the intervention being tested.

Pitrou et al. assessed the reporting of safety data in 133 randomised controlled trials published between January 2006 and January 2007 in five high impact factor medical journals: New England Journal of Medicine, Lancet, Journal of the American Medical Association, British Medical Journal and Annals of Internal Medicine. PLoS Medicine was included in the search for relevant papers, but no trials from this journal were assessed.

Although 88.7% of the published trials mentioned the adverse effects of the study intervention – that is, the drug or non-pharmacological treatment being investigated – at some point, 32.6% of all trials didn't properly report the adverse events data. For example, 17 articles only provided a description of the most common adverse events, whereas 16 reported just severe adverse events.

Thirty-six articles (27.1%) did not give any information on the severity of the adverse events reported. In addition, 63 reports (47.4%) did not give any data on the number of patients who withdrew from the trial owing to adverse events.

So why is this research important?  As the authors say, “the reporting of harm is as important as the reporting of efficacy in publications of clinical trials.”  Insufficient reporting of side effects affects the interpretation of the trial results and distorts the picture of the drug for both clinicians and patients – the drug seems effective but without full adverse effect data no-one can properly assess the risks and benefits of using it.

Writing in an Editorial in the same issue of Arch Intern Med, John PA Ioannidis discusses this issue further. “Accurate information on harms of medical interventions is essential for evidence-based practice”, he says. “Most newly introduced treatments usually have small, incremental benefits, if any, against already available interventions, and differences in the profile of harms should play a key role on treatment choice.”

In addition, this research raises the issue of reported research focusing on the benefits of the intervention being investigated and playing down the negative aspects – the dreaded publication bias.

Guidelines like the Consolidated Standards of Reporting Trials (CONSORT) statement have been put together to try to make sure that researchers report their trials in a complete and transparent way. The CONSORT Statement is a set of 22 recommendations for reporting randomised controlled trials that provides a standard way for authors to prepare reports of trial findings, thus aiding critical appraisal and interpretation of the results.

Granted, the revised CONSORT statement was only published in 2001, so it's not entirely surprising that trial reporting wasn't completely up to scratch in the 2006 papers analysed by Pitrou et al.

However, several journals, including BMJ, currently insist that authors fill in the CONSORT checklist and provide a flow chart before the paper can be accepted.  Let’s hope that researchers and publishers are now taking seriously the issue of thoroughly reporting adverse effects.

————————————————————————————
Pitrou I, Boutron I, Ahmad N & Ravaud P (2009) Reporting of safety results in published reports of randomized controlled trials. Archives of Internal Medicine 169 (19): 1756-61. PMID: 19858432


Switching from paper to patient – taking part in a clinical trial

I make a living reading clinical research papers and am familiar with the big picture of clinical trials – papers published, guidelines amended and practice improved.  Grassroots clinical research – the work of doctors, nurses and patients undertaking a trial – has always seemed like a million miles away to me.

However, I’m hoping to get a new perspective on the nuts and bolts of how clinical research is conducted as my Dad is currently taking part in a huge rheumatoid arthritis trial – the TRACE RA trial.  This study is investigating whether heart drugs – statins – reduce the risk of heart attack and stroke in people with rheumatoid arthritis.

People with rheumatoid arthritis are at higher risk of cardiovascular disease than the general population and are thus more likely to have a fatal heart attack or stroke.  This increased risk is thought to be due to a higher incidence of atherosclerosis in patients with rheumatoid arthritis – the inflammation that attacks the joints in such people is thought to also affect the lining of their blood vessels.

Statins reduce “cardiovascular disease events” and mortality in high risk populations, largely through lowering cholesterol but also possibly through reducing inflammation.  We don’t know whether statins are beneficial in rheumatoid arthritis though, as people with this highly inflammatory condition are usually excluded from statin trials.

The TRACE RA trial is a prospective, 5-year, multicentre, randomised, double blind, placebo-controlled study that will assess the hypothesis that a statin is more effective than a placebo in the primary prevention of cardiovascular events in patients with rheumatoid arthritis.

Up to 4,000 people over the age of 50 who have had rheumatoid arthritis for at least 10 years are being randomised to receive either the statin atorvastatin or placebo daily. The patients in the trial will be followed up for up to 7 years to see if those on the statin are less likely to have a cardiovascular event than those on the placebo.

My Dad joined the trial quite recently and is currently going through his initial follow-up visits, which take place at 3, 6 and 12 months. At each visit he gives a blood sample and fills in a questionnaire, which he showed me last time I visited. Dad was a bit concerned about the questionnaire, as he was being asked quite dramatic things like whether he was able to dress himself or cut up his own food. Thankfully his arthritis is well controlled and he doesn't have any mobility problems, so he can answer "no" to most of the questions; I'm guessing other trial participants aren't so lucky.

The questionnaire he has to fill in is a validated tool for assessing functional disability called the Health Assessment Questionnaire (HAQ).  I’ve come across the questionnaire when reading rheumatology papers – it’s used in patients with a wide variety of rheumatic diseases including rheumatoid arthritis, osteoarthritis, lupus, and ankylosing spondylitis – so I was intrigued to get a proper look at it.

Given how much I go on about clinical research, Dad was keen to be involved in a trial himself and hopefully contribute a small part to improving treatment for rheumatoid arthritis.  There’s also a history of heart disease in my family, so (potentially) receiving a statin when he otherwise wouldn’t be on the list to do so could prove doubly beneficial for Dad.

I’m looking forward to following my Dad’s progress in the trial and reading the first published paper.  The design of the trial seems pretty solid so any positive findings could have considerable implications for how patients with rheumatoid arthritis are treated.  And my Dad – subject number whatever out of n=4000 or so – is playing his own little part.


Do clinical trial registries reduce selective reporting of medical research?

Not really, say two studies recently published in JAMA and PLoS Medicine that scrutinized studies in various clinical trial registries.

The idea of trial registries is that researchers provide all the details of their study – such as the number of patients they need to recruit and the primary outcome (e.g. death or heart attack) – before they start the study. Then once the study has been completed and published, other people can refer to the record in the registry to see whether the pre-specified protocol has been followed and if the researchers have really done what they said they were going to do and how they said they were going to do it.

The JAMA study compared published articles with their entries in clinical trial databases and found that less than half of the 323 published articles in their sample were adequately registered and nearly a third weren’t registered at all.  Shockingly, 31% of adequately registered trials published different outcomes from the outcomes registered on the clinical trial databases – that is, the authors “changed their mind” about what they were researching at some point in the process.

The authors of the PLoS paper did their research the other way around and looked at whether trials in ClinicalTrials.gov, the registry set up by the US National Institutes of Health and the Food and Drug Administration, went on to be published.  Less than half of the 677 trials they studied were eventually published.  Trials sponsored by industry and, interestingly, trials funded by the government were less likely to be published than those funded by nonindustry or nongovernmental sources.

Why is this important? Imagine a group of researchers are looking into whether a new drug reduces the incidence of heart attack (the primary outcome). They spend thousands of pounds recruiting patients and studying them for years and years, then find out that their drug doesn't prevent heart attacks at all but is very helpful in patients with migraine. The researchers could then decide to change the primary outcome of their study from "incidence of heart attack" to "incidence of migraine", even though the trial had been designed to look at the former, not the latter. Their statistics and results will be all out of whack and frankly unreliable, but they could still go ahead and market their blockbuster drug. Researchers can only get away with this if they don't put their trial details in a registry before they start.
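
A toy Python sketch shows the kind of check a public registry record makes possible; the trial identifier, outcomes and numbers below are invented for illustration and don't come from any real registry entry.

```python
# Hypothetical registry entry and published report; all values are invented.
registered = {
    "trial_id": "NCT00000000",                      # placeholder identifier
    "primary_outcome": "incidence of heart attack",
    "planned_sample_size": 4000,
}
published = {
    "trial_id": "NCT00000000",
    "primary_outcome": "incidence of migraine",     # outcome quietly switched
    "analysed_patients": 3120,
}

# Because the protocol details were logged before the trial started, anyone
# can compare the published paper against the registered record.
if published["primary_outcome"] != registered["primary_outcome"]:
    print("Outcome switching: registered",
          repr(registered["primary_outcome"]),
          "but published", repr(published["primary_outcome"]))
```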

Imagine that the wonder cardiovascular drug our shady researchers are investigating has no effect whatsoever on heart attack, so the researchers just decide not to publish their results. Other researchers looking at very similar drugs could plod on for ages with their own experiments to no avail because they had no idea that someone else had already shown the drug was useless. A more disconcerting possibility is that a drugs company could find out their key money spinner has a nasty side effect but decide to bury the research showing this and never publish it.  This is known as publication bias. Researchers, funding bodies and editors are all more interested in publishing studies that find something interesting rather than ones that show nothing at all – where’s the headline in “candidate drug doesn’t do much”? When it comes to drugs that are already on the market though, knowing the situations in which the drug doesn’t work is just as important as knowing when it does work.

And from a patient perspective? Imagine your doctor has been prescribing you a drug that should help your stomach ulcers. What they don't know is that an unpublished trial somewhere says the drug will also dangerously thin your blood. If the doctor has no idea this trial ever existed – it wasn't registered when the trial kicked off and was never published – or the negative results were masked by the paper reporting a different primary outcome to that in a registry, they will continue prescribing you a drug with risky side effects.  The example of the arthritis drug Vioxx springs to mind…

These are all quite dramatic examples but illustrate why we need to know about trials at their inception rather than once they’re published and, therefore, why full use of clinical trial registries is important.


Are researchers fudging clinical trial statistics?

Before a clinical trial can commence, a protocol – a plan of exactly how the trial will be conducted – must be formulated.  As part of the planning, the individuals undertaking the trial calculate approximately how many patients need to take part for the results to be meaningful (the 'sample size') and prespecify which statistical tests they will perform on the data once the trial is complete.
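
As a rough illustration of the sort of sample-size calculation a protocol prespecifies, here's a minimal Python sketch using the standard normal approximation for comparing two proportions; the event rates, significance level and power are illustrative assumptions, not figures from any trial discussed here.

```python
# Minimal sketch: patients needed per arm to compare two event rates,
# using the normal-approximation formula. All inputs are illustrative.
from math import ceil
from scipy.stats import norm

def sample_size_two_proportions(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate number of patients per arm for a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # critical value for the desired power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = abs(p_control - p_treatment)
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# e.g. hoping to cut an event rate from 20% to 15% with 80% power:
print(sample_size_two_proportions(0.20, 0.15))  # roughly 900 patients per arm
```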

A new study of published clinical trials, however, has found that many do not report these crucial sample-size calculations and that authors often do not mention if they have changed their mind as to which statistical test they are going to use.  About half of the trials studied by Chan et al. did not include sample-size calculations or mention whether the statistical tests actually used on the data differed from those provided in the trial protocol.

It is important that people conducting clinical trials stick to the statistical methods outlined in their protocol, as different types of statistical test can produce different outcomes for the same set of raw data.  If trial authors plan to use a particular test then change their mind and use a different test once they have seen the data, the results can be inadvertently biased – or directly manipulated – so they appear much more positive.
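
Here's a toy Python sketch of that point, using simulated data rather than any real trial: a t-test and a rank-based test applied to the same skewed outcomes return different p-values, and with borderline data that difference can be enough to push a result across the conventional p < 0.05 threshold.

```python
# Simulated, skewed outcome data for two groups of 40 patients each.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
control = rng.lognormal(mean=0.0, sigma=1.0, size=40)
treated = rng.lognormal(mean=0.4, sigma=1.0, size=40)

# Two reasonable-looking tests, two different p-values for the same data.
print("t-test p-value:      ", ttest_ind(treated, control).pvalue)
print("Mann-Whitney p-value:", mannwhitneyu(treated, control).pvalue)
```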

In the recent BMJ study, Chan et al. compared the published papers of 70 Danish randomized clinical trials with the corresponding protocols, which had been submitted to the local ethics committees for approval before the trials commenced.

Only 11 trials fully and consistently described sample-size calculations in both the protocol and the published paper. There were unacknowledged discrepancies between the calculations in the protocol and those in the published paper in 53% of cases.

Most protocols and publications specified which statistical tests would be used on the trial data; however, in 60-100% of cases the tests listed in the published paper differed from those in the protocol.

So it seems that in many cases sample size calculations and statistical methods are not prespecified in trial protocols or are poorly reported.  If they are prespecified, authors don’t tend to acknowledge instances when the statistical methods used differ from those in the protocol.  These two practices can easily introduce bias into the analysis of clinical trials and, ultimately, lead to misinterpretation of study results.

All this is bad news for everyone – if trial results aren't reported honestly and transparently then it will be impossible to tell which trials, and therefore treatments, will genuinely help patients.  Hopefully initiatives such as SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials), launched by Chan et al., and CONSORT (Consolidated Standards of Reporting Trials) will improve the accuracy of clinical trial reporting, but always remember: "There are three kinds of lies: lies, damned lies, and statistics".

————————————————————————————————-
Chan AW et al. (2008) Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols. BMJ 337: a2299. DOI: 10.1136/bmj.a2299


Irresponsible reporting of clinical trials by the news media

It is important for journalists to highlight any potential bias in medical research so that patients and physicians alike can judge how valid clinical trial findings are. Today the Journal of the American Medical Association published a study showing that almost half of news stories on clinical trials fail to report the funding source of the trial. In addition, two-thirds of news articles refer to study medications by their brand names instead of by their generic names.

The authors Hochman et al. reviewed papers published between 1st April 2004 and 30th April 2008 in the top five medical journals (New England Journal of Medicine, JAMA, the Lancet, Annals of Internal Medicine and Archives of Internal Medicine) to find pharmaceutical-company-funded studies that evaluated the efficacy or safety of medications. They then searched 45 major US newspapers (for example New York Times and USA Today) and 7 US-based primary news websites (including ABC News, CNN and MSNBC) for news stories that reported these clinical trials.

A total of 358 company-funded clinical trials were identified, and 117 of these yielded 306 distinct news stories. Of the 306 news stories, 42% did not report the funding source of the clinical study. A total of 277 of these news articles were about medications that had both brand names and generic names, but 67% of stories used brand names in at least half of the references to the medication and 38% used only brand names.

By using a brand name in news articles instead of a generic name, journalists are inadvertently favouring one pharmaceutical company over another. Cholesterol lowering statins are a good example: Pfizer markets atorvastatin (the generic name) as Lipitor, while Merck sells the related statin simvastatin as Zocor, and once a drug comes off patent several manufacturers may sell the same compound under different brand names. Drugs are often referred to by their brand name because these titles tend to be better known – you've probably heard of Lipitor but perhaps not atorvastatin; fair enough, maybe, but this practice still represents biased reporting.

Hochman et al. also surveyed 94 newspaper editors to find out whether these individuals thought that their publication accurately reported clinical trials. Interestingly, 88% of editors stated that their newspaper often or always reported company funding in articles about medical research, and 77% said that their publication often or always referred to medications by their generic names.

It seems that news outlets think they are reporting funding sources in medical articles when actually they’re not. Academic journals have strict policies for disclosing funding and potential conflicts of interest, so why don’t newspapers follow suit?

———————————————————————————————————————
Hochman M, Hochman S, Bor D, & McCormick D (2008) News Media Coverage of Medication Research: Reporting Pharmaceutical Company Funding and Use of Generic Medication Names. JAMA: The Journal of the American Medical Association 300 (13): 1544-1550. DOI: 10.1001/jama.300.13.1544


Randomized control freakery

Nature Clinical Practice Cardiovascular Medicine has recently published an interesting review article on clinical trial design – ‘From randomized trials to registry studies: translating data into clinical information‘.

This isn't a guide on how to read a clinical paper – have a look at Prof Trisha Greenhalgh's book 'How to read a paper' or the extracts published in the BMJ way back in 1997 if you need tips on that front. Rather, the NCP Cardiovascular Medicine review examines different study designs and, interestingly, puts forward a case for observational trials as compared with randomized controlled trials.

Randomization and control groups are the benchmarks of a good clinical study. Randomization allocates patients to treatment or control (or placebo) in an entirely indiscriminate manner, distributing both known and unknown confounders evenly between the groups; the control or placebo group comprises patients who do not receive the intervention. Together, these features allow an investigator to isolate the effect of a treatment from various confounding factors.
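
A toy simulation makes the point about confounders concrete; the ages, group sizes and random seed below are invented purely for illustration.

```python
# Simulate 1:1 random allocation of 200 patients and check that a baseline
# confounder (age) ends up balanced between the arms. All values are made up.
import numpy as np

rng = np.random.default_rng(42)
ages = rng.normal(60, 10, size=200)                          # simulated baseline ages
assignment = rng.permutation([True] * 100 + [False] * 100)   # random 1:1 split

print("mean age, treatment arm:", round(ages[assignment].mean(), 1))
print("mean age, control arm:  ", round(ages[~assignment].mean(), 1))
# With 100 patients per arm the two means typically differ by only a year or so.
```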

However, the NCP review argues:

“The results of observational studies are often dismissed in favor of prospective randomized studies because of the widely recognized biases inherent in observational studies. Yet such studies form the basis of much of the medical knowledge we have today. Accordingly, rather than dismiss information gained from observational studies, it is more appropriate to recognize these biases and their effect on results, and to modify interpretation appropriately. Indeed, from a practical standpoint, all studies sustain some form of bias, either implicitly or explicitly.”

In addition, the authors state:

“Strict inclusion and exclusion criteria mean that the results of randomized studies might not be as applicable to general populations as are findings from observational studies, including both clinical registries and retrospective reviews”

The take-home message of the article is that practicing clinicians should analyze the patient population of a trial carefully before applying its findings to patients of their own.

This paper also discusses statistical power and the use of surrogate and composite end points, the validity (or not) of post hoc analysis, and the utility of peer review for spotting trial-design pitfalls. But obviously I’m more interested in the iconoclastic view of randomized controlled trials…
