Are researchers fudging clinical trial statistics?

Before a clinical trial can commence, a protocol – a plan of exactly how the trial will be conducted – must be formulated.  As part of this planning, the individuals undertaking the trial calculate approximately how many patients need to take part for the results to be meaningful (the ‘sample size’) and prespecify which statistical tests they will perform on the data once the trial is complete.
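
To give a flavour of what such a calculation looks like, here is a minimal Python sketch of the standard normal-approximation formula for comparing two group means. Every number in it is hypothetical, chosen purely for illustration, and it assumes scipy is available:

```python
# A minimal sketch of a pre-trial sample-size calculation for comparing
# two group means, using the standard normal-approximation formula.
# All numbers below (effect size, standard deviation, alpha, power) are
# hypothetical, chosen purely for illustration.
import math
from scipy.stats import norm

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Patients needed per group to detect a true difference `delta`
    between group means, given a common standard deviation `sigma`."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)          # power = 1 - beta
    n = 2 * ((z_alpha + z_beta) ** 2) * sigma ** 2 / delta ** 2
    return math.ceil(n)

# e.g. to detect a 5 mmHg drop in blood pressure (sd = 12 mmHg)
# with 80% power at the conventional 5% significance level:
print(sample_size_per_group(delta=5, sigma=12))  # roughly 91 per group
```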

A new study of published clinical trials, however, has found that many do not report these crucial sample-size calculations and that authors often fail to mention when they have changed their minds about which statistical test to use.  About half of the trials studied by Chan et al. did not include sample-size calculations or mention whether the statistical tests actually used on the data differed from those specified in the trial protocol.

It is important that people conducting clinical trials stick to the statistical methods outlined in their protocol, as different types of statistical test can produce different outcomes for the same set of raw data.  If trial authors plan to use a particular test but then switch to a different one once they have seen the data, the results can be inadvertently biased – or directly manipulated – so that they appear much more positive than they really are.
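
To see how easily this can happen, here is a toy Python example – with entirely made-up data, and again assuming scipy is available – in which two perfectly defensible tests give very different answers on the same dataset:

```python
# A small demonstration, with entirely made-up data, of how two
# reasonable tests can disagree on the same raw numbers.
from scipy.stats import ttest_ind, mannwhitneyu

treatment = [4.1, 5.2, 3.8, 6.0, 4.9, 5.5, 4.4, 21.0]  # note the outlier
control   = [3.9, 4.0, 3.5, 4.2, 3.7, 4.1, 3.8, 4.0]

_, p_t = ttest_ind(treatment, control)  # parametric: compares means
_, p_u = mannwhitneyu(treatment, control, alternative="two-sided")  # rank-based

print(f"t-test p = {p_t:.3f}; Mann-Whitney p = {p_u:.3f}")
# The outlier inflates the t-test's variance estimate, so it comes out
# non-significant here, while the rank-based test finds a clear
# difference – exactly the kind of gap that makes an after-the-fact
# 'choice' of test so tempting.
```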

In the recent BMJ study, Chan et al. compared the published papers of 70 Danish randomized clinical trials with the corresponding protocols, which had been submitted to the local ethics committees for approval before the trials commenced.

Only 11 trials fully and consistently described sample-size calculations in both the protocol and the published paper. There were unacknowledged discrepancies between the calculations in the protocol and those in the published paper in 53% of cases.

Most protocols and publications specified which statistical tests would be used on the trial data; however, the tests listed in the published paper differed from those in the protocol in 60–100% of cases.

So it seems that in many cases sample-size calculations and statistical methods are not prespecified in trial protocols, or are poorly reported.  Even when they are prespecified, authors tend not to acknowledge instances in which the statistical methods used differ from those in the protocol.  Both practices can easily introduce bias into the analysis of clinical trials and, ultimately, lead to misinterpretation of study results.

All this is bad news for everyone – if trial results aren’t reported honestly and transparently then it will be impossible to tell which trials, and therefore treatments, will genuinely help patients.  Hopefully initiatives such as SPIRIT (Standard Protocol Items for Randomised Trials), launched by Chan et al., and CONSORT (Consolidated Standards of Reporting Trials) will improve the accuracy of clinical trial reporting, but always remember: “There are three kinds of lies: lies, damned lies, and statistics”.

————————————————————————————————
Chan AW et al. (2008) Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols. BMJ 337 (4 Dec 2008). DOI: 10.1136/bmj.a2299


The sexual health of Great Britain

This week the Office for National Statistics released the results of their 2007/08 contraception and sexual health survey, which was undertaken as part of the National Statistics Omnibus Survey.

Over four months, 1,164 women aged 16-49 and 1,543 men aged 16-69 completed a questionnaire on contraception use, sexual health, and knowledge of sexually transmitted infections (STIs). The survey found that the majority of Brits are monogamous. Men still claim to have had more sexual partners than women but at least are mostly using condoms while they’re playing the field. Women, on the other hand, prefer the pill to any other form of contraception. We’re not too hot on emergency contraception but know our STIs better than we used to, gleaning most of our info from the TV.

As many as 75% of men and 78% of women reported having had only one sexual partner in the previous year. Within every age group, a higher proportion of men than women reported multiple sexual partners and, correspondingly, more women than men reported having had just one.

The pill was the most popular form of contraception, used by 28% of women employing such measures, and the condom was the second most popular method (24%). In total, 43% of men and 50% of women had used a condom in the past year, with those who had had more than one sexual partner more likely to have used a condom than those who had only had one partner. More specifically, 80% of men and 82% of women who had multiple partners had used a condom in the past year.

Almost all women (91%) had heard of the morning after pill, but awareness of the emergency intrauterine device (IUD) had dropped from 49% in 2000/01 to 37% in 2007/08. Less than half (49%) of the women who had heard of emergency contraception knew that the morning after pill is effective up to 72 hours after intercourse, while less than 10% were aware that the emergency IUD was effective if inserted up to five days after sex. Only 6% thought that the morning after pill protected against pregnancy until the next period and less than 1% believed that it protected against sexually transmitted infections.

Most respondents correctly identified chlamydia as an STI (85% of men and 93% of women), far more than in 2000/01 (35% and 65%, respectively), and nearly all knew that gonorrhoea is an STI (92% of men and 91% of women).

Alarmingly, half of all respondents reported making no changes to their behaviour as a result of what they had heard about HIV/AIDS and other STIs, but thankfully more than a third of men and women said they had increased their use of condoms.

Most respondents got their information on STIs from television programmes (31%), followed by TV adverts (22%), and newspapers, magazines or books (20%). On the other hand, the internet was rarely used as a source of information about STIs, even by young people (3% of those aged 16-24).


Randomized control freakery

Nature Clinical Practice Cardiovascular Medicine has recently published an interesting review article on clinical trial design – ‘From randomized trials to registry studies: translating data into clinical information’.

This isn’t a guide on how to read a clinical paper – have a look at Prof Trisha Greenhalgh’s book ‘How to read a paper’ or the extracts published in the BMJ way back in 1997 if you need tips on that front. Rather, the NCP Cardiovascular Medicine review examines different study designs and, interestingly, puts forward a case for observational studies as compared with randomized controlled trials.

Randomization and control (or placebo) groups are the benchmarks of a good clinical study. Randomization allocates patients to treatment or no treatment (or placebo) in an entirely indiscriminate manner, distributing both known and unknown confounders between the groups; control groups comprise patients who do not receive the intervention. Together, they allow an investigator to isolate the effect of a treatment from various confounding factors.
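
As an aside, a toy Python simulation makes the point concrete. This is not from the review; all names and numbers are invented for illustration:

```python
# A toy simulation (not from the review) of why randomization works:
# with simple random allocation, a measured confounder such as age ends
# up similarly distributed in both arms on average, with no deliberate
# matching. All names and numbers here are invented for illustration.
import random
import statistics

random.seed(42)
patients = [{"id": i, "age": random.gauss(60, 10)} for i in range(200)]

random.shuffle(patients)  # entirely indiscriminate allocation
treatment, control = patients[:100], patients[100:]

for arm, group in [("treatment", treatment), ("control", control)]:
    mean_age = statistics.mean(p["age"] for p in group)
    print(f"{arm}: n = {len(group)}, mean age = {mean_age:.1f}")
# The two arms show similar mean ages – and the same logic applies to
# confounders nobody has measured, which is randomization's real power.
```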

However, the NCP review argues:

“The results of observational studies are often dismissed in favor of prospective randomized studies because of the widely recognized biases inherent in observational studies. Yet such studies form the basis of much of the medical knowledge we have today. Accordingly, rather than dismiss information gained from observational studies, it is more appropriate to recognize these biases and their effect on results, and to modify interpretation appropriately. Indeed, from a practical standpoint, all studies sustain some form of bias, either implicitly or explicitly.”

In addition, the authors state:

“Strict inclusion and exclusion criteria mean that the results of randomized studies might not be as applicable to general populations as are findings from observational studies, including both clinical registries and retrospective reviews”

The take-home message of the article is that practicing clinicians should analyze the patient population of a trial carefully before applying its findings to their own patients.

This paper also discusses statistical power and the use of surrogate and composite end points, the validity (or not) of post hoc analysis, and the utility of peer review for spotting trial-design pitfalls. But obviously I’m more interested in the iconoclastic view of randomized controlled trials…
