Are researchers fudging clinical trial statistics?

Before a clinical trial can begin, a protocol – a plan of exactly how the trial will be conducted – must be drawn up. As part of this planning, the researchers running the trial calculate approximately how many patients need to take part for the results to be statistically meaningful (the ‘sample size’) and prespecify which statistical tests they will perform on the data once the trial is complete.
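To make this concrete, here is a minimal sketch of such a power calculation in Python using the statsmodels package. The numbers are illustrative assumptions, not figures from any real trial: a simple two-arm trial with equal group sizes, powered to detect a standardised effect size of 0.5 with 80% power at the conventional 5% significance level.

```python
# Illustrative sample-size calculation for a two-arm trial.
# Effect size, power, and alpha are assumed values for demonstration only.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(
    effect_size=0.5,  # standardised difference to detect (Cohen's d)
    power=0.8,        # 80% chance of detecting the effect if it exists
    alpha=0.05,       # 5% significance level
)
print(f"Patients needed per arm: {n_per_arm:.1f}")  # about 64 per arm
```

Change any one of these assumptions – a smaller effect size, say, or higher power – and the required sample size changes substantially, which is why the calculation needs to be fixed in the protocol rather than adjusted later.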

A new study of published clinical trials, however, has found that many do not report these crucial sample-size calculations, and that authors often fail to mention when they have changed their mind about which statistical tests to use. About half of the trials examined by Chan et al. either did not include a sample-size calculation or did not acknowledge that the statistical tests actually applied to the data differed from those specified in the trial protocol.

It is important that people conducting clinical trials stick to the statistical methods outlined in their protocol, because different statistical tests can produce different results for the same set of raw data. If trial authors plan to use a particular test, then switch to a different one after seeing the data, the results can be inadvertently biased – or deliberately manipulated – to appear far more positive than they really are.
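As an illustration of why this matters (using invented numbers, not data from the study), consider the same two groups analysed with an unpaired t-test and with a rank-based Mann-Whitney U test via Python's scipy. A single outlier is enough to push the two tests to opposite sides of the conventional p < 0.05 threshold:

```python
# Invented data for illustration only. One outlier inflates the variance,
# so the t-test and the rank-based test reach opposite conclusions.
from scipy import stats

treatment = [2.1, 2.3, 2.5, 2.8, 3.0, 15.0]  # note the outlier
control   = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]

_, p_t = stats.ttest_ind(treatment, control)        # p ~ 0.17: "no effect"
_, p_u = stats.mannwhitneyu(treatment, control,
                            alternative="two-sided")  # p ~ 0.002: "clear effect"
print(f"t-test p = {p_t:.3f}; Mann-Whitney U p = {p_u:.3f}")
```

A researcher who chose between these tests only after seeing the p-values could report whichever conclusion suited them – which is exactly why prespecifying the analysis in the protocol matters.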

In the recent BMJ study, Chan et al. compared the published papers of 70 Danish randomised clinical trials with the corresponding protocols, which had been submitted to the local ethics committees for approval before the trials began.

Only 11 of the 70 trials fully and consistently described the sample-size calculation in both the protocol and the published paper, and in 53% of cases there were unacknowledged discrepancies between the calculations reported in the two documents.

Most protocols and publications did specify which statistical tests would be used on the trial data; however, the tests listed in the published paper differed from those in the protocol in 60-100% of cases, depending on the type of analysis.

So it seems that, in many cases, sample-size calculations and statistical methods are either not prespecified in trial protocols or are poorly reported. And even when they are prespecified, authors tend not to acknowledge instances where the statistical methods actually used differ from those in the protocol. Both practices can easily introduce bias into the analysis of clinical trials and, ultimately, lead to misinterpretation of study results.

All this is bad news for everyone – if trial results aren’t reported honestly and transparently, it is impossible to tell which trials, and therefore which treatments, genuinely help patients. Hopefully, initiatives such as SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials), launched by Chan et al., and CONSORT (Consolidated Standards of Reporting Trials) will improve the accuracy of clinical trial reporting – but always remember: “There are three kinds of lies: lies, damned lies, and statistics”.

Reference: Chan AW et al. (2008). Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols. BMJ 337:a2299. doi:10.1136/bmj.a2299
