Competing interests at medical journals: industry sponsored trials boost impact factors

These days medical journals are rigorous when it comes to getting researchers to declare any associations with industry that might influence how a trial is reported. Before agreeing to publish a paper, many of the top medical journals require authors to sign a comprehensive conflicts of interest form that outlines any financial or personal relationships that could be perceived as inappropriately influencing the authors’ actions with regard to the research.

But what about the journals themselves? How do they fare when subjected to the same rigorous transparency they impose on their authors? Not too well, according to a study published recently in PLoS Medicine. Industry supported trials published in six major medical journals were cited more often than trials not sponsored by industry, and had a considerable bolstering effect on the journals’ impact factors – a measure of the “importance” of a journal.

The authors of this study looked at 1,353 randomised controlled trials published in six general medical journals in 1996-7 and 2005-6: Annals of Internal Medicine, Archives of Internal Medicine, BMJ, Journal of the American Medical Association, Lancet, and New England Journal of Medicine. When they categorised these trials they found that a third (32%) of trials published in NEJM in 1996-7 were industry sponsored, compared with roughly one in 10 (13%) published in BMJ. By 2005-6 these proportions had dropped for all journals except NEJM, which held steady at 32%, while BMJ’s fell to 7%.

They then looked at citations for these trials, in 1998 for the 651 trials published in 1996-7 and in 2007 for the 702 from 2005-6. For all journals, there was a significant association between the degree of industry support among the studies published in 2005-6 and the number of citations. This was most pronounced for Annals, Archives, and Lancet, where industry sponsored trials were cited more than twice as often as trials not supported by industry.
Citations are important because they indicate how much a paper has affected a field – the more it has been cited by other authors, the more it has influenced research in the field. Having a high number of citations thus reflects well on the journal that published the study, suggesting that it is featuring the most important research.

The authors then calculated approximate impact factors for the journals by dividing the number of citations by the number of trials published in order to figure out how often the “average” article would be cited. When they removed industry sponsored trials from these equations, the impact factors dropped notably, in particular for NEJM (13-15%) and Lancet (6-11%).
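The arithmetic behind this approximation is simple enough to sketch in a few lines. The figures below are purely illustrative, not the study’s actual counts – the point is just to show how removing a heavily cited subset of trials pulls the “average citations per article” down:

```python
def approx_impact_factor(citations, n_trials):
    """Approximate impact factor: total citations divided by trials published."""
    return citations / n_trials

# Hypothetical illustration: a journal publishes 100 trials, 30 of them
# industry sponsored, and the industry trials attract twice as many
# citations apiece (40 vs 20) as the rest.
with_industry = approx_impact_factor(30 * 40 + 70 * 20, 100)   # 26.0
without_industry = approx_impact_factor(70 * 20, 70)           # 20.0
drop = (with_industry - without_industry) / with_industry
print(f"{drop:.0%}")  # 23%
```

With these made-up numbers the impact factor falls by about 23% when industry trials are excluded; the study found real drops of up to 15% for NEJM.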

Lastly, editors in chief were contacted and asked to provide information on what proportion of the journal’s total income was from sales of advertisements, reprints, and industry supported supplements. Only BMJ and Lancet provided this data.

Just over two fifths (41%) of Lancet‘s income was from article reprints, which are often bought up by drug companies that sponsored a particular trial to promote the efficacy of their product. A further 1% of the journal’s total income was from display adverts. On the other hand, only 3% of the BMJ‘s income was from reprints and 16% was from display adverts. Neither made much money from industry supported supplements.

Then the authors got cunning. To get financial data for the three American journals – Annals, Archives, and NEJM – they dug up publicly available tax information for the journals’ parent companies. The Massachusetts Medical Society, owner of NEJM, earned 23% of its income from adverts, whereas no data was available on reprints and supplements.

For the American Medical Association, publisher of JAMA, 53% of its income was from adverts and 12% from reprints (no data on supplements). Richard Smith, former editor of the BMJ, reported that reprint sales from a single trial may lead to an income of US$1 million for a journal, so the 12% figure for JAMA is quite noteworthy. The Internal Revenue Service data did not specify the income for the American College of Physicians, publisher of Annals.

This post was chosen as an Editor's Selection for ResearchBlogging.org
So what do all these findings mean? It seems that publishing industry sponsored trials boosts the reputation of journals, some more than others. Of course, it is important that industry data does get published rather than sit in a drawer somewhere, and it seems a bit misplaced to criticise journals for benefiting from this process. Instead, perhaps journals should make clear to what degree they benefit from publishing industry supported trials – both financially and in status.

———————————————————————————————-

Lundh A et al. (2010) Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study. PLoS Medicine 7 (10). DOI: 10.1371/journal.pmed.1000354.


Half the top US academic medical centers have no policy on ghostwriting

Half of the top 50 academic medical centres in the United States have no policies on their staff ghostwriting research on behalf of pharmaceutical companies – including UCLA and Mayo Medical School.

Medical ghostwriting is “the practice of pharmaceutical companies secretly authoring journal articles published under the byline of academic researchers.” By getting academics at top universities to put their names to papers, often for financial reward, pharmaceutical companies aim to improve the authority of their research or even sneak dodgy methodology or fabricated findings past journal editors and readers.

Only 10 (20%) of the top 50 US academic medical centres explicitly ban their staff from ghostwriting, according to the survey published in PLoS Medicine, although three of these institutions don’t specifically use the term “ghostwriting” in their policies.

A further three (6%) have authorship policies that prohibit medical ghostwriting in practice by insisting both that staff make a substantive contribution to the paper to qualify for authorship and that all who qualify for authorship be listed.

Although all the top 10 academic medical centres in the US have authorship policies, only six ban ghostwriting and the remaining four – including Duke University and Yale – don’t have policies in place.

Ghostwritten articles can be used by pharmaceutical companies to influence the prescribing – and the sales – of their top products. The authors of the study explain this practice by describing how a pharmaceutical sales representative might use such an article to influence the prescribing of a practicing clinician. “When a pharmaceutical salesperson hands a clinician an article reprint, the name of the institution on the front page of the reprint serves as a stamp of approval,” they write. “The article is not viewed as an advertisement, but as scientific research; the reprint is an effective marketing tool because peer-reviewed journal articles generated in academia are perceived to be the result of unbiased scientific inquiry.”

For example, pharmaceutical companies have used ghostwritten articles to promote sertraline – the most prescribed antidepressant in the US in 2007 – methylphenidate – also known as Ritalin, the widely used, and abused, ADHD drug – and rofecoxib – otherwise known as Vioxx, the arthritis drug withdrawn in 2004 because it caused heart attacks.

Given how ghostwritten articles can be used to influence drug approval or prescribing, the authors describe the practice as “a serious threat to public health.” To try to combat ghostwriting, they recommend that participating in medical ghostwriting be defined as academic misconduct akin to plagiarism or falsifying data.

————————————————————————————————-
Lacasse J & Leo J (2010) Ghostwriting at Elite Academic Medical Centers in the United States. PLoS Medicine 7 (2). DOI: 10.1371/journal.pmed.1000230.


So what happened to the AIDS vaccine?

You might remember all the hullabaloo last month about the HIV vaccine developed by the US military and tested on 16,000 people in Thailand. Hailed as an “HIV breakthrough” and a “historic milestone“, the initial press release of the study certainly had the media convinced that a prevention for AIDS was just around the corner.

Now the research has been presented in full at the AIDS Vaccine 2009 Conference in Paris and in the New England Journal of Medicine, and reactions are far more circumspect.

Granted, the vaccine in question is the first ever to provide any kind of protection against HIV, but it reduced the rate of HIV-1 infection by only 31.2% relative to placebo. 74 of the 8198 volunteers who received the placebo vaccine became infected with HIV-1, but 51 of the 8197 people who were given the vaccine still managed to get infected – a difference of only 23 people.
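For readers who want to check the numbers: vaccine efficacy is the relative reduction in the infection rate, not the proportion of recipients protected. A quick back-of-the-envelope sketch using the trial’s raw counts gives roughly 31% – slightly off the published 31.2%, which came from the trial’s formal statistical analysis rather than this crude ratio:

```python
# Infection counts reported for the Thai trial.
placebo_infected, placebo_n = 74, 8198
vaccine_infected, vaccine_n = 51, 8197

placebo_rate = placebo_infected / placebo_n   # ~0.90% became infected
vaccine_rate = vaccine_infected / vaccine_n   # ~0.62% became infected

# Efficacy = relative reduction in infection rate versus placebo,
# NOT the fraction of vaccine recipients who were protected.
efficacy = 1 - vaccine_rate / placebo_rate
print(f"{efficacy:.1%}")  # 31.1%
```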

I’m not really sure what happened with this story.  Did it get press released before publication and before anyone had a good look at all the data?  To be fair the initial news stories were pretty good in their reporting of the research, but why is the story doing the rounds again?

New Scientist is on the ball with this.  In September they published an article “What to make of the HIV vaccine ‘triumph’“, in which they point out that “the victory was won by the slenderest of numerical margins.”

In addition, New Scientist provides some sort of answer to my previous question.  Says the article: “The result was disclosed at the earliest available opportunity at the request of the Thai collaborators, says Merlin Robb, deputy director for clinical research at the MHRP.  “The Thai Ministry of Public Health was very anxious to let the volunteers in Thailand know the result as soon as possible, instead of waiting for a scientific conference,” says Robb. “This reflects our commitment to the volunteers and transparency in all aspects of this trial,” he said.”

So what’s with this jumping of the gun and presenting research before it’s been published in a peer-reviewed journal? Perhaps because researchers live in a “publish or perish” environment and are in constant fear of being “pipped to the post”.

The BMJ says “We do not want material that is published in the BMJ appearing beforehand, in detail, in the mass media” and “The BMJ does not want to publish material that has already appeared in full elsewhere“. And the New England Journal of Medicine cites their “Ingelfinger rule“, which “requires that author-researchers not release the details of their findings to the mass media before their work undergoes peer review and is published.”

I don’t think this research would have subsequently been published in the NEJM if the authors had in fact broken the embargo, so there must have been some intense behind-the-scenes bargaining to get the paper released early – but only a month early.

I’m not really sure what point I’m trying to make here, but I think it’s certainly interesting that this paper made a big splash a month before the full data were published and then did the rounds again.


Do clinical trial registries reduce selective reporting of medical research?

Not really, say two studies recently published in JAMA and PLoS Medicine that scrutinized trials in various clinical trial registries.

The idea of trial registries is that researchers provide all the details of their study – such as the number of patients they need to recruit and the primary outcome (e.g. death or heart attack) – before they start the study. Then once the study has been completed and published, other people can refer to the record in the registry to see whether the pre-specified protocol has been followed and if the researchers have really done what they said they were going to do and how they said they were going to do it.

The JAMA study compared published articles with their entries in clinical trial databases and found that less than half of the 323 published articles in their sample were adequately registered and nearly a third weren’t registered at all.  Shockingly, 31% of adequately registered trials published different outcomes from the outcomes registered on the clinical trial databases – that is, the authors “changed their mind” about what they were researching at some point in the process.

The authors of the PLoS paper did their research the other way around and looked at whether trials in ClinicalTrials.gov, the registry set up by the US National Institutes of Health and the Food and Drug Administration, went on to be published.  Less than half of the 677 trials they studied were eventually published.  Trials sponsored by industry and, interestingly, trials funded by the government were less likely to be published than those funded by nonindustry or nongovernmental sources.

Why is this important? Imagine a group of researchers are looking into whether a new drug reduces the incidence of heart attack (the primary outcome). They spend thousands of pounds recruiting patients and studying them for years and years, then find out that their drug doesn’t prevent heart attack at all but is very helpful in patients with migraine. The researchers could then decide to change the primary outcome of their study from “incidence of heart attack” to “incidence of migraine”, even though the trial had been intricately designed to look at the former not the latter. Their statistics and results will be all out of whack and frankly unreliable, but they could still go ahead and market their blockbuster drug. Researchers could get away with this if they don’t put their trial details in a registry before they start.
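Spotting this kind of outcome switching is, at heart, a simple lookup and compare between registry and publication. A minimal sketch with made-up trial IDs and outcomes (real registries such as ClinicalTrials.gov hold far richer records, but the logic is the same):

```python
# Primary outcomes as pre-specified in a registry at trial inception,
# versus the primary outcomes reported in the eventual publications.
# All IDs and outcomes here are hypothetical.
registered = {"TRIAL-001": "incidence of heart attack",
              "TRIAL-002": "all-cause mortality"}
published = {"TRIAL-001": "incidence of migraine",
             "TRIAL-002": "all-cause mortality"}

def switched_outcomes(registered, published):
    """Return IDs of trials whose published primary outcome differs
    from the outcome pre-specified in the registry."""
    return [tid for tid, outcome in registered.items()
            if published.get(tid) != outcome]

print(switched_outcomes(registered, published))  # ['TRIAL-001']
```

This is essentially the check the JAMA authors performed by hand, and it only works if the registry entry exists before the trial starts.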

Imagine that the wonder cardiovascular drug our shady researchers are investigating has no effect whatsoever on heart attack, so the researchers just decide not to publish their results. Other researchers looking at very similar drugs could plod on for ages with their own experiments to no avail because they had no idea that someone else had already shown the drug was useless. A more disconcerting possibility is that a drugs company could find out their key money spinner has a nasty side effect but decide to bury the research showing this and never publish it.  This is known as publication bias. Researchers, funding bodies and editors are all more interested in publishing studies that find something interesting rather than ones that show nothing at all – where’s the headline in “candidate drug doesn’t do much”? When it comes to drugs that are already on the market though, knowing the situations in which the drug doesn’t work is just as important as knowing when it does work.

And from a patient perspective? Imagine your doctor has been prescribing you a drug that should help your stomach ulcers. What he or she doesn’t know is that an unpublished trial somewhere says the drug will also dangerously thin your blood. If the doctor has no idea this trial ever existed – it wasn’t registered when the trial kicked off and was never published – or the negative results were masked by the paper reporting a different primary outcome to that in a registry, he or she will continue prescribing you a drug with risky side effects. The example of arthritis drug Vioxx springs to mind…

These are all quite dramatic examples but illustrate why we need to know about trials at their inception rather than once they’re published and, therefore, why full use of clinical trial registries is important.


Nature Clinical Practice journals relaunch as Nature Reviews

[Image: Out with the old (L), in with the new (R)]

The eight specialty-based Nature Clinical Practice journals, which form part of the medical publishing arm of Nature Publishing Group, are to relaunch in April 2009 as Nature Reviews.

As well as joining the hugely successful Nature Reviews portfolio, which will increase in size from seven to fifteen titles in one fell swoop, the Nature Clinical Practice journals will be given a ‘facelift’ in print and online.  The previously rather austere journals are soon going to be printed in full colour, and the layout of the content has been shaken up to make the journals more eye catching and easier to navigate.

Nature Clinical Practice is a series of monthly review journals targeted at practising clinicians. The peer-reviewed publications aim to deliver timely interpretations of key research developments, thus facilitating the translation of the latest findings into clinical practice. Doctors often don’t have time to keep abreast of developments in their field, so Nature Clinical Practice does the reading for the clinician, filtering original research published in primary journals and highlighting the most important and relevant findings.


Launched in 2004 and 2005, the Nature Clinical Practice journals cover the fields of cardiology, clinical oncology, endocrinology, gastroenterology & hepatology, nephrology, neurology, rheumatology and urology. Articles are written by experts in the field and include editorial and opinion pieces, short highlights of research from the current literature, commentaries on the application of recent research to practical patient care, comprehensive reviews, and in-depth case studies.

“Including the eight clinical journals as part of the Nature Reviews series will enable us to bring to the clinical sciences the qualities that have made the life science Nature Reviews journals so successful,” says Dr James Butcher, Publisher of the clinical Nature Reviews titles. “No other publishing company is able to offer high quality monthly review journals covering advances in medical research from bench to bedside.”

“The clinical Nature Reviews journals will retain the high quality content that practicing clinicians rely on from the Nature Clinical Practice titles,” continues Dr Butcher. “We hope that the journals will now also become indispensable resources for clinical academics working in research settings who are familiar with the look and feel of the life science Nature Reviews titles.”

I actually used to work for Nature Clinical Practice and had the chance to check out samples of the relaunch journals before I left the company. I thought the new full-colour Nature Reviews journals looked infinitely better than their somewhat dry and serious predecessors, the vivid new layout really enhancing the high quality and rigorously edited content. I’m really looking forward to browsing the real thing once the first clinical Nature Reviews journals are published in April.


The Lancet website relaunch

Today medical journal The Lancet relaunched its website, TheLancet.com, with a sleek and efficient new design.

The team at The Lancet consulted 100 authors, readers, doctors and clinicians – or ‘development partners’ – to find out what users wanted, and the result is a much cleaner and easier to use website. In the new design, The Lancet journals The Lancet, The Lancet Infectious Diseases, The Lancet Oncology, and The Lancet Neurology are now all accessible and searchable from a single website.

In a special podcast to accompany the launch, the Editor-in-Chief Richard Horton outlines his favourite features:

“The most exciting things about The Lancet’s new site for me are first, we have the possibility for internet television … in the YouTube world that we live in I think that’s immensely important for communication, especially in health when you’ve got some pretty difficult concepts sometimes. And secondly, I think we’re also able to convey the personality of The Lancet in ways that we’ve never been able to before: the idea that we’re publishing research, educational material, and also opinion.”

The new video functionality is showcased by TheLancet.com Story, a very flashy and professional-looking production in which members of the journal staff and Dr Anne Szarewski, clinical consultant at Cancer Research UK and one of the development partners, discuss what the new website means to them. The Lancet hopes that in the future users will be able to submit their own medical videos to the site.

Richard Horton boasts that the website has “the best search engine in medicine”, and certainly it’s an awful lot faster than the search on the previous incarnation. Importantly, the search results include not only results from The Lancet family of journals, but also all relevant results in Medline, a life sciences and biomedical publication database run by the US National Library of Medicine.

Articles now include links to related material as well as social bookmarking tools, including parent company Elsevier’s 2Collab social networking tool. In addition, online community features are planned, including social networking, debates, wikis and discussion boards.

On the editorial side of things, original research articles now include drop-down Editors’ Notes within the table of contents – 2 or 3 sentences that summarize what is important about the research – while journal homepages feature three articles that represent the Editor’s choice. The news aspect of the website has been expanded with the inclusion of ‘This Week in Medicine’, short paragraph-long summaries of what has been going on in medicine worldwide. Specialty-based online collections comprising content from across The Lancet family of journals will launch in the near future, one mooted project being a cardiology portal.

I think the new version of TheLancet.com is a vast improvement on the previous website, not least because it is so much easier to navigate and doesn’t trip you up with sign-ins every 5 minutes. The site also looks far crisper, in stark contrast to its cluttered forebear. I’m more into text than multimedia so I’m not fussed about using the new video content, but it’s certainly very impressive and a new direction for The Lancet.
