
Bad trials are a scandal that need to be stopped
Philip Wiffen, Thame, UK

Correspondence to Professor Philip Wiffen, Thame, UK; pwiffen{at}oxfordsrs.org.uk


Most of us are now encouraged to look for best evidence when making decisions about the care of an individual patient. Best evidence is generally considered to be a well conducted systematic review or a randomised controlled trial (RCT). But what if the RCT we choose to use is bad (red)? Some years ago Cochrane developed a risk of bias tool that enables authors to evaluate bias within randomised trials based on an assessment of the reported methods.1 While the tool is not perfect, it does highlight bias in trials using a traffic light system. The original tool (there is an updated version) rated the risk of bias as low (green), unclear (amber) or high (red) for five parameters, based on the description of the study design and the reporting of results. Unfortunately, we rarely have a full picture of what actually happened during a trial, merely a report prepared for publication. The five parameters are:

  • Random sequence generation (checking for possible selection bias)

  • Allocation concealment (checking for possible selection bias)

  • Blinding of outcome assessment (checking for possible detection bias)

  • Incomplete outcome data (checking for possible attrition bias due to the amount, nature and handling of incomplete outcome data)

  • Any other bias including size. Some groups report size bias as follows: low risk of bias (200 participants or more per treatment arm); unclear risk of bias (50–199 participants per treatment arm); high risk of bias (fewer than 50 participants per treatment arm).
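The size criterion in the final bullet amounts to a simple threshold rule. The sketch below (a hypothetical helper for illustration, not part of the Cochrane tool itself) applies the thresholds as quoted above:

```python
# Illustrative sketch only: classify size-related risk of bias for one
# treatment arm, using the thresholds quoted in the text:
#   200 or more participants per arm -> low risk (green)
#   50-199 participants per arm      -> unclear risk (amber)
#   fewer than 50 per arm            -> high risk (red)
def size_risk_of_bias(participants_per_arm: int) -> str:
    """Return the traffic-light size-bias category for a trial arm."""
    if participants_per_arm >= 200:
        return "low (green)"
    if participants_per_arm >= 50:
        return "unclear (amber)"
    return "high (red)"
```

For example, a trial with 120 participants per arm would be rated unclear (amber) on size alone, whatever its other methodological strengths.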

While the majority of studies score as a mixture of red and amber, with some green, there are studies that exhibit a high risk of bias across all five assessments: five reds, and hence a red trial.

The optimistic among us might think that, since how trials are assessed has been in the public domain for many years, trialists would work hard to ensure that their methods, and importantly their trial reports, avoid the high-risk pitfalls. Sadly that is not the case, and the scandal of bad research and the subsequent research waste continues.

This is particularly highlighted in a preprint article by Pirosca et al.2 The team used the Cochrane Library to assess how many of the studies included in published Cochrane reviews were red (high risk of bias). They randomly selected two intervention reviews from each of 49 review groups, then analysed the trials across these reviews and categorised each as high, unclear or low risk of bias. They also used these data to estimate the potential cost of poor research to society.

What did they find? The 96 reviews had 546 authors, and 1640 included trials provided risk of bias information. Of these, 1013 (62%) were high risk (red), 494 (30%) were unclear (amber) and 133 (8%) were low risk (green). There were 222 850 participants in red trials, 127 290 in amber and just 1132 in green. The authors stated that bad trials were found in all clinical areas and all countries.

They estimated the cost of bad trials was £726 million (€853 million) to £8 billion (€9.4 billion).

The authors go on to draw some conclusions as follows:

  • ‘Do not fund a trial unless the trial team contains methodological and statistical expertise

  • Do not give ethical approval for a trial unless the trial team contains methodological and statistical expertise

  • Use a risk of bias tool at trial design

  • Train and support more methodologists and statisticians

  • Put more money into applied methodology research and supporting infrastructure’.

These generally seem sensible, though the final two may be something of a wish list from statisticians.

What does this mean for those of us who want to use a systematic review or even a single RCT to inform treatment for a given patient?

First, in a review, look for a report of the risk of bias; if it is not there, be cautious about using the review. For a single trial, take 10 min (it shouldn’t take longer) to work out the risk of bias. If the trial is red, look for something else.

There is an ethical angle here too: over 220 000 participants gave consent to take part in clinical trials that were poorly conducted (red). What a waste of resources and of their altruism! Bad trials are a scandal that need to be stopped.

References

Footnotes

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.