
How much is enough in making clinical decisions?
Phil Wiffen
Correspondence to Professor Phil Wiffen, Pain Research Unit, Churchill Hospital, Oxford OX3 7LE, UK; phil.wiffen@ndcn.ox.ac.uk


I have just returned from teaching systematic review methods in China (one of my other jobs). Part of the training involves teaching critical appraisal skills for systematic reviews. The reviews I use are chosen to be controversial in terms of methodology, numbers and/or outcomes, all designed to get candidates thinking about what matters (for more on critical appraisal, see Chapter 5 of Evidence-based Pharmacy1). A key issue for me is that appraisal of systematic reviews, or of any evidence, is not just an academic activity: ultimately, it should affect our decisions about the care of an individual or, for many pharmacists, the care of large numbers of patients, especially if the evidence is used to make formulary choices or generate guidelines. So the question posed back to me on a number of occasions was 'How big do the numbers need to be to be reliable?' Unfortunately, there is no 'magic' number, though bigger is likely to be more reliable.

There is a literature on this subject; a team from Paris published a relevant study in 2013.2 The authors investigated 93 meta-analyses published in 10 leading medical journals, which together included 735 randomised controlled trials. There were huge ranges in sample size: one meta-analysis contained trials of 106 participants and of 48 835 participants. The results make interesting reading. For example, the authors state: 'Compared with trials of 1000 patients or more, treatment effects were on average 48% larger in trials with fewer than 50 patients, 34% larger in trials with 50–99 patients… and 10% larger in trials of 500–999 patients'. These differences are big and clinically important.
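To get a feel for what these percentages mean in practice, here is a small worked sketch with invented numbers (my own illustration, not taken from the paper), reading '48% larger' as a ratio of odds ratios of 0.52, the usual convention in meta-epidemiological work:

```python
# Hypothetical illustration: how a "48% larger" treatment effect
# shifts an odds ratio, reading the percentage as a ratio of odds
# ratios (ROR = 0.52), where an ROR below 1 means the smaller
# trials estimate a bigger effect. All numbers are invented.
large_trial_or = 0.60       # assumed OR from a trial of >= 1000 patients
ror_small_vs_large = 0.52   # "48% larger" read as ROR = 1 - 0.48
implied_small_trial_or = large_trial_or * ror_small_vs_large

print(f"OR in a large trial:               {large_trial_or:.2f}")
print(f"Implied OR in a <50-patient trial: {implied_small_trial_or:.2f}")  # ~0.31
```

On these assumed figures, a modest benefit in a large trial would appear as a dramatic one in the smallest trials, which is why such inflation matters clinically.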

In a subsequent study, the same authors took a similar dataset.3 For each systematic review, they compared the treatment effect estimated by the largest included trial with the pooled result of the meta-analysis as a whole. They found that treatment effects were often substantially larger in the meta-analysis of all trials than in the largest trial included in the same meta-analysis.
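To show concretely what such a comparison involves, here is a minimal sketch with invented trial data, using standard inverse-variance fixed-effect pooling of log odds ratios (a common textbook approach, not necessarily the authors' exact method). It pools three small trials with one large trial, then sets the pooled result beside the largest trial alone:

```python
import math

# Each trial: (events_treatment, n_treatment, events_control, n_control).
# Values are invented for illustration: three small trials plus one
# large trial, as in the scenario described above.
trials = [
    (8, 25, 14, 25),         # small trial
    (10, 40, 17, 40),        # small trial
    (6, 30, 12, 30),         # small trial
    (210, 1000, 245, 1000),  # largest trial
]

def log_or_and_variance(a, n1, c, n2):
    """Log odds ratio and its variance from a 2x2 table."""
    b, d = n1 - a, n2 - c
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return log_or, var

# Inverse-variance fixed-effect pooling on the log OR scale.
total_weight, weighted_sum = 0.0, 0.0
for t in trials:
    log_or, var = log_or_and_variance(*t)
    total_weight += 1 / var
    weighted_sum += log_or / var
pooled_or = math.exp(weighted_sum / total_weight)

# The largest trial on its own (by total sample size).
largest = max(trials, key=lambda t: t[1] + t[3])
largest_or = math.exp(log_or_and_variance(*largest)[0])

print(f"Pooled OR (all trials): {pooled_or:.2f}")   # ~0.76
print(f"Largest trial OR alone: {largest_or:.2f}")  # ~0.82
```

With these invented numbers, the small trials pull the pooled odds ratio down to about 0.76, a noticeably larger apparent benefit than the 0.82 estimated by the large trial alone: exactly the kind of divergence the authors observed.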

These results raise some interesting questions for those of us involved in clinical decision making. First, we need to be careful in extrapolating the results of small trials (or even of a small systematic review) to a large cohort, such as patients covered by a guideline or whose treatment is predetermined by a formulary decision. Second, these studies question whether a systematic review of multiple small studies really provides the best estimate of the true treatment effect; some argue that large randomised controlled trials are likely to be more reliable and closer to the clinical scenario. It also means that we need to look beyond the bottom line of a systematic review and examine the larger included studies, to get at least some impression of how they differ from the overall pooled result.

There are no easy answers, and remember that these investigations concern beneficial treatment outcomes; adverse outcomes are an entirely different matter. While no magic number exists, we should be wary of studies involving fewer than 200 participants and look for studies with at least 1000 participants. Systematic reviews may help, but similar, or preferably much larger, numbers of included participants are needed for policy decisions.

References

Footnotes

  • Competing interests None.

  • Provenance and peer review Commissioned; internally peer reviewed.