Is size everything?
Phil Wiffen

Correspondence to Professor Phil Wiffen, Pain Research Unit, Churchill Hospital, Oxford OX3 7LE, UK; phil.wiffen@ndcn.ox.ac.uk


In the course of my role as Editor in Chief of EJHP I read a wide variety of papers. One issue that stands out for me regularly, influenced by my work in evidence-based pain medicine, is study size. On one day I can read a paper reporting a study of 10 patients and, later the same day, one with over 8000 patients. Which is intuitively more likely to be reliable?

In clinical trials we are all aware of cases where one paper shows effectiveness and another shows no benefit for the same drug at the same dose for the same outcomes. This was widely discussed in a seminal paper by Moore et al1 some 15 years ago and still has huge relevance today. In practice, many of the studies we need to work with to make informed clinical decisions are simply too small. The Moore paper, for example, using computer-generated modelling of randomised controlled trials (RCTs), shows that in a trial with group sizes of 40 the number needed to treat (NNT) could vary between one and nine when the true answer was three. The authors calculate that study sizes need to be of the order of 10 times larger to provide reliable answers. One way around this is, of course, the development and widespread use of systematic reviews, which attempt to bring together all the studies on a particular intervention irrespective of language or place of publication. Size is important, but alongside size we need to measure the things that are meaningful to patients or relevant to practitioners.
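The point can be illustrated with a minimal simulation (a sketch only, not the modelling Moore et al actually used; the response rates of 50% for active treatment and about 17% for placebo are illustrative assumptions chosen so that the true NNT, 1/(0.50 - 0.17), is three):

import random

def simulate_nnt(n_per_group, p_active=0.50, p_placebo=0.1667, trials=10_000):
    """Simulate many two-arm trials and return the middle 95% of estimated NNTs.

    The assumed response rates give a true NNT of 1 / (0.50 - 0.1667) = 3,
    matching the 'true answer was three' scenario described above.
    """
    nnts = []
    for _ in range(trials):
        # Count responders in each arm by drawing n_per_group Bernoulli outcomes.
        active = sum(random.random() < p_active for _ in range(n_per_group))
        placebo = sum(random.random() < p_placebo for _ in range(n_per_group))
        diff = (active - placebo) / n_per_group  # absolute risk difference
        if diff > 0:  # the NNT is undefined when the difference is zero or negative
            nnts.append(1 / diff)
    nnts.sort()
    return nnts[int(0.025 * len(nnts))], nnts[int(0.975 * len(nnts))]

for n in (40, 400):
    low, high = simulate_nnt(n)
    print(f"group size {n:4d}: 95% of simulated NNTs fall between {low:.1f} and {high:.1f}")

With groups of 40 the simulated NNTs scatter widely around the true value of three; with groups of 400 the interval narrows sharply, which is the intuition behind the tenfold sample-size recommendation.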

What does this mean for EJHP and the constituency that we aim to reach, influence and support? In practice it is a juggling act. A small study in a specialist area may well be of value, so it has to be considered. However, it sometimes appears that studies are written up under pressures other than a sense of having achieved meaningful numbers – who knows why? Maybe the lead person needed to move on, or ran out of time, or needed another publication. That may be cynical, but we sometimes need to step back and consider the value of our publications for improving practice. We also need to take care in presenting results so that they convey a useful message: ensure that the right outcome is chosen and that statistics are presented in a common-sense way. I regularly see studies of fewer than 20 participants where percentages are tabled to several decimal places; with 17 participants, each patient represents roughly 6% of the sample, so quoting 5.88% implies a precision the data cannot support. Few of the studies published in EJHP undergo the rigours of randomisation, and so they are likely to be more prone to bias and random error.

I love the range of material that comes into EJHP, so don't stop – but please step back and have a think about the science before embarking on a study, aim for bigger numbers, and read your paper before hitting the SEND button.

Reference

  1. Moore RA, Gavaghan D, Tramèr MR, et al. Size is everything – large amounts of information are needed to overcome random effects in estimating direction and magnitude of treatment effects. Pain 1998;78:209–16.

Footnotes

  • Competing interests None.

  • Provenance and peer review Commissioned; internally peer reviewed.