Are we offering patients the right medicines information? A retrospective evaluation of readability and quality in online patient drug information
  1. Laia Robert Sabaté,
  2. Laura Diego
  1. Catalan Drug Information Center, Barcelona, Spain
  1. Correspondence to Dr Laia Robert Sabaté, Catalan Drug Information Center, Barcelona 08017, Spain; lrobert{at}cedimcat.info

Abstract

Objectives The aim of this study was to assess the quality and readability of patient drug information published on a corporate website and written by a multidisciplinary team including pharmacists, doctors, nurses, journalists, experts in healthcare communication, and patients. Documents were user-tested with patients in order to improve and adapt the final product.

Methods Readability was measured using the INFLESZ tool, software developed from the Flesch formula and adapted to Spanish. Quality was appraised by two independent raters using DISCERN and EQIP, as these tools have been shown to be useful and consistent for assessing information related to treatment and treatment choices.

Results Most of the documents (>67%) had an easy readability level, implying that they would be easily understood by a person with a primary school education level. The DISCERN tool showed higher reliability and concordance than EQIP. The overall DISCERN mean score for the documents was 55.4% for rater 1 and 51% for rater 2, implying very good quality.

Conclusions This retrospective analysis supports the implemented workflow of the multidisciplinary team and the user-testing process and encourages continuation of this systematic development of documents. Although both EQIP and DISCERN are useful and widely used tools, according to our results we would favour DISCERN to evaluate patient drug information.

  • quality in health care
  • readability
  • patients
  • hospital pharmacy education
  • drug information


Introduction

The availability of online healthcare information gives access to a virtually unlimited number of resources. A recent Spanish survey, however, shows that physicians and pharmacists remain patients' most frequent sources of health information (88.7% and 67.1%, respectively). Nevertheless, use of the internet as a source of health information is growing, and approximately 60% of internet searches relate to healthcare queries.1 Searches cover a wide range of diseases, medical treatments, and lifestyle issues such as diet, nutrition, or exercise.

Over the past two decades, the role of the patient has shifted from a doctor-patient paternalistic relationship to a partnership with shared care, shared decision making, and shared responsibilities.2 Healthcare systems are now working towards a model where clinicians and patients work together to reach a decision about an intervention based on clinical evidence and patient preference. This process can be applied to many types of healthcare decisions, including whether to take a medication or not.3 4 Information regarding medications plays a key role in helping patients make informed choices.

The Catalan Department of Health has implemented a new healthcare social media policy in Catalonia that involves offering evidence-based, reliable, and practical information about drugs and pharmacotherapy to patients and citizens. A multidisciplinary team, involving not only pharmacists (primary care, hospital, and community pharmacists), doctors, and nurses but also journalists and other experts in healthcare communication, was assembled to implement this new strategy. Its main tasks are to identify patient information needs and topics of interest and to produce written drug information aligned with local medicine optimisation policies. Each new document is reviewed and validated by patients before publication: testing the information with its final users and making the required amendments improves the final product. A questionnaire and a short semi-structured interview were used for this user-testing, in order to assess whether people can find and understand the key points of information. Every document has been tested in two rounds of six to eight people each.

Medicines information, if well designed and written, can enhance people’s knowledge and contribute to informed decision making about their medication.5 Thus, as part of its ongoing improvement plan, the team set the objective of assessing the quality and readability of the drug information published on the corporate website (http://medicaments.gencat.cat/) since 2015.

Methods

Readability and quality were considered the key elements to appraise, and a search was conducted to select the most suitable instruments developed for this task. Various tools and computerised formulas are available to assess readability, including the Fry formula, SMOG, and Flesch–Kincaid. These instruments were mainly developed for English, but some have been adapted to Spanish.6 Their main drawback is that they are based on the average length of words and sentences and do not take into account other factors that may facilitate reading, nor do they provide information on reliability, structure, or layout. Thus, other tools are required to assess the quality and accessibility of drug information.7 A variety of validated instruments are available to assess the quality of written patient drug information (SAM, PEMAT, BALD, IDAPS, MIDAS, CIRF, ELF, DISCERN, EQIP).8–12 Certain tools, such as MIDAS, CIRF, and ELF,9 15 were specifically developed for rating medication leaflets, while other instruments, such as the JAMA benchmarks or Health on the Net, rate the quality of health website information.13 14 We decided to use DISCERN and EQIP, as these tools have shown usefulness and consistency in the quality assessment of information related to treatment and treatment choices.11 12

Sixty documents published over the past 5 years on the Health Department website were included in this study. The documents covered a variety of topics, including drug treatment options, safety issues, best practices, and general information about medicines.

INFLESZ, online software developed by Barrio Cantalejo in 2008, was used for the readability analysis.15 INFLESZ was chosen because it is a tool developed from the Flesch formula and adapted to the average Spanish reader. It is routinely used in healthcare to assess the readability of informed consent documents, medicine leaflets, and reading materials for health education purposes.15–17 INFLESZ classifies the readability of materials into five groups: “very difficult”, “difficult”, “normal”, “easy”, and “very easy”.
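As an illustration of how such an index works, the sketch below computes an INFLESZ-style score. It assumes the Flesch–Szigriszt formula and the band cut-offs commonly reported for the INFLESZ scale, and uses a deliberately rough vowel-group syllable count; the actual INFLESZ program handles Spanish syllable division more precisely.

```python
# Minimal sketch of an INFLESZ-style readability calculation.
# Assumptions: the Flesch-Szigriszt formula and the band cut-offs
# commonly reported for the INFLESZ scale; the syllable count is a
# rough vowel-group approximation rather than the tool's exact rules.
import re


def count_syllables_es(word: str) -> int:
    """Very rough Spanish syllable count: one syllable per vowel group."""
    return max(1, len(re.findall(r"[aeiouáéíóúü]+", word.lower())))


def inflesz_index(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-záéíóúüñ]+", text.lower())
    syllables = sum(count_syllables_es(w) for w in words)
    # Flesch-Szigriszt: 206.835 - 62.3*(syllables/words) - (words/sentences)
    return 206.835 - 62.3 * syllables / len(words) - len(words) / len(sentences)


def inflesz_band(score: float) -> str:
    for limit, label in [(40, "very difficult"), (55, "difficult"),
                         (65, "normal"), (80, "easy")]:
        if score < limit:
            return label
    return "very easy"


texto = "Tome el comprimido una vez al día, preferiblemente con las comidas."
puntuacion = inflesz_index(texto)
print(f"INFLESZ {puntuacion:.1f}: {inflesz_band(puntuacion)}")
```

On these assumed cut-offs, the mean index of 71.96 reported in the results below would fall in the “easy” band.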

For the quality analysis, two independent raters evaluated the material. Both raters had medical information backgrounds and were given the DISCERN and EQIP instruments as well as guidelines for their use. The first step was to reach a consensus on the criteria to apply in order to resolve ambiguities in the documents that could interfere with the assessment process.

DISCERN can be used by both patients and health professionals and is a validated instrument designed to assess the appropriateness of written health information for treatment choices. Since its development in the UK in 1999, DISCERN has been used in several studies to assess patient information on cancer, chronic illnesses, and ADHD.18–20 The instrument comprises a set of 16 questions: the first eight address document reliability, the following seven address specific items related to treatment choices, and the last question is an overall rating.11 It covers aspects such as the aim of the document, topic relevance, accuracy and currency, treatment options and their impact on quality of life, and the benefits and side effects of therapies. Each DISCERN item is scored on a five-point Likert scale ranging from “no” (1 point) to “yes” (5 points). DISCERN scores range from 0 to 80, where the highest score represents a high-quality or excellent document. The DISCERN scale classifies quality as follows: poor (≤15%), fair (16%–31%), good (32%–47%), very good (48%–63%), and excellent (64%–80%).
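As a concrete illustration of this scoring, the sketch below sums 16 Likert ratings, expresses the total as a percentage of the 80-point maximum (the way the overall scores are reported in this paper), and applies the quality bands quoted above. The percentage conversion is our reading of the paper's reporting; the original DISCERN handbook works with raw scores.

```python
# Minimal sketch of DISCERN scoring as reported in this paper:
# 16 items rated 1-5, an overall score expressed as a percentage of the
# 80-point maximum, and the quality bands quoted in the text above.
# The percentage conversion is an assumption based on how the paper
# reports its mean scores; the DISCERN handbook itself uses raw scores.

def discern_score(ratings: list[int]) -> tuple[float, str]:
    if len(ratings) != 16 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("DISCERN expects 16 ratings between 1 and 5")
    pct = 100 * sum(ratings) / 80  # percentage of the maximum score
    for limit, band in [(15, "poor"), (31, "fair"), (47, "good"), (63, "very good")]:
        if pct <= limit:
            return pct, band
    return pct, "excellent"


# Hypothetical ratings for a single document (not data from the study)
pct, band = discern_score([4, 4, 3, 5, 4, 3, 4, 4, 3, 3, 4, 3, 4, 3, 4, 4])
print(f"DISCERN: {pct:.1f}% ({band})")
```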

EQIP is another validated instrument used by health professionals and researchers to measure the quality of any written health material (both printed and online). It is a 20-item tool and, unlike DISCERN, also evaluates aspects of design and language.12 The instrument, developed for use in the paediatric setting, also takes into account the informational needs of parents and caretakers.12 It has been used for rating patient information on topics such as bariatric surgery, eczema, diabetes, and statin prescribing in primary care.21–24 The EQIP score is calculated by adding one point for each “yes” answer, 0.5 points for “partially”, and zero points for “no”. The overall score is expressed as a percentage; the higher the percentage, the better the quality. Based on the score, the EQIP authors make one of the following recommendations: >75% maintain publication, review in 2 to 3 years; 51%–75% maintain publication, review in 1 to 2 years; 26%–50% maintain publication, start editorial review immediately and replace within 6 months to 1 year; and 0%–25% unpublish immediately.
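The EQIP scoring rule can likewise be sketched as follows. The handling of “not applicable” items (excluding them from the denominator) is an assumption suggested by the discussion of EQIP's N/A option later in this paper.

```python
# Minimal sketch of EQIP scoring as described above: 20 items answered
# "yes" (1 point), "partially" (0.5) or "no" (0), an overall score
# expressed as a percentage, and the publication recommendations quoted
# in the text. Assumption: "not applicable" items are excluded from the
# denominator, in line with the N/A option discussed later in the paper.

POINTS = {"yes": 1.0, "partially": 0.5, "no": 0.0}


def eqip_score(answers: list[str]) -> tuple[float, str]:
    scored = [POINTS[a] for a in answers if a != "n/a"]
    pct = 100 * sum(scored) / len(scored)
    if pct > 75:
        advice = "maintain publication, review in 2 to 3 years"
    elif pct > 50:
        advice = "maintain publication, review in 1 to 2 years"
    elif pct > 25:
        advice = "start editorial review now, replace within 6 months to 1 year"
    else:
        advice = "unpublish immediately"
    return pct, advice


# Hypothetical answers for a single 20-item assessment (not study data)
answers = ["yes"] * 8 + ["partially"] * 6 + ["no"] * 4 + ["n/a"] * 2
pct, advice = eqip_score(answers)
print(f"EQIP: {pct:.0f}% ({advice})")
```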

Statistical analysis

Data analysis was performed using SAS software v9.4 (SAS Institute Inc., Cary, NC, USA). A P value <0.05 was considered statistically significant.

Agreement between the results of the two raters was measured using Cohen’s kappa and the intraclass correlation coefficient (ICC). Reliability was assessed using Cronbach’s alpha coefficient. The correlation between the DISCERN and EQIP scores for each rater was calculated using Pearson’s correlation coefficient.
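The sketch below indicates how these statistics could be reproduced on two raters' scoring matrices. cohen_kappa_score and pearsonr are existing scikit-learn and SciPy calls; Cronbach's alpha and the ICC are computed directly from their standard formulas, and the two-way, absolute-agreement, single-rater ICC shown is an assumed choice, since the paper does not name the exact ICC variant used with SAS.

```python
# Sketch of the agreement and reliability statistics, run here on
# hypothetical rating matrices rather than the study data. The ICC model
# shown (two-way ANOVA, absolute agreement, single rater) is an assumed
# choice; the paper does not specify which ICC variant was used.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score


def cronbach_alpha(items: np.ndarray) -> float:
    """items: documents x instrument items, for a single rater."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))


def icc_single(scores: np.ndarray) -> float:
    """scores: documents x raters; two-way ANOVA decomposition."""
    n, k = scores.shape
    grand = scores.mean()
    msr = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    msc = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    mse = (((scores - grand) ** 2).sum()
           - msr * (n - 1) - msc * (k - 1)) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


rng = np.random.default_rng(0)
rater1 = rng.integers(1, 6, size=(60, 16))                 # 60 documents x 16 items
rater2 = np.clip(rater1 + rng.integers(-1, 2, size=rater1.shape), 1, 5)

kappa = cohen_kappa_score(rater1[:, 0], rater2[:, 0])      # agreement on one item
icc = icc_single(np.column_stack([rater1.sum(axis=1), rater2.sum(axis=1)]))
alpha = cronbach_alpha(rater1)
# In the study, Pearson's r compared DISCERN and EQIP totals for the same
# rater; here it is shown on the two raters' totals for brevity.
r, p = pearsonr(rater1.sum(axis=1), rater2.sum(axis=1))
print(f"kappa={kappa:.2f}  ICC={icc:.2f}  alpha={alpha:.2f}  r={r:.2f} (p={p:.3f})")
```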

Results

Readability analysis

In this study the mean INFLESZ index was 71.96 (SD 4.7), which implies that the documents were easy to read. Any result above 55 points on this tool is considered easy to read. Although most documents (>67%) had an easy readability level, a few documents were rated as difficult or very difficult to read (21% and 8%, respectively). Table 1 displays the results of the INFLESZ index according to the five groups and provides a comparison with the corresponding education level and the reading grade of Spanish publications.

Table 1

Readability results using the INFLESZ index

Concordance analysis (Kappa coefficients and intra-class correlation coefficient)

Concordance was defined as the degree of agreement between the two raters and was calculated for each scale. Table 2 shows the results of the agreement analysis (using the kappa coefficient to adjust for the effect of chance on the observed concordance) for both instruments, including all items. Although the degree of agreement was considered high for both DISCERN and EQIP, the kappa for DISCERN was notably lower than that for EQIP (fair and good agreement, respectively).

Table 2

Kappa coefficients and ICC for DISCERN and EQIP

The ICC was calculated for the overall score of each instrument. The ICC, defined as the proportion of total variability that is due to variability between subjects, allows assessment of the general agreement between two or more measurement or observation methods based on a repeated-measures analysis of variance (ANOVA) model. High concordance was seen with both instruments: the DISCERN ICC was 0.81 (95% CI 0.70–0.88) and the EQIP ICC was 0.78 (95% CI 0.66–0.86). Values above 0.75 are considered to indicate excellent concordance.

Reliability and correlation analysis between raters

Reliability was defined as the ability of an instrument to measure consistently and was calculated using the Cronbach alpha coefficient. The closer the value of Cronbach alpha is to 1, the greater the internal consistency of the instrument. As seen in table 3, DISCERN showed higher reliability (Cronbach alpha coefficient >0.77) than EQIP for both raters.

Table 3

Cronbach alpha coefficient and mean score of EQIP and DISCERN

For the EQIP scale, the mean score was 44% for both raters, implying that the quality needed to improve and that the documents should be reviewed and replaced within 6 months to 1 year (table 3). In contrast, the overall mean score for the documents on the DISCERN scale was 55.4% for rater 1 and 51% for rater 2, which is considered very good in both instances.

The correlation between the two instruments was calculated using Pearson’s coefficient (r), where r=1 indicates perfect positive correlation. The strength of association between DISCERN and EQIP was 0.85 for rater 1 and 0.69 for rater 2, indicating a moderate to strong correlation.

Discussion

More than two-thirds of the documents were rated as easy to read, implying that they could be understood by a person with a primary school education level. Their readability level is similar to that of tabloids or best-selling books. Nevertheless, about one-third of the documents were rated as difficult or very difficult to read. This could be considered an accessibility barrier for certain citizens and should be addressed in order to improve the overall quality of the documents produced. These results are slightly better than those from other studies, which have reported readability problems in online Spanish healthcare information.25 26 Nevertheless, readability tools only count words and sentence length, so experts recommend combining them with other techniques to evaluate quality and scientific evidence.8

We used the EQIP and DISCERN tools for the quality analysis as they are validated and have been used to assess patient information on medicines and drug choices. These instruments were developed in English, and we have not identified any similar quality-assessment tool that has been validated in Spanish or Catalan. Our results show higher reliability and concordance with DISCERN when it is used to evaluate the quality of online patient drug information, as has also been observed in other studies.22 27

Although the global agreement calculated using the ICC is higher for DISCERN, EQIP shows better mean kappa values for the individual items of the instrument. The fact that the DISCERN five-point scale “forces” the rater to score an item even when it does not apply may explain this difference, whereas EQIP includes a “does not apply” option that allows the assessment to be restricted to the items that are directly relevant. Another contributing factor may be differing standards between raters regarding layout and document design, which can be interpreted differently depending on how the information is presented (printable PDF or standard website HTML format).

Another limitation seen in our study is that results are only applicable to certain types of documents from the institutional website written in Catalan and Spanish and cannot be extrapolated to all patient drug information accessible online.

Although the analysis indicates that EQIP is a useful tool and addresses items such as layout and language, we would favour the use of DISCERN based on the results. This tool is especially useful for assessing documents or information developed to support shared decision making concerning drugs.

This retrospective analysis supports the workflow implemented 5 years ago and encourages the continued systematic development of documents. Nonetheless, in light of the EQIP results, the multidisciplinary team has decided to review the material annually.

The strength of this paper is that the documents were written by a multidisciplinary team involving doctors, nurses, pharmacists, civil servants, journalists, and professionals trained in communication, and therefore incorporate a wider and more realistic vision of the healthcare system. This approach allows the detection of information needs, the selection of the topics of most interest to patients, and the production and validation of documents until they are fit for purpose. As patient involvement in the development of drug information is considered key,28 we decided to use a user-testing method. Pioneered by Professor David Sless in Australia in the 1990s, user-testing is a specific method to assess whether people can find and understand key information, and it can markedly improve document performance.29

Information about medicine that is well written, unbiased, reliable, easy to understand, and practical can increase people’s knowledge and comprehension. This will enable patients:5 30

  • To improve pharmacotherapy literacy (an individual’s ability to obtain, evaluate, calculate, comprehend, and properly act on patient-specific information concerning pharmacotherapy and pharmaceutical services necessary to make appropriate medication-related decisions, regardless of the mode of content delivery)

  • To use drugs safely and effectively

  • To make informed decisions about medicines and their treatments.

Conclusion

The institutional website aims to encourage patients’ self-care and to promote the rational use of drugs and the responsible use of healthcare services by offering citizens scientific, objective, and up-to-date information about drugs and treatments. Readability and quality appraisal are key elements of the continuous improvement plan that underpins this offer.

The analysis confirmed the good readability and quality of the documents produced over the past few years by a multidisciplinary team (comprising doctors, nurses, pharmacists, public administration agents, journalists, and experts in health communication) and validated by patients. Although both EQIP and DISCERN are useful and widely used tools, according to our results we would favour DISCERN to evaluate patient drug information.

For those organisations interested in developing patient information concerning medicines, drug choices, and treatments, we would recommend the multidisciplinary team workflow and the user-testing process to ensure documents are “fit for purpose”.

What this paper adds

What is already known on this subject

  • Readability and quality of patient drug information are key elements to appraise in order to determine whether the information is fit for purpose.

  • EQIP and DISCERN are validated and useful tools in the quality assessment of patient information about treatment and treatment choices.

What this study adds

  • A multidisciplinary team (pharmacists, doctors, nurses, journalists, and other experts in healthcare communication) and a user-testing process should be considered when developing quality patient drug information.

Data availability statement

Data are available upon request.

References

Footnotes

  • EAHP Statement 6: Education and Research.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.