Measuring experience
Evidence review L
NICE Guideline, No. 204
Authors
National Guideline Alliance (UK)
Review question
How can the experience of babies, children and young people be measured so as to improve their experience of healthcare?
Introduction
In order to provide a good experience of healthcare, and to continually improve that experience, healthcare services need to review the experiences of those using the services, and act on this feedback to make changes to their services. Babies, children and young people may have different needs, experiences and perceptions of healthcare services compared to adults and it is important to capture these, and not rely on feedback from an adult-only population. In addition, babies, children and young people may have specific needs with respect to the methods used to obtain this feedback.
The aim of this question is to determine the best ways to measure the healthcare experience of babies, children and young people.
Summary of the protocol
See Table 1 for a summary of the Population, Intervention, Comparison and Outcome (PICO) characteristics of this review.
Table 1
Summary of the protocol (PICO table).
For further details see the review protocol in appendix A.
Methods and process
This evidence review was developed using the methods and process described in Developing NICE guidelines: the manual. Methods for this review question are described in the review protocol in appendix A and the methods supplement.
Clinical evidence
Included studies
This was a quantitative review with the aim of:
- Determining if there is evidence to support the use of a particular method of collecting feedback on the healthcare experience of babies, children and young people.
A systematic review of the literature was conducted. One study (Horn 2010), a cluster-randomised controlled trial (RCT), was included in this review. This study compared different formats of questionnaire administration across 3 Child and Adolescent Mental Health Service (CAMHS) teams.
The included study is summarised in Table 2.
See the literature search strategy in appendix B and study selection flow chart in appendix C.
Excluded studies
Studies not included in this review are listed, and reasons for their exclusion are provided in appendix K.
Summary of studies included in the evidence review
A summary of the study included in this review is presented in Table 2.
Table 2
Summary of included study.
See the full evidence tables in appendix D. No meta-analysis was conducted (and so there are no forest plots in appendix E).
Summary of the evidence
Evidence was found for 1 of the pre-defined critical outcomes set out in the protocol: response rate. No evidence was found for acceptability to respondent, mode effect or time taken to complete survey.
The included cluster-RCT compared response rates for two questionnaires, the Strengths and Difficulties Questionnaire (SDQ) and the Experience of Service Questionnaire (ESQ), sent out 6 months after use of the Child and Adolescent Mental Health Service (CAMHS) during a 3-month baseline period and a 3-month intervention period. Interventions included a control intervention, which consisted of general administration improvements (centralisation of questionnaire administration, quality improvements to questionnaires, covering letters and information sheets, and increased presentation of feedback in patient waiting areas), and two additive interventions, which consisted of the general improvements plus either a postal reminder only or both postal and telephone reminders. After adjustment for the cluster-RCT design, assuming conservative intra-class correlations (ICCs), there were no significant differences in response rates between any of the three CAMHS teams during the 3-month intervention period.
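The clustering adjustment described above follows the standard design-effect calculation, in which the effective sample size is deflated by DEFF = 1 + (m − 1) × ICC for average cluster size m. A minimal sketch in Python; the cluster sizes and ICC values used in the demonstration are hypothetical, not figures reported by Horn 2010:

```python
# Design-effect adjustment for a cluster-randomised trial.
# The effective sample size shrinks by DEFF = 1 + (m - 1) * ICC,
# where m is the average cluster size and ICC the intra-class correlation.
# All numeric values below are illustrative only, not taken from Horn 2010.

def design_effect(mean_cluster_size: float, icc: float) -> float:
    """Design effect for approximately equal-sized clusters."""
    return 1 + (mean_cluster_size - 1) * icc

def effective_sample_size(n_total: float, mean_cluster_size: float, icc: float) -> float:
    """Total sample size deflated for within-cluster correlation."""
    return n_total / design_effect(mean_cluster_size, icc)

if __name__ == "__main__":
    n = 268        # trial N; split into 3 teams here is a hypothetical example
    m = n / 3      # average cluster size if participants were spread evenly
    for icc in (0.01, 0.05, 0.1):
        print(f"ICC={icc}: DEFF={design_effect(m, icc):.2f}, "
              f"effective n={effective_sample_size(n, m, icc):.0f}")
```

With only 3 large clusters, even a modest ICC deflates the effective sample size substantially, which is why the adjusted analysis found no significant differences.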
Quality assessment of studies included in the evidence review
See the evidence profiles in appendix F.
Evidence from reference groups and focus groups
The children and young people’s reference groups and focus groups provided additional evidence for this review. A summary of the findings is presented in Table 3.
Table 3
Summary of the evidence from reference groups and focus groups.
See the full evidence summary in appendix M.
Evidence from national surveys
The grey literature review of national surveys provided additional evidence for this review. A summary of the findings is presented in Table 4.
Table 4
Summary of the evidence from national surveys.
See the full evidence summary in appendix N.
Economic evidence
Included studies
One economic study was identified which was relevant to this question (Horn 2010).
A single economic search was undertaken for all topics included in the scope of this guideline. See supplementary material 6 for details.
Excluded studies
Economic studies not included in this review are listed, and reasons for their exclusion are provided in appendix K.
Summary of studies included in the economic evidence review
Horn 2010 was a cost-effectiveness study conducted in the UK. The economic evaluation was conducted alongside an RCT (N = 268). The study compared three strategies: mailing only, mailing plus postal reminder, and mailing plus postal reminder plus telephone reminder. The study population comprised families who had used CAMHS. The study took a narrow NHS perspective and included only costs associated with administration, calls, stamps, and business reply envelopes. The time horizon was 4 weeks. The study reported outcomes in terms of cost per returned completed questionnaire. The study found that mailing plus postal reminder was ruled out by extended dominance, and that the incremental cost-effectiveness ratio of mailing plus postal reminder plus telephone reminder (versus mailing only) was £10.52 per additional completed and returned questionnaire.
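The extended-dominance reasoning used in this kind of analysis can be sketched as follows. A middle strategy is extendedly dominated when the next more effective strategy buys additional effect at a lower incremental cost than the middle strategy does, so the ICERs are not increasing. The costs and return counts below are hypothetical placeholders (only the £10.52 ICER comes from the study itself):

```python
# Incremental cost-effectiveness with extended dominance, as used when
# comparing questionnaire follow-up strategies. All costs and counts are
# HYPOTHETICAL illustrations, not figures from Horn 2010 (which reported
# an ICER of GBP 10.52 per additional returned questionnaire for the full
# reminder strategy versus mailing only).

def icer(cost_low, effect_low, cost_high, effect_high):
    """Incremental cost per additional unit of effect."""
    return (cost_high - cost_low) / (effect_high - effect_low)

# (total cost in GBP, completed questionnaires returned), ordered by cost
mail_only = (100.0, 40)
mail_postal = (180.0, 44)
mail_postal_phone = (300.0, 60)

icer_mid_vs_low = icer(*mail_only, *mail_postal)           # 80 / 4  = 20.0
icer_high_vs_mid = icer(*mail_postal, *mail_postal_phone)  # 120 / 16 = 7.5

# Extended dominance: the middle strategy is ruled out because the more
# effective strategy achieves extra returns at a lower incremental cost
# per return than the middle strategy does.
mid_extendedly_dominated = icer_high_vs_mid < icer_mid_vs_low
print(icer_mid_vs_low, icer_high_vs_mid, mid_extendedly_dominated)
```

Once the middle strategy is excluded, the reported ICER compares the most effective strategy directly against the cheapest, which is why the study quotes telephone-plus-postal reminder versus mailing only.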
See the economic evidence tables in appendix H and economic evidence profiles in appendix I.
Economic model
No economic modelling was undertaken for this review because the committee agreed that other topics were higher priorities for economic evaluation.
The committee’s discussion of the evidence
Interpreting the evidence
The outcomes that matter most
The aim of this review was to identify the best method to measure babies, children and young people's experience of healthcare, and therefore acceptability to the respondent and response rate were prioritised as critical outcomes by the committee. Acceptability is paramount to ensure healthcare services are not using experience measurement tools that children or young people find difficult to complete, or do not want to complete. Survey response rate is linked to acceptability but is also a critical outcome in its own right: measurement tools with a high response rate are likely to produce the most representative and informative data.
The committee also agreed that measurements relating to the mode effect of a survey question (that is, data accuracy and data equivalence) and the time taken to complete the survey were important outcomes to capture. Mode effect was considered important because it could considerably affect the reliability of an experience measurement tool when transferred between different formats. Time taken to complete the survey was also considered an important outcome because long surveys can cause respondent fatigue, with respondents tending to provide incomplete or less accurate responses the further through the questionnaire they get. Additionally, the time needed to administer the questionnaire is a consideration for the healthcare practitioner implementing it.
The quality of the evidence
The quality of included studies was assessed using GRADE methodology. Evidence was considered very low quality. There were concerns regarding risk of bias, namely the lack of blinding (both possible assessment and measurement bias), and indirectness (a specific population of children and young people attending CAMHS). Importantly, families were sent two types of questionnaire: a health outcomes questionnaire and an experience questionnaire. A 'return' was counted if either of these was returned, and no further information was provided regarding the proportions of returned questionnaire types. Additionally, very serious imprecision was found in the estimates of effect for all comparisons, due to the small adjusted sample size of the included cluster-RCT.
It should also be noted that the questionnaires sent 6 months after use of the service were addressed to families of CAMHS attendees. No information was provided about who received the correspondence, or who answered the questionnaire (for example, whether it was the young person or another family member).
No effectiveness data was found for acceptability to respondent, mode effect, or time taken to complete the survey.
Benefits and harms
The committee noted that the included study (Horn 2010) suggested that telephone follow-up may increase response rates to a postal survey compared to the control group, and compared to a postal reminder, but that due to the limitations with this study, such as the uncertainty over who had actually completed the surveys, the small study size, and the overall very low quality of the study, they were unable to use it as a basis for specific recommendations.
In addition to the evidence from the systematic review, there was also evidence from the reference and focus groups and from the national surveys of children and young people's experience. The reference group of 11 to 14 year olds suggested a variety of methods for collecting feedback from children and young people, including face-to-face discussion, a variety of computer-based methods, audio surveys, token voting boxes, and surveys delivered by other young people. The reference group thought it was best to carry out experience surveys while children or young people were still receiving care, rather than leaving it until later. The reference group also provided suggestions for the questions that should be included in experience surveys, including numerical rankings, closed questions and open questions. The young people thought the surveys should be easy, smooth, positive, simple and quick.
The evidence from the national surveys of children and young people’s experience also found that young people were keen to provide feedback (positive and negative) on healthcare experience. There were a variety of views on the timing for collecting feedback but most young people thought it should be at regular intervals during treatment, or at the end of treatment. As with the reference groups there were suggestions on the methods and content of the feedback – with most support for open questions or qualitative questions, and for questions covering many aspects of healthcare provision such as interactions with healthcare professionals, involvement in decision-making, the environment, food, privacy, and entertainment and social activities. There was also evidence that children and young people wanted to be told how their feedback had been implemented.
There were also a number of comments on complaints systems: young people reported that these were difficult to access, that there were barriers to making complaints, that no action appeared to be taken, and that complaints could lead to repercussions. The committee therefore made a specific recommendation on the provision of accessible complaints systems for children and young people.
Based on the evidence from the reference and focus groups and the review of national surveys, as well as their knowledge and expertise, the committee agreed that it was possible to make some good practice recommendations. They agreed that it was good practice to collect feedback from children and young people, and the parents or carers of babies and young children, and that while some NHS organisations already had these systems in place, making a recommendation to this effect would encourage all organisations to do this. The committee also noted the importance of the United Nations Convention on the Rights of the Child which states that children have the right to ‘express their views, feelings and wishes in all matters relating to them, and to have their views considered and taken seriously.’ They agreed that this recommendation was therefore in accordance with this convention.
The committee did not feel that the evidence was strong enough to recommend one method of collecting feedback over another, but agreed that some common principles could be applied: developing the assessment tools in conjunction with babies, children and young people to make sure they are acceptable; adapting tools so that those with disabilities or communication difficulties can provide feedback; obtaining feedback at an appropriate time; obtaining feedback from a representative population (for example, children from under-represented groups, and parents and carers to represent babies); and using techniques to maximise response rates. The committee discussed how to identify these under-represented groups and were aware of proactive methods that could be used, such as outreach work to engage and seek the opinions of people who are not accessing services, targeting economically deprived areas, using the index of multiple deprivation for schools and home addresses, and using snowball sampling.
The committee drew on the evidence for review question 5.1 which had identified that children and young people want feedback on their input, and the effect it has had on the design of services. The committee agreed that this would also be likely to be true about information collected about healthcare experience, and therefore they made a recommendation that information should be fed back to children and young people and parents or carers of babies and young children about the actions that had been taken.
The committee discussed if there were any potential harms from their recommendations. They identified that the need to provide feedback, especially if follow-up mechanisms were used to improve response rates, may lead to children and young people feeling under pressure to complete surveys at times in their life where they are dealing with health issues. The committee were also concerned that it was difficult to obtain responses from a truly representative sample (for example those with no fixed abode, those who had communication difficulties) and that this might lead to the results of surveys not being reflective of the whole population of babies, children and young people.
Due to the lack of published evidence available for this review and the fact that there are no comparative studies comparing different methods of measuring healthcare experience, the committee made a research recommendation.
Cost effectiveness and resource use
Evidence from a published cost-effectiveness study showed that, if complete and accurate telephone contact details were available, mailing plus postal reminder plus telephone reminder was potentially a cost-effective way to increase response rates compared with mailing only and mailing plus postal reminder. However, it was difficult to judge cost effectiveness because the cost per quality-adjusted life year (QALY) could not be estimated from the information provided in the publication. Moreover, this evidence was based on a single small UK study with potentially severe limitations, and the committee could not draw any firm conclusions from it.
The committee discussed the fact that administering any method of measuring experience, and then analysing the results, would have associated costs, although these would be likely to be similar across the different methods. Feedback data is routinely collected across most of the health service and therefore the recommendations do not represent a change in practice. Actions taken in response to this feedback may also have associated costs; however, this would apply to all methods. The committee also noted that feedback from service users sometimes suggests ways that costs could be reduced, or points out things that do not need to be done, potentially resulting in cost savings to the health service.
Other factors the committee took into account
The committee discussed the lack of evidence available for this review, and were aware that there was research on different methods of measuring the experience of babies, children and young people, but these were non-comparative studies and so had not met the protocol criteria for inclusion in the review.
The committee were also aware of an ongoing study comparing the administration of the National Inpatient Survey via online and paper formats, but it was unlikely this study would be published before the publication of the guideline.
The committee were aware that the right to complain about healthcare services, including how complaints should be handled, is contained within the NHS Constitution, and therefore they did not make detailed recommendations about complaints systems.
Recommendations supported by this evidence review
This evidence review supports recommendations 1.7.5 to 1.7.9 and the research recommendation on measuring experience of healthcare.
References
Horn 2010
Horn, R., Jones, S., Warren, K., The Cost-Effectiveness of Postal and Telephone Methodologies in Increasing Routine Outcome Measurement Response Rates in CAMHS, Child and Adolescent Mental Health, 15(1), 60–63, 2010. [PubMed: 32847211]
Appendices
Appendix A. Review protocol
Appendix B. Literature search strategies
Appendix C. Clinical evidence study selection
Appendix D. Clinical evidence tables
Appendix E. Forest plots
Forest plots for review question: How can the experience of babies, children and young people be measured so as to improve their experience of healthcare?
No meta-analysis was conducted for this review question and so there are no forest plots.
Appendix F. GRADE tables
Appendix G. Economic evidence study selection
Economic evidence study selection for review question: How can the experience of babies, children and young people be measured so as to improve their experience of healthcare?
One global search was conducted for this review question. See supplementary material 6 for further information.
Appendix H. Economic evidence tables
Appendix I. Economic evidence profiles
Appendix J. Economic analysis
Economic evidence analysis for review question: How can the experience of babies, children and young people be measured so as to improve their experience of healthcare?
No economic analysis was conducted for this review question.
Appendix K. Excluded studies
Excluded studies for review question: How can the experience of babies, children and young people be measured so as to improve their experience of healthcare?
Clinical studies
Table 13
Excluded studies and reasons for their exclusion.
Economic studies
Table 14
Excluded studies and reasons for their exclusion.
Appendix L. Research recommendations
Appendix M. Evidence from reference groups and focus groups
Appendix N. Evidence from national surveys
Final
Evidence reviews underpinning recommendations 1.7.5 to 1.7.9 and research recommendations in the NICE guideline
These evidence reviews were developed by the National Guideline Alliance which is a part of the Royal College of Obstetricians and Gynaecologists
Disclaimer: The recommendations in this guideline represent the view of NICE, arrived at after careful consideration of the evidence available. When exercising their judgement, professionals are expected to take this guideline fully into account, alongside the individual needs, preferences and values of their patients or service users. The recommendations in this guideline are not mandatory and the guideline does not override the responsibility of healthcare professionals to make decisions appropriate to the circumstances of the individual patient, in consultation with the patient and/or their carer or guardian.
Local commissioners and/or providers have a responsibility to enable the guideline to be applied when individual health professionals and their patients or service users wish to use it. They should do so in the context of local and national priorities for funding and developing services, and in light of their duties to have due regard to the need to eliminate unlawful discrimination, to advance equality of opportunity and to reduce health inequalities. Nothing in this guideline should be interpreted in a way that would be inconsistent with compliance with those duties.
NICE guidelines cover health and care in England. Decisions on how they apply in other UK countries are made by ministers in the Welsh Government, Scottish Government, and Northern Ireland Executive. All NICE guidance is subject to regular review and may be updated or withdrawn.