
Evidence reviews for computer-based tools for speech and language therapy

Stroke rehabilitation in adults (update)

Evidence review K

NICE Guideline, No. 236

London: National Institute for Health and Care Excellence (NICE); 2023.
ISBN-13: 978-1-4731-5460-5
Copyright © NICE 2023.

1. Computer-based tools for speech and language therapy

1.1. Review question

In people with aphasia after stroke, what is the clinical and cost effectiveness of computer-based tools to augment speech and language therapy?

1.1.1. Introduction

Speech and language therapy after stroke is provided in hospitals and in the community to help people with resulting communication disorders to improve their speech/language impairment, their ability to communicate and their participation in everyday roles and activities. It is generally accepted that improvement requires practice, and that rehabilitation is more effective in higher doses. Providing therapy and practice opportunities in sufficient dose can be a challenge in clinical practice due to limitations on therapy resources and the distance between patients and therapists in some community settings. In addition, people with communication needs often wish to continue to work on their speech/language for longer than therapy is available and look for alternative ways to support them in doing this. A growing number of computer software programmes, apps and online therapy tools are commercially available (see the aphasia therapy software finder: https://www.aphasiasoftwarefinder.org). These tools are used by some therapists and patients to increase therapy practice opportunities, either as home practice between therapy sessions or after face-to-face therapy has ended. Computer tools also offer a large range of practice material, which can be personalised, and some tools provide useful feedback.

This review has been prompted by publication of new evidence about effectiveness, and by an increasing interest in using computer tools to increase dose and to provide therapy remotely as was required during the COVID-19 pandemic.

1.1.2. Summary of the protocol

Table 1. PICO characteristics of review question.

Table 1

PICO characteristics of review question.

For full details see the review protocol in Appendix A.

1.1.3. Methods and process

This evidence review was developed using the methods and process described in Developing NICE guidelines: the manual. Methods specific to this review question are described in the review protocol in Appendix A and the methods document.

Declarations of interest were recorded according to NICE’s conflicts of interest policy.

1.1.4. Effectiveness evidence

1.1.4.1. Included studies

Twenty-two randomised controlled trials (including 2 cross-over trials and 3 quasi-randomised trials), reported across 27 papers, were included in the review;4, 6-8, 11, 13, 14, 19, 20, 23-28, 33, 37, 39, 42, 45, 47, 48 these are summarised in Table 2 below. Evidence from these studies is summarised in the clinical evidence summary below (Table 3).

Three quasi-randomised trials6, 25, 47 were included. Due to the limited evidence investigating computer-based tools for speech and language therapy, it was agreed to include these studies and to downgrade them appropriately for risk of bias arising from the randomisation process. Evidence was available for all outcomes apart from carer generic health-related quality of life.

Population factors

The majority of studies included people with aphasia4, 6-8, 11, 13, 14, 19, 20, 23-26, 33, 39, 42, 47, 48. However, studies occasionally included a mixture of people with aphasia or cognitive communication difficulties27, a mixture of people with aphasia or aphasia and apraxia of speech37, people with dysarthria28 or people with apraxia of speech45. Severity of communication difficulty was rarely reported; where it was, studies included people with mild communication difficulties39 or a mixture of severities37, 42. Additionally, the majority of studies included people in the chronic phase after stroke4, 6-8, 11, 13, 14, 19, 25-27, 37, 39, 45, 47, with only occasional studies including people in the subacute phase or a mixture of people in the chronic and subacute phases20, 23, 28, 33, 48.

Types of computer-based tools

The types of computer-based tools used varied between studies with no consistently used interventions. The method of therapy used varied including:

  • Word finding therapy4, 19, 37, 47
  • Reading therapy6-8
  • Comprehension therapy14
  • Expressive language/communication26, 45
  • Articulation therapy28
  • Other (cognitive therapy)23
  • Combinations of approaches11, 13, 20, 24, 25, 27, 33, 39, 42, 48

There was a mixture of therapies being delivered in person6, 7, 11, 13, 14, 20, 23, 39, 42, remotely4, 8, 24-28, 33, 45, 47, 48 (implementing telerehabilitation technology) or a combination of both37.

Intensity of therapy

The therapies were delivered at a range of different intensities, with the total amount of therapy ranging from 10 hours or less to 30 hours or more (see Table 2 for details of individual studies).

Inconsistency

The majority of outcomes included only one study; however, meta-analysis was occasionally possible and sometimes resulted in heterogeneity. This could not be resolved by subgroup or sensitivity analysis, as the majority of outcomes contained an insufficient number of studies to allow valid conclusions to be drawn from these analyses. Therefore, the affected outcomes were downgraded for inconsistency.

See also the study selection flow chart in Appendix C, study evidence tables in Appendix D, forest plots in Appendix E and GRADE tables in Appendix F.

1.1.4.2. Excluded studies

Two Cochrane reviews3, 46 were identified and excluded from this review. Brady 20163 was excluded because it included all speech and language therapy studies for people with aphasia, rather than only those implementing computer-based tools. West 200546 was excluded because it included all speech and language therapy studies for people with apraxia of speech, rather than only those implementing computer-based tools. In both cases, the citation lists were checked and relevant studies were included where appropriate.

See the excluded studies list in Appendix J.

1.1.5. Summary of studies included in the effectiveness evidence

Table 2. Summary of studies included in the evidence review.

Table 2

Summary of studies included in the evidence review.

See Appendix D for full evidence tables.

1.1.5.1. Summary matrix
Table 3. Summary matrix of computer-based tools for speech and language therapy compared to each comparison group.

Table 3

Summary matrix of computer-based tools for speech and language therapy compared to each comparison group.

1.1.6. Summary of the effectiveness evidence

Table 4. Clinical evidence summary: computer-based tools for speech and language therapy compared to speech and language therapy without computer-based tools (usual care).

Table 4

Clinical evidence summary: computer-based tools for speech and language therapy compared to speech and language therapy without computer-based tools (usual care).

Table 5. Clinical evidence summary: computer-based tools for speech and language therapy compared to social support/stimulation.

Table 5

Clinical evidence summary: computer-based tools for speech and language therapy compared to social support/stimulation.

Table 6. Clinical evidence summary: computer-based tools for speech and language therapy compared to no treatment.

Table 6

Clinical evidence summary: computer-based tools for speech and language therapy compared to no treatment.

Table 7. Clinical evidence summary: computer-based tools for speech and language therapy compared to placebo.

Table 7

Clinical evidence summary: computer-based tools for speech and language therapy compared to placebo.

See Appendix F for full GRADE tables.

1.1.7. Economic evidence

1.1.7.1. Included studies

Two health economic studies were included in this review.21, 22 These studies were economic evaluations of a pilot feasibility trial (CACTUS)39 and a randomised controlled trial (Big CACTUS)37 of the StepByStep computer program, both of which were included in the clinical review. Both economic analyses were included in the review because the interventions differed:

  • the CACTUS trial assessed computer exercises (recommended 3 days per week over a 5-month period) that contained a combination of word finding and reading therapies,
  • while Big CACTUS assessed word-finding therapy computer exercises only and recommended that participants practise daily over a 6-month period.

These studies are summarised in the health economic evidence profile below (Table 8) and the health economic evidence table in Appendix H.

1.1.7.2. Excluded studies

No relevant health economic studies were excluded due to assessment of limited applicability or methodological limitations.

See also the health economic study selection flow chart in Appendix G.

1.1.8. Summary of included economic evidence

Table 8. Health economic evidence profile: Computer-based tools for speech and language therapy versus usual care.

Table 8

Health economic evidence profile: Computer-based tools for speech and language therapy versus usual care.

1.1.9. Economic model

This area was not prioritised for new cost-effectiveness analysis.

1.1.10. Unit costs

Relevant unit costs are provided below to aid consideration of cost effectiveness.

Table 9. Unit costs of health care professionals who may be involved in delivering interventions involving computer-based tools for speech and language therapy.

Table 9

Unit costs of health care professionals who may be involved in delivering interventions involving computer-based tools for speech and language therapy.

Interventions involving computer-based tools for speech and language therapy require additional resource use over usual care. Studies included in the clinical review reported varied resource use (see Table 2 for details). Key differences in resource use were due to the following factors:

  • The type of computer tool used varied across studies; Table 10 provides example costs associated with some of the tools that were assessed in the clinical review, with the cost per patient depending on both the type of software and whether multiple licences are purchased at once.
  • Variation in the method of delivery of therapy sessions: there was a mixture of studies assessing therapies delivered either in person or remotely, with one reporting a combination of both37. Therapy delivered remotely is considered to be less resource intensive than face-to-face therapy.
  • The frequency and duration of the intervention being delivered, with sessions ranging from 20-90 minutes and occurring 2-6 days per week. In the included clinical studies, the interventions were delivered for between 4 and 13 weeks.
  • The staff who delivered the intervention varied, with studies reporting the use of physiotherapists, occupational therapists or trained instructors. Palmer 202038 reported the use of SLTs and SLT assistants as well as trained volunteers to deliver the intervention.
  • Study setting: interventions were conducted in hospitals, community centres, and outpatient rehabilitation centres. Non-clinical settings will incur lower or no costs compared to clinical settings.
  • Additional resource use required to deliver the intervention, such as staff-training costs and information or instructional materials. Table 11 shows the summary costs provided in Marshall 2020,26 which assessed the home-based EVA Park virtual reality program. This study also calculated that the total per-participant cost of the intervention (assuming 16 participants) was £1,364 when including hardware costs, and £114 for an average online attendance (excluding hardware); an illustrative per-participant calculation of this kind is sketched below.
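
As an illustration of how a per-participant figure of this kind can be derived, the short Python sketch below spreads one-off hardware costs across the cohort and adds per-participant support costs. The split between hardware and support shown here is a hypothetical placeholder, chosen only so that the illustrative total matches the reported £1,364 per participant; the actual breakdown is given in Table 11 and will differ.

def per_participant_cost(one_off_costs, support_cost_per_participant, n_participants):
    """Spread one-off (e.g. hardware) costs across the cohort and add per-participant costs."""
    return one_off_costs / n_participants + support_cost_per_participant

one_off_hardware = 8000          # hypothetical: computers, headsets, virtual-world set-up
support_per_participant = 864    # hypothetical: SLT/volunteer support, licences, training

total = per_participant_cost(one_off_hardware, support_per_participant, n_participants=16)
print(f"Illustrative cost per participant (including hardware): £{total:,.0f}")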

Table 10. Example costs of computer-based tools for the treatment of aphasia.

Table 10

Example costs of computer-based tools for the treatment of aphasia.

Table 11. Summary costs from Marshall 2020.

Table 11

Summary costs from Marshall 2020.

1.1.11. Evidence statements

Effectiveness/Qualitative
Economic
  • One cost-utility analysis found that in post-stroke adults with aphasia, computerised word-finding therapy was not cost-effective when compared to usual care alone (ICER of £42,686 per QALY gained) or when compared to attention control plus usual care (ICER of £40,164 per QALY gained). This study was assessed as directly applicable with potentially serious limitations.
  • One cost-utility analysis found that in post-stroke adults with aphasia, computerised word-finding and reading therapy was cost-effective when compared to usual care alone (ICER of £3,058 per QALY gained). This study was assessed as partially applicable with potentially serious limitations.

1.1.12. The committee’s discussion and interpretation of the evidence

1.1.12.1. The outcomes that matter most

The committee included the following outcomes: person/participant generic health-related quality of life, carer generic health-related quality of life, communication outcomes, including: overall language ability, impairment specific measures (such as naming, auditory comprehension, reading, expressive language and speech impairment and activity for people with dysarthria) and functional communication, communication related quality of life, psychological distress (including depression, anxiety and distress) and discontinuation. All outcomes were considered equally important for decision making and therefore have all been rated as critical.

Person/participant health-related quality of life outcomes were considered particularly important as a holistic measure of the impact on the person’s quality of life. However, the committee acknowledged that generic measures may be more responsive to physical changes after stroke and less responsive to communication changes, and this may affect the interpretation of the outcome. In particular, for EQ-5D, the committee noted that there are no subscales specific to communication, which makes it hard to relate to speech and language therapy. In response to this, communication related quality of life scores were also included. Communication outcomes were key to this review as a direct answer to the question. Psychological distress was included because people with communication difficulties can experience significant psychological distress that may be alleviated by the treatment. Discontinuation was considered as a measure of adherence to the treatment, with the acknowledgement that there are unlikely to be significant adverse events as a result of the treatment. Mortality was not considered as it was deemed unlikely to be a result of the treatment. However, if mortality was a reason for discontinuation, this was highlighted to the committee during their deliberations.

The committee chose to investigate these outcomes at less than 3 months and at more than and equal to 3 months, as they considered that there could be a difference between the short-term and long-term effects of the interventions, in particular for people who have had an acute stroke, where effects at less than 3 months could be very different from effects at greater than 3 months. With regards to communication difficulties, changes may be seen by 3 months, in contrast to other reviews for this guideline where 6 months was used.

The evidence for this question was limited, with some outcomes not being reported. No study investigated the effects of interventions on carer generic health-related quality of life, or on the anxiety and distress components of psychological distress. Outcomes were reported at both less than 3 months and more than and equal to 3 months.

1.1.12.2. The quality of the evidence

Twenty-two randomised controlled trials (including 2 cross-over trials and 3 quasi-randomised trials) were included in the review. The 3 quasi-randomised trials were included due to the limited evidence investigating computer-based tools for speech and language therapy; however, the limitations arising from the study design were reflected in the risk of bias assessment. Non-randomised studies were considered for this review, but none were identified that fulfilled the protocol criteria.

The quality of the evidence ranged from high to very low, with most of the evidence being of low quality. Outcomes were commonly downgraded due to risk of bias (mainly bias arising from the randomisation process, bias due to deviations from the intended intervention and bias due to missing outcome data) and imprecision. No outcomes were affected by indirectness.

Some outcomes were downgraded for inconsistency. However, this was less common as meta-analysis was not possible for the majority of outcomes, with only 1 study contributing to most outcomes. Where heterogeneity was identified, subgroup and sensitivity analyses did not resolve it, mainly because the limited number of studies made it impossible to form valid subgroups. In general, the majority of studies included people with aphasia, with a minority including people with dysarthria, people with apraxia of speech, or a combination of people with other communication difficulties and aphasia. The majority of studies included people in the chronic phase after stroke, with only occasional studies including people in the subacute phase. The types of computer-based tools used varied across the studies, with the majority including a combination of approaches. There was a mixture of therapies delivered in person and delivered remotely. The amount of therapy varied between studies, ranging from 10 hours or less to 30 hours or more.

The majority of the studies included a small number of participants (most including 10 to 20 participants in each study arm), while a few studies included a larger number of participants (at most around 100 participants in each study arm).

These factors introduced additional uncertainty into the results. The risk of bias did not appear to influence the direction of the effect in the trials. The committee took all these factors into account when interpreting the evidence.

The committee concluded that the evidence was of sufficient quality to make recommendations. They acknowledged the varied quality of the evidence and the heterogeneity in the interventions being compared in this analysis. The committee noted the study sizes and the variation that may occur between studies conducted outside an NHS-based healthcare setting. However, a large multi-site NIHR-funded study37 recently took place in the United Kingdom and included a health economic analysis. The study assessed a word finding computer-based therapy compared to social support/stimulation and to speech and language therapy without computer-based tools. It reported many of the outcomes included in this review and was at low risk of bias. Therefore, the committee gave this study greater consideration in their decision making.

1.1.12.2.1. Computer-based tools compared to speech and language therapy without computer-based tools

The majority of the identified evidence fell under this comparison. When compared to speech and language therapy without computer-based tools, 39 outcomes were reported, ranging from high to very low quality. Where downgraded, outcomes were commonly downgraded due to risk of bias (due to a mixture of bias arising from the randomisation process, bias due to deviations from the intended intervention, bias due to missing outcome data and bias in measurement of the outcome) and imprecision. Two outcomes were downgraded for inconsistency because they included a mixture of studies reporting zero events in at least 1 study arm and studies reporting events in both study arms.

1.1.12.2.2. Computer-based tools compared to social support/stimulation

When compared to social support/stimulation, 7 outcomes were reported, ranging from high to very low quality. Where downgraded, outcomes were commonly downgraded due to risk of bias (due to bias arising from the randomisation process) and imprecision. Two outcomes were downgraded for inconsistency, either because heterogeneity was observed and not resolved by sensitivity or subgroup analysis, or because the outcome included a mixture of studies reporting zero events in at least 1 study arm and studies reporting events in both study arms.

1.1.12.2.3. Computer-based tools compared to no treatment

When compared to no treatment, 11 outcomes were reported that ranged from low to very low quality, with the majority being of very low quality. Outcomes were commonly downgraded due to risk of bias (due to a mixture of bias arising from the randomisation process, bias due to deviations from the intended interventions, bias due to missing outcome data and bias in measurement of the outcome) and imprecision. Two outcomes were downgraded for inconsistency as heterogeneity was observed and not resolved by sensitivity analysis or subgroup analysis.

1.1.12.2.4. Computer-based tools compared to placebo

When compared to placebo, 5 outcomes were reported that ranged from low to very low quality, with the majority being of very low quality. Outcomes were commonly downgraded due to risk of bias (due to a mixture of bias arising from the randomisation process, bias due to deviations from the intended interventions, bias due to missing outcome data and bias in measurement of the outcome) and imprecision. One outcome was downgraded for inconsistency as heterogeneity was observed and not resolved by sensitivity analysis or subgroup analysis.

1.1.12.3. Benefits and harms
1.1.12.3.1. Key uncertainties

The committee agreed that there was significant heterogeneity in the interventions included in the analysis, reflecting the complexity and range of speech and language therapy needs that can be targeted by computerised therapy. The interventions varied from computer programs aiming to deliver speech and language therapy to telerehabilitation approaches aiming to support speech and language therapists in delivering therapy over long distances. A subgroup analysis comparing remote delivery with in-person delivery of therapy did not resolve any heterogeneity in the analysis. Furthermore, the types of computer programs used to deliver therapy varied significantly. While some focussed on specific methods of therapy (for example, word finding therapy), others included a mixture of approaches aiming for more holistic delivery of therapy. A subgroup analysis for the method of therapy did not resolve any heterogeneity in the analysis.

The comparators also varied within comparison groups. For the comparison of computer-based tools against speech and language therapy without computer-based tools, comparisons could be split into two categories:

  • Speech and language therapy with computer-based tools compared to equal amounts of therapy without computer-based tools (intensity and duration matched)
  • Speech and language therapy with computer-based tools in addition to speech and language therapy delivered in person compared to in person delivery only (usual care with additional computer-based tools)

The committee noted that computer-based tools for speech and language therapy would most likely not be used as the only speech and language therapy for a person. Speech and language therapy with computer-based tools can often allow for training in activities where repetition is required, but it is often harder to adapt to the person’s needs. The approach can make it harder for the person after stroke to feel they are receiving adequate attention if it is not adequately supported by a health care professional or is not person centred, and this may reduce their motivation to continue with the computer therapy. The committee noted that personalisation was possible with some computer software, but this will incur additional costs for staff to be involved with this process (including additional time with people to discuss how the therapy is going). The approaches used in the studies varied.

The committee noted that the evidence included mostly small studies with very few participants and so it was difficult to make firm conclusions about the efficacy of the intervention. The majority of interventions appeared to include components of word finding, but there were very few interventions looking at other methods of therapy. In addition, the majority of evidence was for people with aphasia with very few studies involving people with other types of speech and language difficulties (such as dysarthria and apraxia of speech). The committee agreed that additional research with larger sample sizes, computerised therapy focussed on other aspects of speech and language impairment, and ways to support use of new speech and language skills in everyday communication situations would be important for future work.

1.1.12.3.2. Computer-based tools compared to speech and language therapy without computer-based tools, social support/stimulation, no treatment and placebo

When compared to speech and language therapy without computer-based tools, clinically important benefits were seen for psychological distress – depression and for discontinuation at less than 3 months and at more than and equal to 3 months. Unclear effects, where some outcomes indicated a clinically important benefit of computer-based tools while others indicated no clinically important difference, were seen for naming at less than 3 months and at more than and equal to 3 months, and for expressive language at more than and equal to 3 months. An unclear effect, where some outcomes indicated a clinically important benefit of computer-based tools (including 30 participants) while others indicated a clinically important benefit of speech and language therapy without computer-based tools (including 198 participants), was seen for person/participant generic health-related quality of life at more than and equal to 3 months. No clinically important difference was seen for overall language ability, reading, functional communication and communication related quality of life at less than 3 months and at more than and equal to 3 months, and for auditory comprehension, expressive language, speech impairment – dysarthria and activity – dysarthria at less than 3 months. An unclear effect, where some outcomes indicated no clinically important difference while others indicated a clinically important benefit of speech and language therapy without computer-based tools, was seen for auditory comprehension at more than and equal to 3 months.

When compared to social support/stimulation, clinically important benefits were seen for naming at less than 3 months and at more than and equal to 3 months. No clinically important difference was seen in person/participant generic health-related quality of life, functional communication and communication related quality of life at more than and equal to 3 months, or in discontinuation at less than 3 months and at more than and equal to 3 months. When compared to no treatment, clinically important benefits were seen for naming and communication related quality of life at less than 3 months. No clinically important difference was seen in overall language ability, auditory comprehension, expressive language, functional communication, depression and discontinuation at less than 3 months. When compared to placebo, no clinically important difference was seen for overall language ability at less than 3 months and at more than and equal to 3 months, or for naming at less than 3 months. Clinically important harms of computer-based tools were seen in discontinuation at less than 3 months and at more than and equal to 3 months.

The committee noted that the evidence was complicated to examine due to the variety of computer-based tools being meta-analysed, which examined different techniques. The interventions had a high degree of complexity that made them difficult to fully understand using this analysis. However, the committee weighed up the benefits and the harms from the evidence available. Benefits were seen in naming for therapies that were either focussed on word finding or included word finding as a component. The committee noted that this was realistic but highlighted that this did not necessarily make a difference to a person’s ability to communicate. They noted that word finding therapy may be useful for finding specific words, but not necessarily for using those words in communication, and that extra support was required to put those words into context. No clinically important differences were seen in functional communication, which may indicate that the ability to use words in context was not achieved with these therapies.

The committee noted that the outcome reported for person/participant generic health-related quality of life was EQ-5D, which does not include a subscale specific to communication. Because of this, it is difficult to conclude whether or not the interventions are effective based on this outcome. Therefore, the committee did not give this outcome a large weighting when making recommendations.

The committee considered the clinically important harm in discontinuation when computer-based tools were compared to placebo. People dropped out for unclear reasons during the first 2 weeks of therapy in 1 study in the group using computer-based tools, which may reflect dissatisfaction with the computer-based therapy though this is uncertain. Weighing up this evidence against the potential evidence of benefits, the committee decided that the evidence of benefit outweighed the potential for harm from this. If people found that computer-based tools were not suitable for them then they could work with their therapist to explore other methods of therapy, including methods that do not use computer-based tools.

The committee agreed that computer-based tools for speech and language therapy should be used as an adjunct to speech and language therapy, not alone. There was insufficient evidence of clinically important changes in anything other than word finding. Most of the evidence was from small studies, and it was not possible to make recommendations, either positive or negative, for other uses of computer-based speech and language tools. Based on this, they agreed that computer-based tools could be considered where word finding is an important aim for the person after stroke, and that they should be used as an adjunct to therapy delivered by a speech and language therapist. However, there should also be additional research with larger sample sizes investigating the other potential uses of computer-based tools for speech and language therapy to gain a more complete understanding of the effect of the interventions.

1.1.12.4. Cost effectiveness and resource use

The economic evidence review included 2 published studies with relevant comparisons. These studies were economic evaluations of a pilot feasibility trial (CACTUS) and a randomised controlled trial (Big CACTUS) of the StepByStep computer program respectively, both of which were included in the clinical review. The StepByStep software allowed participants to receive supported, self-managed, intensive speech practice at home. Both studies were UK model-based cost-utility analyses with lifetime horizons, although the interventions differed slightly, as described in the following paragraphs.

The CACTUS trial compared the StepByStep approach (computer exercises, support from an SLT, and a volunteer who practised carryover activities face to face) to usual stimulation, which included activities that provided general language stimulation, such as communication support groups and conversation, as well as reading and writing activities. The analysis used a three-state Markov model with month-long cycles, whereby participants could transition from their initial aphasia health state to a response state (defined as a ≥17% increase in the proportion of words named correctly at 5 months), or to death. Patients in the response state could relapse to the aphasia state or die. Utility weights were assigned to the response and no-response states to estimate QALYs, which were measured using a pictorial version of EQ-5D-3L (adapted for this study to be accessible to patients with aphasia) collected at baseline and at 5 and 8 months. The 5-month utility data were then extrapolated to a lifetime horizon, with a 0.08% monthly relapse rate applied. Intervention costs included computers and microphones provided to participants, as well as StepByStep software and training for speech and language therapists (SLTs). Healthcare resource use between the two groups was also compared using patient and carer diaries collected at 5 months post-randomisation. After 5 months, resource use costs were assumed to be the same for both groups by applying 5-month resource use estimates collected from the control group. The results of the CACTUS trial suggested that StepByStep was cost-effective, with an incremental cost of only £437 for an incremental QALY gain of 0.14, producing an incremental cost-effectiveness ratio (ICER) of £3,058 per QALY gained. Probabilistic sensitivity analyses also suggested that the probability of the intervention being cost-effective was 75.8% at a £20,000 threshold. However, deterministic sensitivity analyses found that the base case results were sensitive to the utility gain (for example, a utility gain of ≤0.01 resulted in an ICER of >£20,000) and relapse rate parameters (for example, a relapse rate of >30% resulted in an ICER of >£20,000).
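
To make the model structure described above concrete, the short Python sketch below implements a generic three-state Markov cohort model of this kind (aphasia/no response, response, death), run in month-long cycles. It is illustrative only: the cycle length and the 0.08% monthly relapse rate are taken from the description above, while the monthly mortality, the utility values and the starting response proportion are hypothetical placeholders rather than the study's inputs, and discounting is omitted.

import numpy as np

# States: 0 = aphasia (no response), 1 = response, 2 = dead
relapse = 0.0008          # monthly relapse rate quoted in the text (0.08%)
mortality = 0.003         # hypothetical monthly probability of death
transition = np.array([
    [1 - mortality, 0.0,                     mortality],  # no new responses after 5 months
    [relapse,       1 - relapse - mortality, mortality],
    [0.0,           0.0,                     1.0],
])

utilities = np.array([0.55, 0.65, 0.0])  # hypothetical utilities: no response, response, dead
cohort = np.array([0.70, 0.30, 0.00])    # hypothetical cohort distribution at 5 months

qalys = 0.0
for month in range(12 * 40):             # roughly a lifetime horizon in monthly cycles
    qalys += (cohort @ utilities) / 12   # utility accrued over one month (1/12 of a year)
    cohort = cohort @ transition         # move the cohort on by one cycle

print(f"Illustrative undiscounted QALYs per person: {qalys:.2f}")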

This study was assessed as partially applicable for this review, as 2010 unit costs may not reflect the current UK NHS context and the year in which resource use estimates were collected was not reported. Potentially serious limitations were also identified, as the lifetime model was based on an RCT with a short follow-up (8 months) and focused on one piece of software, which limits interpretation for the wider evidence base identified in the clinical review. Additional limitations included: resource use estimates were taken from a self-reported questionnaire rather than from a systematic review; the utility of non-responders was assumed to be equal for both trial groups, overlooking the possibility that non-responder utility scores could be lower in the intervention group; the definition of a “good response” was arbitrary; and the accessible version of the EQ-5D-3L questionnaire is yet to be validated, although it did allow utility scores to be elicited directly from people with aphasia. Finally, it should be noted that the sample size of the CACTUS trial was small (n=34) and the trial aimed to assess the feasibility of a rigorous RCT of self-managed computer therapy. Therefore, it cannot be expected to provide conclusive cost-effectiveness results.

For this reason, an economic evaluation of the Big CACTUS trial was conducted. The trial compared the StepByStep program to both usual care and an attention control arm, who received puzzle books and monthly supportive telephone calls plus usual care. The StepByStep intervention was delivered both remotely and in person, supported by volunteers and SLT assistants. The Markov model used 3-month cycles, with all participants beginning in the ‘aphasia’ health state, but it differed from the model used in the CACTUS trial in that it included two tunnel health states for ‘good response’ (defined as a ≥10% increase in words correctly found on a naming test and/or a 0.5 increase on the Therapy Outcome Measures activity scale) at 6 and 9 months from baseline. No new responses were assumed to occur after 12 months: participants either remained in the ‘good response’ state (12 months and beyond), relapsed to the ‘aphasia’ health state or died. People in the ‘aphasia’ health state at 12 months either remained in that health state or died. Utility weights were assigned to the response and no-response states to estimate QALYs, which were measured using an adapted pictorial version of EQ-5D-5L collected at baseline and at 6, 9 and 12 months. EQ-5D-5L scores were also mapped to EQ-5D-3L using an algorithm by van Hout 201244. The relapse rate observed between 9 and 12 months was assumed to remain constant for the remainder of the modelled period, hence it was assumed that good responses were lost over time. Only intervention costs were incorporated into the model; these included hardware and software costs (computers, StepByStep software licences, headphones and puzzle books), SLT training costs, and time and travel costs for volunteers, SLTs and SLT assistants.

The results found that StepByStep was not cost-effective when compared to usual care, as the QALY gain associated with the intervention was small (0.017) relative to the incremental cost (£733), resulting in an ICER of £42,686 per QALY gained. The same conclusion was reached when the intervention was compared to the active control group (£40,165 per QALY gained). The active control group was also dominated by usual care, having higher costs (£695) and lower QALYs (−0.0001). The probability that usual care was cost-effective was 56% at a £20,000 threshold, compared to 22% for both the active control and StepByStep groups. The only cost-effective result identified for the StepByStep intervention was in the subgroup of patients with moderate word finding difficulties, with an ICER of £13,673 per QALY gained when compared to the active control group, and £21,262 per QALY gained when compared to usual care alone. Alternative costing assumptions (including the inclusion of volunteer costs) did not change conclusions on cost-effectiveness. The study was deemed directly applicable with potentially serious limitations for the following reasons: the lifetime model was based on an RCT with a short follow-up (12 months) and assessed a single piece of software; the health-related quality of life benefit of a “good response” for the StepByStep intervention was small and uncertain; only direct intervention costs were included, as Big CACTUS did not collect data on wider resource use (because the CACTUS pilot study reported no important differences in indirect resource use); and the accessible version of the EQ-5D-5L questionnaire is yet to be validated.
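
As an aid to interpreting these figures, the short Python sketch below shows how the ICER and dominance statements above are calculated. It uses the rounded incremental costs and QALYs quoted in the text, so the computed ratio differs slightly from the published ICER of £42,686 per QALY gained, which was derived from unrounded trial data.

def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return delta_cost / delta_qalys

# StepByStep vs usual care (Big CACTUS), using the rounded figures quoted above
print(f"StepByStep vs usual care: ~£{icer(733, 0.017):,.0f} per QALY gained")

# Simple dominance check for the attention control arm vs usual care:
# a strategy with higher costs and lower QALYs is dominated.
delta_cost, delta_qalys = 695, -0.0001
dominated = delta_cost > 0 and delta_qalys < 0
print(f"Attention control dominated by usual care: {dominated}")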

In addition to the economic evidence, unit costs of computer-based tools and of the health care professionals reported in the clinical studies were presented to aid committee discussion. Additional resource use would be required for computer-based therapy, and the variation in resource use across studies reported in the clinical review highlighted the uncertainty around the potential resource impact of these interventions on the NHS. For example, the cost per patient for these tools depends on both the type of software and whether multiple licences are purchased at once. The intervention setting would also affect the resource impact, as the clinical studies reported interventions conducted in hospitals, community centres and outpatient rehabilitation centres, as well as interventions delivered remotely. Non-clinical settings will incur lower or no costs compared to clinical settings, while remotely delivered therapies are considered to be less resource intensive than face-to-face therapy. Differences in the frequency and duration of therapy delivery were also reported, with sessions ranging from 20-90 minutes, occurring 2-6 days per week, for a total of 4-13 weeks. The staff who delivered the intervention varied, with studies reporting the use of physiotherapists, occupational therapists or trained instructors. The Big CACTUS RCT also reported the use of SLTs and SLT assistants as well as trained volunteers to deliver the intervention. Studies also reported various other resource use requirements, such as staff-training costs and information or instructional materials.

The committee discussed the economic evidence, noting that the results of the two included studies could not be used to reflect the cost effectiveness of the wider evidence base, as they assessed a single computer program that required substantial resource use in terms of hardware and software costs compared to other interventions identified in the clinical review. Neither version of the StepByStep program is widely available as part of current practice, which would increase the resource impact if it were recommended. Further uncertainty about cost effectiveness arose when considering the variation in the delivery and resource use requirements of the interventions reported in the clinical studies. The committee agreed that there would be a resource impact from providing computer-based therapy as this is not routinely used in current practice.

Although the clinical studies varied in quality, with significant uncertainty due to the complexity of the interventions, clinically important benefits were seen for naming when interventions focused on or included word finding as a component. This led the committee to agree that computer-based interventions aimed at improving naming skills may be useful as additional therapy, as the majority of studies provided computer-based therapy in addition to face-to-face speech and language therapy. The committee also specified that such interventions should be adapted to the needs of the person (for example, word finding activities that include terms which are important to the user). Considering the uncertainty of the clinical evidence and limited economic evidence, the committee proposed a ‘consider’ recommendation for computer-based therapy programmes tailored to individual goals in relation to naming in addition to face-to-face speech and language therapy.

1.1.12.5. Other factors the committee took into account

The committee noted the potential inequity of using programs that are only available in English, and noted that some people will not be able to access these tools because they speak other languages. They noted the complexities for multilingual people, who may have therapy focussed on their use of English rather than on all the languages they speak. Computer-based tools may exacerbate this inequity in care, so the committee highlighted the importance of considering all the languages a person speaks and providing holistic support for the person.

The committee noted that computer-based tools may not be accessible to all people, depending on multiple factors including their access to technology (due to cost) and computer literacy. Hospitals may be able to lend out technology and provide additional support to people to use it, but it was noted that there may be geographic variation in the effect of this, with a greater requirement for technology to be lent out in areas of greater socioeconomic deprivation.

The committee agreed that computer-based tools should not be the only speech and language therapy someone is offered, and that all people who require speech and language therapy should receive support from a speech and language therapist. However, there is currently insufficient speech and language therapist time available in many stroke units, and computer-based tools could be an important means of increasing the intensity of therapy someone could receive (see Evidence review E).

The committee noted that there could be a wider effect on psychological outcomes. Some evidence was not available for this review, such as psychological distress outcomes for group-based computer-based tools. The committee discussed how these tools may help with psychological wellbeing by enabling integration with other people after stroke.

1.1.13. Recommendations supported by this evidence review

This evidence review supports recommendation 1.12.8 and the research recommendation on computer-based speech and language therapy.

1.1.14. References

1.
Alshreef A, Bhadhuri A, Latimer N, et al. A study to assess the clinical and cost-effectiveness of aphasia computer treatment versus usual stimulation or attention control long term post stroke (Big CACTUS). 2019. Available from: https://njl-admin​.nihr​.ac.uk/document/download/2030782
2.
Ara R, Brazier JE. Populating an economic model with health state utility values: moving toward better practice. Value in Health. 2010; 13(5):509–518 [PubMed: 20230546]
3.
Brady M, Kelly H, Godwin J, Enderby P, Campbell P. Speech and language therapy for aphasia following stroke. Cochrane Database of Systematic Reviews. 2016; (6) [PMC free article: PMC8078645] [PubMed: 27245310]
4.
Braley M, Pierce JS, Saxena S, De Oliveira E, Taraboanta L, Anantha V et al. A Virtual, Randomized, Control Trial of a Digital Therapeutic for Speech, Language, and Cognitive Intervention in Post-stroke Persons With Aphasia. Frontiers in neurology [electronic resource]. 2021; 12:626780 [PMC free article: PMC7907641] [PubMed: 33643204]
5.
Brønnum-Hansen H, Davidsen M, Thorvaldsen P. Long-term survival and causes of death after stroke. Stroke. 2001; 32(9):2131–2136 [PubMed: 11546907]
6.
Caute A, Woolf C, Wilson S, Stokes C, Monnelly K, Cruice M et al. Technology-Enhanced Reading Therapy for People With Aphasia: Findings From a Quasirandomized Waitlist Controlled Study. Journal of Speech Language & Hearing Research. 2019; 62(12):4382–4416 [PubMed: 31765277]
7.
Cherney LR. Oral reading for language in aphasia (ORLA): evaluating the efficacy of computer-delivered therapy in chronic nonfluent aphasia. Topics in Stroke Rehabilitation. 2010; 17(6):423–431 [PubMed: 21239366]
8.
Cherney LR, Lee JB, Kim KA, van Vuuren S. Web-based Oral Reading for Language in Aphasia (Web ORLA R): A pilot randomized control trial. Clinical Rehabilitation. 2021; 35(7):976–987 [PubMed: 33472420]
9.
Claro Software. Claro software pricing. Available from: https://www​.clarosoftware.com/pricing/ Last accessed: 01/02/2023.
10.
Curtis L, Burns A. Unit costs of health and social care 2018. Canterbury. Personal Social Services Research Unit University of Kent, 2018. Available from: https://www​.pssru.ac​.uk/project-pages/unit-costs​/unit-costs-2018/
11.
De Luca R, Aragona B, Leonardi S, Torrisi M, Galletti B, Galletti F et al. Computerized Training in Poststroke Aphasia: What About the Long-Term Effects? A Randomized Clinical Trial. Journal of Stroke and Cerebrovascular Diseases. 2018; 27(8):2271–2276 [PubMed: 29880209]
12.
Department for Transport. Values of Time and Vehicle Operating Costs TAG Unit 3.5.6 2014. Available from: https://webarchive​.nationalarchives​.gov.uk​/ukgwa/20140304110038/http://www​.dft.gov​.uk/webtag/documents​/expert/pdf/U3_5_6-Jan-2014.pdf
13.
Elhakeem ES, Saeed SS, Elsalakawy RN, Elmaghraby RM, Ashmawy GA. Post-stroke aphasia rehabilitation using computer-based Arabic software program: a randomized controlled trial. Egyptian journal of otolaryngology. 2021; 37(1):77
14.
Fleming V, Brownsett S, Krason A, Maegli MA, Coley-Fisher H, Ong YH et al. Efficacy of spoken word comprehension therapy in patients with chronic aphasia: a cross-over randomised controlled trial with structural imaging. Journal of Neurology, Neurosurgery and Psychiatry. 2020; 05:05 [PMC free article: PMC7611712] [PubMed: 33154182]
15.
GOV.UK. Expenses and Benefits: Business Travel Mileage for Employees’ Own Vehicles. 2018. Available from: https://www​.gov.uk/expenses-and-benefits-business-travel-mileage/rules-for-tax Last accessed: 01/02/2023.
16.
Hernández Alava M, Pudney S, Wailoo A. Estimating the relationship between EQ-5D-5L and EQ-5D-3L: results from an English population study Policy Research Unit in Economic Evaluation of Health and Care Interventions. Universities of Sheffield and York. 2020.
17.
18.
Jones K, Burns A. Unit costs of health and social care 2021. Canterbury. Personal Social Services Research Unit University of Kent, 2021. Available from: https://www​.pssru.ac​.uk/project-pages/unit-costs​/unit-costs-of-health-and-social-care-2021/
19.
Katz RC, Wertz RT, Lewis SM, Esparza C, Goldojarb M. A comparison of computerized reading treatment, computer stimulation, and no treatment for aphasia. Clinical aphasiology: volume 19. 1991:243–254
20.
Kesav P, Vrinda SL, Sukumaran S, Sarma PS, Sylaja PN. Effectiveness of speech language therapy either alone or with add-on computer-based language therapy software (Malayalam version) for early post stroke aphasia: A feasibility study. Journal of the Neurological Sciences. 2017; 380:137–141 [PubMed: 28870554]
21.
Latimer NR, Bhadhuri A, Alshreef A, Palmer R, Cross E, Dimairo M et al. Self-managed, computerised word finding therapy as an add-on to usual care for chronic aphasia post-stroke: An economic evaluation. Clinical Rehabilitation. 2021; 35(5):703–717 [PMC free article: PMC8073872] [PubMed: 33233972]
22.
Latimer NR, Dixon S, Palmer R. Cost-utility of self-managed computer therapy for people with aphasia. International Journal of Technology Assessment in Health Care. 2013; 29(4):402–409 [PubMed: 24290333]
23.
Liu M, Qian Q, Wang W, Chen L, Wang L, Zhou Y et al. Improvement in language function in patients with aphasia using computer-assisted executive function training: A controlled clinical trial. Pm & R. 2021; 26:26 [PubMed: 34310072]
24.
Maresca G, Maggio MG, Latella D, Cannavo A, De Cola MC, Portaro S et al. Toward Improving Poststroke Aphasia: A Pilot Study on the Growing Use of Telerehabilitation for the Continuity of Care. Journal of Stroke and Cerebrovascular Diseases. 2019; 28(10):104303 [PubMed: 31371144]
25.
Marshall J, Booth T, Devane N, Galliers J, Greenwood H, Hilari K et al. Evaluating the Benefits of Aphasia Intervention Delivered in Virtual Reality: Results of a Quasi-Randomised Study. PLoS ONE [Electronic Resource]. 2016; 11(8):e0160381 [PMC free article: PMC4982664] [PubMed: 27518188]
26.
Marshall J, Devane N, Talbot R, Caute A, Cruice M, Hilari K et al. A randomised trial of social support group intervention for people with aphasia: A Novel application of virtual reality. PLoS ONE [Electronic Resource]. 2020; 15(9):e0239715 [PMC free article: PMC7514104] [PubMed: 32970784]
27.
Meltzer JA, Baird AJ, Steele RD, Harvey SJ. Computer-based treatment of poststroke language disorders: a non-inferiority study of telerehabilitation compared to in-person service delivery. Aphasiology. 2018; 32(3):290–311
28.
Mitchell C, Bowen A, Tyson S, Conroy P. A feasibility randomized controlled trial of ReaDySpeech for people with dysarthria after stroke. Clinical Rehabilitation. 2018; 32(8):1037–1046 [PMC free article: PMC6088453] [PubMed: 29278019]
29.
Mitchell C, Bowen A, Tyson S, Conroy P. ReaDySpeech for people with dysarthria after stroke: protocol for a feasibility randomised controlled trial. Pilot & Feasibility Studies. 2018; 4:25 [PMC free article: PMC5520339] [PubMed: 28748108]
30.
National Institute for Health and Care Excellence. Developing NICE guidelines: the manual [updated January 2022]. London. National Institute for Health and Care Excellence, 2014. Available from: https://www​.nice.org.uk/process/pmg20
31.
Office for National Statistics. Interim Life tables, United Kingdom, Based on data for the years 2007–2009. 2010. Available from: http://www​.ons.gov.uk​/ons/rel/lifetables​/interim-life-tables​/interim-life-tables/index.html Last accessed: 01/02/2023.
32.
Office for National Statistics. National life tables – UK: 2014 to 2016. 2017. Available from: https://www​.ons.gov.uk​/peoplepopulationandcommunity​/birthsdeathsandmarriages​/lifeexpectancies​/bulletins​/nationallifetablesunitedkingdom​/2014to2016 Last accessed: 01/02/2023.
33.
Ora HP, Kirmess M, Brady MC, Partee I, Hognestad RB, Johannessen BB et al. The effect of augmented speech-language therapy delivered by telerehabilitation on poststroke aphasia-a pilot randomized controlled trial. Clinical Rehabilitation. 2020; 34(3):369–381 [PubMed: 31903800]
34.
Ora HP, Kirmess M, Brady MC, Winsnes IE, Hansen SM, Becker F. Telerehabilitation for aphasia - protocol of a pragmatic, exploratory, pilot randomized controlled trial. Trials [Electronic Resource]. 2018; 19(1):208 [PMC free article: PMC5880095] [PubMed: 29606148]
35.
Organisation for Economic Co-operation and Development (OECD). Purchasing power parities (PPP). 2012. Available from: http://www​.oecd.org/std/ppp Last accessed: 01/05/2023.
36.
Palmer R, Cooper C, Enderby P, Brady M, Julious S, Bowen A et al. Clinical and cost effectiveness of computer treatment for aphasia post stroke (Big CACTUS): study protocol for a randomised controlled trial. Trials [Electronic Resource]. 2015; 16:18 [PMC free article: PMC4318176] [PubMed: 25623162]
37.
Palmer R, Dimairo M, Cooper C, Enderby P, Brady M, Bowen A et al. Self-managed, computerised speech and language therapy for patients with chronic aphasia post-stroke compared with usual care or attention control (Big CACTUS): a multicentre, single-blinded, randomised controlled trial. Lancet Neurology. 2019; 18(9):821–833 [PMC free article: PMC6700375] [PubMed: 31397288]
38.
Palmer R, Dimairo M, Latimer N, Cross E, Brady M, Enderby P et al. Computerised speech and language therapy or attention control added to usual care for people with long-term post-stroke aphasia: the Big CACTUS three-arm RCT. Health Technology Assessment (Winchester, England). 2020; 24(19):1–176 [PMC free article: PMC7232133] [PubMed: 32369007]
39.
Palmer R, Enderby P, Cooper C, Latimer N, Julious S, Paterson G et al. Computer therapy compared with usual care for people with long-standing aphasia poststroke: a pilot randomized controlled trial. Stroke. 2012; 43(7):1904–1911 [PubMed: 22733794]
40.
Powerwolf Solutions. PowerAFA - Aphasia Software. 2022. Available from: https://www​.powerwolf​.it/ENG/PowerAFA.htm Last accessed: 01/02/2023.
41.
Shirley Ryan Ability Lab. ORLA: Oral Reading for Language in Aphasia. 2022. Available from: https://www​.sralab.org​/academy/bookstore​/orlatm-oral-reading-language-aphasia-center-aphasia-research Last accessed: 01/02/2023.
42.
Spaccavento S, Falcone R, Cellamare F, Picciola E, Glueckauf RL. Effects of computer-based therapy versus therapist-mediated therapy in stroke-related aphasia: Pilot non-inferiority study. Journal of Communication Disorders. 2021; 94:106158 [PubMed: 34673449]
43.
Steps Consulting Ltd. StepByStep aphasia therapy. 2018. Available from: https:​//aphasia-software.com/ Last accessed: 01/02/2023.
44.
van Hout B, Janssen MF, Feng YS, Kohlmann T, Busschbach J, Golicki D et al. Interim scoring for the EQ-5D-5L: mapping the EQ-5D-5L to EQ-5D-3L value sets. Value in Health. 2012; 15(5):708–715 [PubMed: 22867780]
45.
Varley R, Cowell PE, Dyson L, Inglis L, Roper A, Whiteside SP. Self-Administered Computer Therapy for Apraxia of Speech: Two-Period Randomized Control Trial With Crossover. Stroke. 2016; 47(3):822–828 [PubMed: 26797664]
46.
West C, Hesketh A, Vail A, Bowen A. Interventions for apraxia of speech following stroke. Cochrane Database of Systematic Reviews. 2005; (4) [PMC free article: PMC8769681] [PubMed: 16235357]
47.
Woolf C, Caute A, Haigh Z, Galliers J, Wilson S, Kessie A et al. A comparison of remote therapy, face to face therapy and an attention control intervention for people with aphasia: a quasi-randomised controlled feasibility study. Clinical Rehabilitation. 2016; 30(4):359–373 [PubMed: 25911523]
48.
Zhou Q, Lu X, Zhang Y, Sun Z, Li J, Zhu Z. Telerehabilitation Combined Speech-Language and Cognitive Training Effectively Promoted Recovery in Aphasia Patients. Frontiers in Psychology. 2018; 9:2312 [PMC free article: PMC6262900] [PubMed: 30524349]

Appendices

Appendix B. Literature search strategies

B.1. Clinical search literature search strategy (PDF, 180K)

B.2. Health Economics literature search strategy (PDF, 181K)

Appendix D. Effectiveness evidence

Download PDF (1.0M)

Appendix G. Economic evidence study selection

Figure 1. Flow chart of health economic study selection for the guideline (PDF, 193K)

Appendix I. Health economic model

Modelling was not prioritised for this question.

Appendix J. Excluded studies

Clinical studies

Table 19. Studies excluded from the clinical review.

Table 19

Studies excluded from the clinical review.

Health Economic studies

Download PDF (159K)

Appendix K. Research recommendations – full details

K.1. Research recommendation (PDF, 202K)