Abstract
Background
In recent years, there has been a shift towards a more integrated and person-centered health system as a means to empower patients and their caregivers to be engaged partners in their care (1,2). A key aspect of this partnership is making available the information and resources needed to help patients and families play this more active role (3). Furthermore, patients report higher satisfaction and use health care more effectively when they have guidance in managing their conditions (4,5). Guidance and partnership may be particularly important for patients with complex care needs, who may be managing multiple health and social challenges at a time (6). These individuals, with multiple chronic conditions and biopsychosocial problems, face challenges that can lead to functional decline, poor quality of life, and high healthcare utilization (7).
A person-centered, goal-oriented approach is widely regarded as the preferred approach to caring for individuals with complex care needs (5,8,9). Health authorities are looking to adopt person-centered care by developing care plans attuned to patients’ long-term health-related goals (10), and to deliver such care, providers are increasingly adopting information and communication technologies (ICT) in their practices (11,12). Among the ICT-based solutions being explored, mobile health (mHealth) apps have emerged as valuable tools to support the delivery of integrated, person-centered care and to support patients’ health goals (13–15). Some studies have shown mHealth solutions to be effective in developing complex patients’ self-management skills and improving their quality of life (16,17).
Several mHealth apps have successfully supported patients’ achievement of health goals related to weight management, physical activity, diet, and smoking cessation (14,18). Despite promising results, interventions like these experience high attrition rates (19,20). Attrition in this context is defined as a phenomenon in which one initially commits to a web-based (or mHealth) intervention but subsequently discontinues using the platform (21). In mHealth research studies, high rates of attrition substantially affect the integrity of collected data, the amount of missing data, and the risk of biased responses. Although this challenge is common, little research has been conducted on the determinants of abandonment of web-based or mHealth platforms (22,23).
One potential driver of attrition is system usability (21). Nielsen (24) defines usability as a quality attribute that assesses how easy an interface of a web-based system or platform is to use. In the current literature, the majority of usability studies evaluate usability based on the operational ease, learnability, and understandability of apps, and the concept is often measured only at the end of an intervention (25). Lessons from previous reviews indicate that combining multiple usability tools, measuring usability iteratively, and going beyond questionnaire methods are needed to understand usability in a meaningful way (25,26).
This paper explores the links between usability and attrition by adopting a novel blended approach that combines data from surveys (capturing usability) and mHealth system usage logs (capturing attrition). In addition, the study draws on qualitative research memos to provide relevant contextual data regarding patient engagement with technology in a real-world, complex environment.
Research Objective:
This is a sub-study of a pragmatic stepped-wedge evaluation of the electronic Patient Reported Outcomes (ePRO) app, a goal-oriented mobile application for patients with complex conditions (27–29). This sub-study aims to answer the following research questions: 1) How does patient-reported usability change over the one-year intervention period? 2) What is the participant attrition rate of the ePRO app over the one-year study period?
Methods:
Study Design:
The evaluation of the ePRO tool involves a pragmatic trial using a stepped-wedge randomized design [9] with an embedded case study. The trial was conducted in six primary care practices across Ontario, Canada, over 15 months. Using a random number generator, each site was assigned to either the early intervention group (n=3) or the late intervention group (n=3). Hereafter, the early intervention group is referred to as Group 1 and the late intervention group as Group 2. Group 1 sites remained in the control period for 3 months, followed by a 12-month intervention period. Group 2 sites remained in the control period for 6 months, followed by a 9-month intervention period. Figure 1 shows the stepped-wedge research design.
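The site allocation and scheduling described above can be sketched as follows. This is an illustrative sketch only: the site names are placeholders, and the study's actual randomization procedure is not specified beyond the use of a random number generator.

```python
import random

# Six primary care practices (placeholder names, not the actual study sites)
sites = ["Site A", "Site B", "Site C", "Site D", "Site E", "Site F"]

random.shuffle(sites)                   # random-number-based allocation
group1, group2 = sites[:3], sites[3:]   # early vs. late intervention sites

# Group 1: 3-month control, then 12-month intervention (15 months total)
# Group 2: 6-month control, then 9-month intervention (15 months total)
schedule = {
    **{s: {"group": 1, "control_months": 3, "intervention_months": 12} for s in group1},
    **{s: {"group": 2, "control_months": 6, "intervention_months": 9} for s in group2},
}
```

Note that both groups span the same 15-month trial window; only the step at which each group crosses from control to intervention differs, which is the defining feature of a stepped-wedge design.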
We conducted a secondary analysis of data from the intervention arm of the trial. For this sub-study, patient attrition is measured using device-generated usage logs, and usability is measured using the patient-reported Post-Study System Usability Questionnaire (PSSUQ), collected once every 3 months during the intervention period. Research memos capturing patients’ and their providers’ interactions with the research team are used to contextualize the usability-related study data. The memos contain information about patients or their providers experiencing technical difficulties, as well as observational data from patient-provider interactions.
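As a rough illustration of how attrition might be derived from device-generated usage logs, the sketch below flags a participant as having dropped out if their last logged use falls well before the end of the intervention window. The log structure, dates, and 90-day threshold are all assumptions for illustration, not the study's actual operational definitions.

```python
from datetime import date

# Hypothetical usage logs: participant id -> dates the app was used
usage_logs = {
    "p01": [date(2019, 1, 5), date(2019, 6, 20), date(2019, 12, 1)],
    "p02": [date(2019, 1, 8), date(2019, 2, 2)],   # stopped early
    "p03": [date(2019, 1, 10)],                    # stopped very early
}

intervention_end = date(2019, 12, 31)
dropout_gap_days = 90  # assumed threshold: no use in the final 90 days

def has_attrited(use_dates, end=intervention_end, gap=dropout_gap_days):
    """True if the participant's last recorded use predates the
    final `gap` days of the intervention period."""
    return (end - max(use_dates)).days > gap

# Proportion of participants who discontinued use before study end
attrition_rate = sum(has_attrited(d) for d in usage_logs.values()) / len(usage_logs)
```

With the toy data above, participants p02 and p03 are counted as attrited and p01 is not, giving an attrition rate of 2/3. In practice, the choice of inactivity threshold materially changes the reported rate and would need to be justified against the intervention's expected usage pattern.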
| Original language | English |
|---|---|
| Journal | Digital Health |
| Volume | 7 |
| DOIs | |
| Publication status | Published - 5 Oct 2021 |