Background: SMS text messaging- and internet-based self-reporting systems can supplement existing vaccine safety surveillance systems, but real-world participation patterns have not been assessed at scale.
Objective: This study aimed to describe the participation rates of a new SMS text messaging- and internet-based self-reporting system called the Kaiser Permanente Side Effect Monitor (KPSEM) within a large integrated health care system.
Methods: We conducted a prospective cohort study of Kaiser Permanente Southern California (KPSC) patients receiving a COVID-19 vaccination from April 23, 2021, to July 31, 2023. Patients received invitations through flyers, SMS text messages, emails, or patient health care portals. After consenting, patients received regular surveys to assess adverse events up to 5 weeks after each dose. Linkage with medical records provided demographic and clinical data. In this study, we describe KPSEM participation rates, defined as providing consent and completing at least 1 survey within 35 days of COVID-19 vaccination.
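For illustration only, the participation metric defined in the Methods could be computed from linked vaccination, consent, and survey records along the lines sketched below. The DataFrame and column names (vaccinations, consents, surveys, patient_id, vax_date, survey_date) are hypothetical assumptions; this is not the study's actual analysis code.

```python
# Hypothetical sketch of the participation metric: a vaccinated patient counts as a
# KPSEM participant if they consented and completed at least 1 survey within 35 days
# of a COVID-19 vaccination. DataFrames and column names are assumed for illustration.
import pandas as pd

def participation_rate(vaccinations: pd.DataFrame,
                       consents: pd.DataFrame,
                       surveys: pd.DataFrame) -> float:
    """vaccinations: patient_id, vax_date; consents: patient_id;
    surveys: patient_id, survey_date (completed surveys only)."""
    vaccinations = vaccinations.assign(vax_date=pd.to_datetime(vaccinations["vax_date"]))
    surveys = surveys.assign(survey_date=pd.to_datetime(surveys["survey_date"]))
    # Keep only vaccinated patients who consented
    consented = vaccinations[vaccinations["patient_id"].isin(consents["patient_id"])]
    # Pair each vaccination with that patient's completed surveys
    merged = consented.merge(surveys, on="patient_id", how="inner")
    days = (merged["survey_date"] - merged["vax_date"]).dt.days
    # Numerator: consented patients with >=1 survey within 35 days of a dose
    responders = merged.loc[days.between(0, 35), "patient_id"].nunique()
    # Denominator: all vaccinated patients
    return responders / vaccinations["patient_id"].nunique()
```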
Results: Approximately 8% (164,636/2,091,975) of all vaccinated patients provided consent and completed at least 1 survey within 35 days. The lowest participation rate was observed among parents of children aged 12-17 years (0.9%, 1349/152,928), and the highest among older adults aged 61-70 years (12.1%, 39,844/329,487). Non-Hispanic White patients were more likely to participate than patients of other races and ethnicities (13.1% vs 3.9%-7.5%; P<.001). In addition, patients residing in areas with a higher neighborhood deprivation index were less likely to participate (5.1%, 16,503/323,122 vs 10.8%, 38,084/352,939 in the highest vs lowest deprivation quintiles; P<.001). Invitations through the individual's Kaiser Permanente health care portal account and by SMS text message were associated with the highest participation rates (19.2%, 70,248/366,377 and 10.5%, 96,169/914,793, respectively), followed by email (4.9%, 19,464/396,912) and QR codes on flyers (1.2%, 25,882/2,091,975). SMS text messaging-based surveys also demonstrated higher sustained daily response rates than internet-based surveys.
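The abstract does not state which statistical test produced the P values above. Purely as a hedged illustration, a chi-square test of independence on a 2x2 table (here, participation by neighborhood deprivation quintile, using the counts reported above) is one conventional way such a comparison could be made.

```python
# Illustrative only: chi-square test of independence on the reported 2x2 counts.
# The abstract does not specify the test actually used in the study.
from scipy.stats import chi2_contingency

# Rows: highest vs lowest deprivation quintile; columns: participants, nonparticipants
table = [
    [16_503, 323_122 - 16_503],  # highest deprivation quintile
    [38_084, 352_939 - 38_084],  # lowest deprivation quintile
]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, P={p_value:.3g}")  # P < .001, consistent with the report
```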
Conclusions: This real-world prospective study demonstrated that a novel digital vaccine safety self-reporting system implemented through an integrated health care system can achieve high participation rates. Linkage with participants' electronic health records is another unique benefit of this surveillance system. We also identified lower participation among selected vulnerable populations, which may have implications for interpreting data collected from similar digital systems.
This paper proposes an approach to assessing digital health readiness in clinical settings to understand how prepared, experienced, and equipped individuals are to participate in digital health activities. Existing digital health literacy and telehealth prediction tools do not assess technological aptitude for particular tasks or incorporate available electronic health record data to improve efficiency and efficacy. We therefore propose a multidomain digital health readiness assessment that incorporates a person's stated goals and motivations for using digital health, a focused digital health literacy assessment, passively collected data from the electronic health record, and a focused aptitude assessment for the critical skills needed to achieve the person's goals. This combination of elements should allow for easy integration into clinical workflows and make the assessment as actionable as possible for health care providers and in-clinic digital health navigators. Digital health readiness profiles could be used to match individuals with support interventions that promote the use of digital tools such as telehealth, mobile apps, and remote monitoring, especially for those who are motivated but lack adequate experience. Moreover, while effective and holistic digital health readiness assessments could contribute to increased use and greater equity in digital health engagement, they must also be designed with inclusivity in mind to avoid worsening known disparities in digital health care.
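To make the proposed multidomain structure more concrete, a readiness profile could be represented roughly as sketched below. The class, field names, scales, and threshold are hypothetical illustrations under assumed conventions, not components specified by the proposal.

```python
# Hypothetical data model for a multidomain digital health readiness profile,
# mirroring the four proposed elements. Field names, scales, and the threshold
# are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class DigitalHealthReadinessProfile:
    patient_id: str
    stated_goals: list[str]        # e.g., ["video visits", "remote BP monitoring"]
    literacy_score: float          # focused digital health literacy assessment, scaled 0-1
    ehr_signals: dict[str, bool] = field(default_factory=dict)      # passive EHR data, e.g., active portal account
    aptitude_scores: dict[str, float] = field(default_factory=dict) # task-specific skills, scaled 0-1

    def needs_navigator_support(self, threshold: float = 0.5) -> bool:
        """Flag motivated patients whose literacy or task aptitude falls below a
        (hypothetical) threshold, so they can be matched with support interventions."""
        low_aptitude = any(score < threshold for score in self.aptitude_scores.values())
        return bool(self.stated_goals) and (self.literacy_score < threshold or low_aptitude)
```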
Background: Usability has been touted as one determinant of the success of mobile health (mHealth) interventions. Multiple systematic reviews of usability assessment approaches for different mHealth solutions for physical rehabilitation are available. However, this portion of the literature lacks synthesis, so clinicians and developers must devote considerable time and effort to analyzing and summarizing a large body of systematic reviews.
Objective: This study aims to summarize systematic reviews examining usability assessment instruments, or measurement tools, in mHealth interventions including physical rehabilitation.
Methods: An umbrella review was conducted according to a published registered protocol. A topic-based search of PubMed, Cochrane, IEEE Xplore, Epistemonikos, Web of Science, and CINAHL Complete was conducted from January 2015 to April 2023 for systematic reviews investigating usability assessment instruments in mHealth interventions including physical exercise rehabilitation. Eligibility screening covered date, language, participant, and article type. Data extraction and assessment of methodological quality (AMSTAR 2 [A Measurement Tool to Assess Systematic Reviews 2]) were completed and tabulated for synthesis.
Results: A total of 12 systematic reviews were included, of which 3 (25%) did not refer to any theoretical usability framework; the remaining reviews (n=9, 75%) most commonly referenced the ISO framework. The sample referenced a total of 32 usability assessment instruments and 66 custom-made or hybrid instruments. Information on psychometric properties was available for 9 (28%) of the 32 instruments, which showed satisfactory internal consistency and structural validity; data on reliability, responsiveness, and cross-cultural validity were lacking. The methodological quality of the systematic reviews was limited, with 8 (67%) displaying 2 or more critical weaknesses.
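For context on the psychometric terminology, internal consistency is conventionally summarized with Cronbach's alpha. The sketch below uses hypothetical response data and standard formulas; it is not a computation performed in the included reviews.

```python
# Illustrative only: Cronbach's alpha, the usual summary of internal consistency
# for usability questionnaires. Response data below are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire items (e.g., usability scale responses)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-respondent, 4-item example; alpha >= 0.7 is commonly read as satisfactory.
responses = np.array([[4, 5, 4, 5], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 2], [4, 4, 5, 4]])
print(round(cronbach_alpha(responses), 2))
```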
Conclusions: There is considerable diversity in the usability assessment of mHealth for rehabilitation, and a link to theoretical models is often lacking. Custom-made instruments are widely used, and preexisting instruments often lack sufficient psychometric strength. As a result, existing mHealth usability evaluations are difficult to compare. It is proposed that multimethod usability assessment be used and that usability assessment instruments be selected with explicit reference to their theoretical underpinning and acceptable psychometric properties. This could be facilitated by closer collaboration between researchers, developers, and clinicians throughout the phases of mHealth tool development.
Trial registration: PROSPERO CRD42022338785; https://www.crd.york.ac.uk/prospero/#recordDetails.