Madeline Ratoza, Rupal M Patel, Julia Chevan, Wayne Brewer, Katy Mitchell
Background: Access to rehabilitation services is a critical yet under-studied dimension of health equity. Among the 6 domains of access, health care provider availability, defined as the presence of sufficient health care providers to meet population needs, is particularly underexplored in rehabilitation professions such as physical and occupational therapy. Current data reporting often lacks the geographic granularity required for effective workforce planning.
Objective: The purpose of this study was to demonstrate the feasibility of mapping rehabilitation provider availability at the census tract level using geographic information systems and integrating public licensure and population data to inform equitable workforce planning.
Methods: A descriptive, cross-sectional study was conducted using publicly available state licensure data for physical and occupational therapists and demographic data from the American Community Survey. Residential addresses of rehabilitation providers were geocoded and matched to 2020 census tracts. Population-to-provider ratios were calculated and mapped using choropleth and bivariate mapping techniques. Population-to-provider ratios were calculated per tract and summarized overall and by rurality using 2020 Rural-Urban Commuting Area (RUCA) codes (urban: RUCA of 1-3; rural: RUCA of ≥4). The spatial dependence of ratios was tested using a spatial autocorrelation statistic, the global Moran I, in ArcGIS Pro using edge contiguity neighbors and row standardization.
Results: Across 6896 tracts, ratios ranged from 4.5 to 11,147 persons per provider (median 1131, IQR 537-2501). By rurality, urban tracts (n=5734, 83.1%) had a median ratio of 1141 (IQR 2054), and rural tracts (n=1162, 16.9%) had a median ratio of 1093 (IQR 1690), indicating a broadly similar central tendency with somewhat greater variability in urban areas. The population-to-provider ratio exhibited significant positive spatial autocorrelation (global Moran I=0.305; Z=40.28; P<.001), consistent with clustered pockets of high and low availability rather than random dispersion.
Conclusions: A replicable geographic information system protocol can integrate licensure and demographic data to produce interpretable population-to-provider metrics and spatial diagnostics at the census-tract level. In Texas, rehabilitation workforce availability is spatially clustered and not explained solely by an urban-rural divide, underscoring the value of small-area mapping for equitable workforce planning and policy decisions.
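The core computations named in this abstract (tract-level population-to-provider ratios, the RUCA urban/rural split, and a global Moran's I with row-standardized contiguity weights) can be sketched in Python. All tract data below are hypothetical, and the study itself used ArcGIS Pro; this from-scratch `morans_i` is an illustration of the statistic, not the authors' code.

```python
# Hypothetical tract data: population, provider count, 2020 RUCA code,
# and edge-contiguity neighbors. None of these values come from the study.
tracts = {
    "A": {"pop": 4500, "providers": 4, "ruca": 1, "neighbors": ["B", "C"]},
    "B": {"pop": 3200, "providers": 3, "ruca": 2, "neighbors": ["A", "C"]},
    "C": {"pop": 5100, "providers": 1, "ruca": 4, "neighbors": ["A", "B", "D"]},
    "D": {"pop": 2800, "providers": 1, "ruca": 7, "neighbors": ["C"]},
}

# Population-to-provider ratio per tract
ratio = {t: d["pop"] / d["providers"] for t, d in tracts.items()}

# Rurality per the RUCA convention in the abstract: 1-3 urban, >=4 rural
rural = {t: d["ruca"] >= 4 for t, d in tracts.items()}

def morans_i(values, neighbors):
    """Global Moran's I with row-standardized contiguity weights."""
    ids = list(values)
    n = len(ids)
    mean = sum(values.values()) / n
    dev = {i: values[i] - mean for i in ids}
    num = 0.0    # weighted cross-products of deviations
    w_sum = 0.0  # total weight (equals n after row standardization)
    for i in ids:
        w = 1.0 / len(neighbors[i])  # each row of weights sums to 1
        for j in neighbors[i]:
            num += w * dev[i] * dev[j]
            w_sum += w
    ss = sum(d * d for d in dev.values())
    return (n / w_sum) * (num / ss)

I = morans_i(ratio, {t: d["neighbors"] for t, d in tracts.items()})
```

Values of I near +1 indicate clustering of similar ratios (as the study found), values near -1 indicate dispersion, and values near 0 indicate spatial randomness.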
Mapping the Availability of Rehabilitation Providers Using Public Licensure and Population Data for a Geographic Information System-Based Approach to Workforce Planning: Cross-Sectional Feasibility Study. JMIR Formative Research. 2025;9:e85025. doi:10.2196/85025. Published December 23, 2025.
Rosario Yslado-Méndez, Stefan Escobar-Agreda, David Villarreal-Zegarra, Wilfredo Manuel Trejo Flores, Junior Duberli Sánchez-Broncano, Ana Lucia Vilela-Estrada, Jovanna Hasel Olivares Córdova, C Mahony Reategui-Rivera, Claudia Alvarez-Yslado, Leonardo Rojas-Mezarina
Correction: Effectiveness, Usability, and Satisfaction of a Self-Administered Digital Intervention for Reducing Depression, Anxiety, and Stress in a University Community in the Andean Region of Peru: Randomized Controlled Trial. JMIR Formative Research. 2025;9:e87717. doi:10.2196/87717. Published December 22, 2025.
Philipp Schaper, Alexander Hanke, Stephan Jonas, Leon Nissen, Lara Marie Reimer, Florian Schweizer, Michael Wagner, Kristin Rolke, Carolin Rosendahl, Judith Tillmann, Klaus Weckbecker, Jochen René Thyrian
Background: Digital short cognitive tests administered by medical assistants (MAs) in general practitioners' (GPs) practices have great potential for the timely identification of patients with dementia, because they can lead to targeted specialist referrals or to immediate reassurance of patients regarding their perceived concerns. However, integration of this testing approach into clinical practice requires good usability for the test itself, especially for cognitively impaired older adults.
Objective: In this implementation study, the digital version of the Montreal Cognitive Assessment (MoCA) Duo was conducted by MAs in general practice. We tested if the interaction with the test is associated with usability problems for the patients and aimed to find additional relevant constructs that should be considered for the potential implementation of such digital tests into clinical practice. We focused the study on subjective success, usability, and workload as well as their association with the result of the cognitive test to assess whether the MoCA Duo can be implemented into general practice.
Methods: In total, 10 GPs took part in the study. Within their practices, 299 GP patients (aged 51-97 years) with cognitive concerns completed the MoCA Duo administered by MAs. Subsequently, patients and MAs completed digital questionnaires regarding the interaction with the app. Usability was measured using the adapted System Usability Scale, and perceived workload using the National Aeronautics and Space Administration Task Load Index. For the perceived workload, we included an assessment of the patient by the MA. Results of the MoCA Duo were supplied to the GPs for their consultation with the patient.
Results: The results indicated good usability for the MoCA Duo. Self-assessments indicated that 64% (191/299) of patients felt they could perform the test to the best of their ability, a proportion affected by their MoCA score. We found significantly higher usability ratings among patients with better MoCA scores as well as among younger patients. Perceived workload was at a medium level overall. We found significant correlations between patients' subjectively perceived workload and the MAs' assessments. Both self-assessments and assessments by the MAs were significantly influenced by the MoCA scores and the age of the participants.
Conclusions: The results indicate good usability of the digital MoCA within the sample, supporting the idea that the resulting scores adequately assess cognitive status without depending on technological affinity. Furthermore, the results highlight the relevance of heterogeneous samples for comparable evaluation studies, given the significant effect of cognitive status and age on usability and workload.
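The study used an adapted System Usability Scale; as a point of reference, standard SUS scoring (10 items rated 1-5, yielding a 0-100 score) can be sketched as follows. The function name and example responses are illustrative, not from the study.

```python
def sus_score(responses):
    """Standard SUS scoring: odd-numbered items contribute (rating - 1),
    even-numbered items contribute (5 - rating); the sum is scaled by
    2.5 to a 0-100 range."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for item, rating in enumerate(responses, start=1):
        total += (rating - 1) if item % 2 == 1 else (5 - rating)
    return total * 2.5

# A hypothetical respondent agreeing fully with every positive (odd)
# item and disagreeing fully with every negative (even) item scores
# the maximum of 100.
best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])
```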
Usability of a Tablet-Based Cognitive Assessment Administered by Medical Assistants in General Practice: Implementation Study. JMIR Formative Research. 2025;9:e76010. doi:10.2196/76010. Published December 22, 2025.
Yue Sun, Shijie Hou, Siye Chen, Minmin Leng, Zhiwen Wang
Background: Health recommender systems (HRSs) are digital platforms designed to deliver personalized health information, resources, and interventions tailored to users' specific needs. However, existing evaluations of HRSs largely focus on algorithmic performance, with limited scientific evidence supporting user-centered assessment approaches and insufficiently defined evaluation metrics. Moreover, no unified or scientifically validated framework currently exists for evaluating these systems, resulting in limited cross-study comparability and constraining regulatory and implementation decision-making.
Objective: This study aimed to develop a comprehensive, consensus-based evaluation index system for HRSs grounded in the health technology assessment (HTA) framework.
Methods: This cross-sectional study used a 2-round Delphi process conducted with 18 experts comprising clinicians, digital health researchers, and policymakers with relevant professional experience and domain knowledge in HRSs. The experts were aged between 30 and 58 years, and 67% (n=12) had over 10 years of professional experience. On the basis of literature analysis and HTA principles, a preliminary indicator set comprising 5 primary and 16 secondary indicators was constructed. Experts rated the importance of each indicator on a 5-point Likert scale and provided qualitative suggestions for refinement. After the Delphi process, the analytic hierarchy process was applied to determine indicator weights and assess consistency.
Results: The Delphi survey reached full participation in the first round (18/18, 100%) and maintained an 88.9% (16/18) response rate in the second round. The final evaluation index system contained 5 first-level indicators (performance, effectiveness, safety, economy, and social appropriateness) and 18 second-level indicators. The mean importance scores of the second-level indicators ranged from 4.25 (SD 0.45) to 5.00 (SD 0.00), with coefficients of variation between 0.000 and 0.220. Among the first-level indicators, safety received the highest weight (0.289), followed by social appropriateness (0.251), effectiveness (0.193), performance (0.136), and economy (0.132).
Conclusions: This study presents an evaluation index system for HRSs grounded in the HTA framework and validated through expert consensus. The resulting framework provides actionable guidance for the design, optimization, and implementation of HRSs and fills a methodological gap by offering quantifiable, hierarchical evaluation indicators with validated weighting. Future research will involve iterative refinement and empirical validation of the system in real-world deployment settings, enabling continuous improvement and facilitating the establishment of unified evaluation standards for HRS research and practice.
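The analytic hierarchy process step can be illustrated with a small sketch. The 3-criterion pairwise matrix below is hypothetical (the study weighted 5 first-level indicators), and the geometric-mean weighting with Saaty's consistency ratio shown here is one common AHP variant, not necessarily the authors' exact procedure.

```python
import math

# Hypothetical 3-criterion reciprocal pairwise matrix on Saaty's 1-9 scale
# (e.g., criterion 1 is judged twice as important as criterion 2).
A = [
    [1.0, 2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
]
n = len(A)

# Priority weights via the geometric-mean (row) method, normalized to sum to 1
gm = [math.prod(row) ** (1 / n) for row in A]
weights = [g / sum(gm) for g in gm]

# Consistency check: lambda_max -> consistency index CI -> consistency ratio CR
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / weights[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index for matrix size n
CR = CI / RI  # CR < 0.10 is the conventional acceptability threshold
```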
Development of an Evaluation Index System for Health Recommender Systems Based on the Health Technology Assessment Framework: Cross-Sectional Delphi Study. JMIR Formative Research. 2025;9:e79997. doi:10.2196/79997. Published December 22, 2025.
Per Nilsen, Petra Svedberg, Ingrid Larsson, Lena Petersson, Jens Nygren, Emilie Steerling, Margit Neher
Background: The integration of artificial intelligence (AI) in radiology has advanced significantly, but research on how it affects the daily work of radiology staff is limited.
Objective: This study aimed to explore the experiences of radiology staff on the integration of an AI application in a radiology department in Sweden. This understanding is essential for developing strategies to address potential challenges in AI integration and optimize the use of AI applications in radiology practice.
Methods: This qualitative case study was conducted in a single radiology department with 40 employees in 1 hospital in southwestern Sweden. The study concerned the integration of AI-powered medical imaging software designed to assist radiologists in analyzing and interpreting medical images. Interviews were conducted with 7 radiologists (physicians), 4 radiologic technologists, and 1 physician assistant, whose experience within radiology ranged from 13 months to 38 years. The data were analyzed using qualitative content analysis.
Results: Participants cited numerous strengths and advantages of integrating AI into radiology practice. However, they also described challenges in choosing, acquiring, and deploying the AI application, as well as operational issues in radiology practice, and they discussed diverse strategies used to facilitate the integration of AI and to address these challenges.
Conclusions: The findings illustrate how AI integration was experienced in 1 hospital. While not generalizable, the study provides insights that may be useful for similar settings. Radiology staff believed AI integration enhanced decision-making and quality of care, but they encountered challenges from preadoption to routine use of AI in radiology practice. Strategies such as internal training and workflow adaptation facilitated the successful integration of AI in radiology.
Radiology Staff Experiences With Integrating Artificial Intelligence Into Radiology Practice in a Swedish Hospital: Qualitative Case Study. JMIR Formative Research. 2025;9:e77843. doi:10.2196/77843. Published December 22, 2025.
Background: Mindfulness-based interventions (MBIs) have been shown to improve university students' well-being. However, previous studies have not systematically explored factors that can facilitate or hinder engagement in MBIs among Saudi university students, nor how MBIs can be culturally adapted to meet their needs.
Objective: This study aimed to (1) explore the perspectives of Saudi female university students about factors influencing engagement with MBIs, (2) explore the cultural appropriateness of MBIs, and (3) systematically identify recommendations for developing a culturally appropriate MBI.
Methods: A qualitative approach was used to collect data through semistructured individual interviews and focus groups. Two established frameworks for behavioral interventions guided the interview topics and data analysis. The COM-B (Capability, Opportunity, and Motivation Domains of Behavior Change) model was applied to identify potential enablers and barriers influencing students' engagement with MBIs. The cultural adaptation framework by Bernal et al was used to explore the cultural appropriateness of MBIs. Subsequently, recommendations for developing MBIs, with a specific focus on an online version, were systematically formulated using the Theory and Techniques Tool. Data were analyzed using mixed inductive-deductive thematic analysis.
Results: Fourteen Saudi female university students (mean age 24, SD 4.9 years) participated in semistructured interviews and focus groups. Numerous potential enablers and barriers to MBI engagement were identified. Factors that may influence engagement pertained to capability (variation in knowledge of mindfulness), opportunity (anticipated difficulty finding time), and motivation (variation in anticipated and experienced benefits of mindfulness). Participants also highlighted several considerations that may enhance the cultural relevance of MBIs, drawing on the cultural adaptation domains by Bernal et al. These included the importance of aligning MBIs with the local cultural context, incorporating metaphors and examples rooted in Saudi and Arab culture, and accommodating students' preferences for the duration of MBIs. Key recommendations for developing culturally appropriate MBIs for Saudi university students included providing clear information to improve understanding of mindfulness, providing practical strategies and skills to overcome barriers such as time constraints, delivering MBIs in both Arabic and English, and ensuring that MBI content aligns with local cultural values and contexts.
Conclusions: The findings and recommendations aim to enhance the feasibility, acceptability, engagement, and effectiveness of MBIs among Saudi university students, particularly female students. However, whether they do in fact achieve these aims is unknown. Future research should endeavor to evaluate this.
Duaa H Alrashdi, Carly Meyer, Rebecca L Gould. Facilitators, Barriers, and Cultural Appropriateness of Mindfulness-Based Interventions Among Saudi Female University Students: Qualitative Study. JMIR Formative Research. 2025;9:e78532. doi:10.2196/78532. Published December 19, 2025.
Efrat Neter, Refael Youngmann, Naama Gruper
Association Between eHealth Literacy and Mental Health Literacy: Cross-Sectional Study. JMIR Formative Research. 2025;9:e76812. doi: 10.2196/76812
Associations between eHealth literacy and mental health literacy were examined; no significant association was identified between overall eHealth literacy and mental health literacy, and only weak associations between specific skills were recorded. Results are interpreted in light of the difference between perceived ability and actual performance.
Jean Woo, Ruby Yu, Maggie Wong, Ken Cheung, Nicole Fung
mHealth as a Key Component of a New Model of Primary Care for Older Adults. JMIR Formative Research. 2025;9:e82262. doi: 10.2196/82262
With population aging, an increase in total life expectancy at birth (TLE) should ideally be accompanied by an equal increase in health span (HS), or at least by an increasing HS/TLE ratio. Hong Kong has one of the longest life expectancies in the world; however, its HS/TLE ratio is declining, such that the absolute number of people with dependencies is increasing. To address this challenge, the World Health Organization proposed the model of integrated care for older people (ICOPE), which combines health and social elements in community care and uses the measurement of intrinsic capacity (IC) as a metric for monitoring performance across countries. Technology is essential for achieving wide population coverage in assessing IC, followed by an individually tailored plan of action. This model can be adapted to different health and social care systems in different countries. Hong Kong has an extensive network of community centers where the basic assessment may be based, followed by further assessments and personalized activities; referral to medical professionals may be needed only in the presence of disease. Conversely, the medical sector may refer patients to the community for activities designed to optimize the various domains of IC. Such a model of care has the potential to address manpower shortages and mitigate inequalities in healthy aging, as well as enable the monitoring of physiological systems in community-dwelling adults using digital biomarkers as a metric of IC.
Chirathit Anusitviwat, Sitthiphong Suwannaphisit, Jongdee Bvonpanttarananon, Boonsin Tangtrakulwanich
Comparing ChatGPT and DeepSeek for Assessment of Multiple-Choice Questions in Orthopedic Medical Education: Cross-Sectional Study. JMIR Formative Research. 2025;9:e75607. doi: 10.2196/75607
Background: Multiple-choice questions (MCQs) are essential in medical education for assessing knowledge and clinical reasoning. Traditional MCQ development involves expert reviews and revisions, which can be time-consuming and subject to bias. Large language models (LLMs) have emerged as potential tools for evaluating MCQ accuracy and efficiency. However, direct comparisons of these models in orthopedic MCQ assessments are limited.
Objective: This study compared the performance of ChatGPT and DeepSeek in terms of correctness, response time, and reliability when answering MCQs from an orthopedic examination for medical students.
Methods: This cross-sectional study included 209 orthopedic MCQs from summative assessments during the 2023-2024 academic year. ChatGPT (including the "Reason" function) and DeepSeek (including the "DeepThink" function) were used to identify the correct answers. Correctness and response times were recorded and compared using a χ2 test and a Mann-Whitney U test, as appropriate. The two LLMs' reliability was assessed using the Cohen κ coefficient. MCQs incorrectly answered by both models were reviewed by orthopedic faculty to identify ambiguities or content issues.
Results: ChatGPT achieved a correctness rate of 80.4% (168/209), while DeepSeek achieved 74.2% (155/209; P=.04). ChatGPT's Reason function also scored numerically higher than DeepSeek's DeepThink function, although this difference was not statistically significant (177/209, 84.7% vs 168/209, 80.4%; P=.12). The average response time for ChatGPT was 10.40 (SD 13.29) seconds, significantly shorter than DeepSeek's 34.42 (SD 25.48) seconds (P<.001). Regarding reliability, ChatGPT demonstrated almost perfect agreement (κ=0.81), whereas DeepSeek showed substantial agreement (κ=0.78). A completely incorrect response was recorded for both models on 7.7% (16/209) of the questions.
Conclusions: ChatGPT outperformed DeepSeek in correctness and response time, demonstrating its efficiency in evaluating orthopedic MCQs. Its high reliability suggests potential for integration into medical assessments. However, our results indicate that some MCQs will require revisions by instructors to improve their clarity. Further studies are needed to evaluate the role of artificial intelligence in other disciplines and to validate other LLMs.
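The statistics in this abstract can be sketched numerically. The snippet below recomputes a 2×2 χ2 test from the published correctness counts and implements Cohen κ on toy per-item labels. This is an illustrative sketch, not a reproduction of the authors' analysis: an unpaired χ2 on the aggregate counts does not reproduce the reported P=.04, which presumably reflects their item-level handling, and the per-item gradings below are hypothetical.

```python
import math

# Correctness counts from the abstract: ChatGPT 168/209, DeepSeek 155/209.
n = 209
table = [[168, n - 168],   # ChatGPT: correct, incorrect
         [155, n - 155]]   # DeepSeek: correct, incorrect

row = [sum(r) for r in table]
col = [table[0][0] + table[1][0], table[0][1] + table[1][1]]
total = sum(row)

# Pearson chi-square with Yates continuity correction (df = 1 for a 2x2 table).
chi2 = sum(
    (abs(table[i][j] - row[i] * col[j] / total) - 0.5) ** 2
    / (row[i] * col[j] / total)
    for i in range(2) for j in range(2)
)
# Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2)).
p = math.erfc(math.sqrt(chi2 / 2))

def cohen_kappa(x, y):
    """Cohen's kappa for two binary ratings of the same items."""
    m = len(x)
    po = sum(a == b for a, b in zip(x, y)) / m   # observed agreement
    px, py = sum(x) / m, sum(y) / m
    pe = px * py + (1 - px) * (1 - py)           # agreement expected by chance
    return (po - pe) / (1 - pe)

# Toy per-item gradings (1 = correct, 0 = incorrect) -- hypothetical data.
run_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
run_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]

print(f"chi2={chi2:.2f}, p={p:.3f}, kappa={cohen_kappa(run_a, run_b):.2f}")
```

On the aggregate counts, this unpaired test gives χ2 ≈ 1.96 (P ≈ .16), illustrating why the pairing of items across models matters when comparing two systems answering the same question set.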
Adam Forward, Gizem Koca, Aymane Sahli, Noreen Kamal
Identification of Design Requirements for a Software Application for Use by Clinicians That Collects Acute Stroke Treatment Data During Clinical Workflow: Pilot Study. JMIR Formative Research. 2025;9:e64800. doi: 10.2196/64800
Background: Clinical registries are critical for monitoring processes of care in diseases and driving quality improvements. However, many smaller hospitals lack the resources required to collect the necessary data to contribute to registries.
Objective: This study aims to design and evaluate a data collection tool for acute stroke treatment that streamlines the collection of process data and provides tools to aid clinician users without interfering with clinical workflow. The evaluation will identify key design requirements that facilitate prospective data collection and add value for clinicians.
Methods: We developed a prototype tool in Figma Pro for use on an iPad. Clinicians were recruited through convenience sampling to test the prototype in a small-scale simulated clinical field experiment, during which participants were asked to think aloud while completing a series of tasks simulating a stroke treatment and inputting the required data into the prototype. Follow-up semistructured interviews were conducted to gain feedback on how the prototype integrated into the workflow and on the aspects of the prototype that participants felt helped or hindered their use of it. Qualitative data analysis combined review of the experiment recordings to identify the most frequent errors made during the scenario with deductive thematic analysis of the follow-up interviews to determine user needs for the next prototype iteration. The insights from this feedback identified design requirements that were implemented in the iterated design and documented to provide a reference for future product designers.
Results: Three participants were recruited from 2 hospitals between April 18 and June 6, 2024, for the simulated field experiment. The scenario took 10-12 minutes, with 1.2-3.7 minutes spent using the prototype, depending on whether optional features such as the NIHSS (National Institutes of Health Stroke Scale) calculator were used. The simple, condensed layout and features such as the NIHSS calculator, benchmark metric timers, and the final pop-up summary received the most positive feedback from each participant. Issues identified included small target sizes causing higher error rates, a lack of color in important features reducing their visibility, and the grouping of mandatory and optional information fields leading to a disjointed flow. The key design requirements include prioritizing simple dynamic layouts, sufficient target sizes to prevent errors, useful features with clear visual cues, and prompt data feedback to facilitate seamless integration.
Conclusions: A prospective data collection tool for clinicians to use during stroke treatment can add value for clinicians and, with further testing, can be integrated into workflow. The design requirements identified through this study can provide a basis for streamlining the col