Background: The high prevalence of noncommunicable diseases and the growing importance of social media have prompted health care professionals (HCPs) to use social media to deliver health information aimed at reducing lifestyle risk factors. Previous studies have acknowledged that identifying the elements that influence user engagement metrics could help HCPs create engaging posts for effective health promotion on social media. Nevertheless, few studies have attempted to comprehensively identify the elements in social media posts that could influence user engagement metrics.
Objective: This systematic review aimed to identify elements influencing user engagement metrics in social media posts by HCPs aimed at reducing lifestyle risk factors.
Methods: Relevant studies in English, published between January 2006 and June 2023, were identified from the MEDLINE (Ovid), Scopus, Web of Science, and CINAHL databases. Included studies examined social media posts by HCPs aimed at reducing the 4 key lifestyle risk factors and outlined elements in those posts that influenced user engagement metrics. Titles, abstracts, and full papers were screened and reviewed for eligibility. Following data extraction, a narrative synthesis was performed. All investigated elements in the included studies were categorized, and the elements in social media posts that influenced user engagement metrics were identified.
Results: A total of 19 studies were included in this review. Investigated elements were grouped into 9 categories, with 35 elements found to influence user engagement. The 3 predominant categories of elements influencing user engagement were communication using supportive or emotive elements, communication aimed toward behavioral changes, and the appearance of posts. In contrast, fewer than 3 studies reported elements influencing user engagement in the categories of post content source, social media platform, and timing of posts.
Conclusions: Findings demonstrated that supportive or emotive communication toward behavioral changes and post appearance could increase post-level interactions, indicating a favorable response from users toward posts made by HCPs. As social media continues to evolve, these elements should be continually evaluated through further research.
{"title":"Elements Influencing User Engagement in Social Media Posts on Lifestyle Risk Factors: Systematic Review.","authors":"Yan Yee Yip, Mohd Makmor-Bakry, Wei Wen Chong","doi":"10.2196/59742","DOIUrl":"https://doi.org/10.2196/59742","url":null,"abstract":"<p><strong>Background: </strong>The high prevalence of noncommunicable diseases and the growing importance of social media have prompted health care professionals (HCPs) to use social media to deliver health information aimed at reducing lifestyle risk factors. Previous studies have acknowledged that the identification of elements that influence user engagement metrics could help HCPs in creating engaging posts toward effective health promotion on social media. Nevertheless, few studies have attempted to comprehensively identify a list of elements in social media posts that could influence user engagement metrics.</p><p><strong>Objective: </strong>This systematic review aimed to identify elements influencing user engagement metrics in social media posts by HCPs aimed to reduce lifestyle risk factors.</p><p><strong>Methods: </strong>Relevant studies in English, published between January 2006 and June 2023 were identified from MEDLINE or OVID, Scopus, Web of Science, and CINAHL databases. Included studies were those that examined social media posts by HCPs aimed at reducing the 4 key lifestyle risk factors. Additionally, the studies also outlined elements in social media posts that influenced user engagement metrics. The titles, abstracts, and full papers were screened and reviewed for eligibility. Following data extraction, narrative synthesis was performed. All investigated elements in the included studies were categorized. The elements in social media posts that influenced user engagement metrics were identified.</p><p><strong>Results: </strong>A total of 19 studies were included in this review. Investigated elements were grouped into 9 categories, with 35 elements found to influence user engagement. The 3 predominant categories of elements influencing user engagement were communication using supportive or emotive elements, communication aimed toward behavioral changes, and the appearance of posts. In contrast, the source of post content, social media platform, and timing of post had less than 3 studies with elements influencing user engagement.</p><p><strong>Conclusions: </strong>Findings demonstrated that supportive or emotive communication toward behavioral changes and post appearance could increase postlevel interactions, indicating a favorable response from the users toward posts made by HCPs. As social media continues to evolve, these elements should be constantly evaluated through further research.</p>","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e59742"},"PeriodicalIF":5.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142693033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haeun Lee, Seok Kim, Hui-Woun Moon, Ho-Young Lee, Kwangsoo Kim, Se Young Jung, Sooyoung Yoo
Background: Accurate hospital length of stay (LoS) prediction enables efficient resource management. Conventional LoS prediction models with limited covariates and nonstandardized data have limited reproducibility when applied to the general population.
Objective: In this study, we developed and validated a machine learning (ML)-based LoS prediction model for planned admissions using the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM).
Methods: Retrospective patient-level prediction models used electronic health record (EHR) data converted to the OMOP CDM (version 5.3) from Seoul National University Bundang Hospital (SNUBH) in South Korea. The study included 137,437 hospital admission episodes between January 2016 and December 2020. Covariates from the patient, condition occurrence, medication, observation, measurement, procedure, and visit occurrence tables were included in the analysis. To perform feature selection, we applied Lasso regularization in the logistic regression. The primary outcome was an LoS of 7 days or longer, while the secondary outcome was an LoS of 3 days or longer. The prediction models were developed using 6 ML algorithms, with the training and test sets split in a 7:3 ratio. The performance of each model was evaluated based on the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Shapley Additive Explanations (SHAP) analysis measured feature importance, while calibration plots assessed the reliability of the prediction models. External validation of the developed models was performed at an independent institution, the Seoul National University Hospital.
Results: The final sample included 129,938 patient entry events in the planned admissions. The Extreme Gradient Boosting (XGB) model achieved the best performance in binary classification for predicting an LoS of 7 days or longer, with an AUROC of 0.891 (95% CI 0.887-0.894) and an AUPRC of 0.819 (95% CI 0.813-0.826) on the internal test set. The Light Gradient Boosting (LGB) model performed the best in the multiclassification for predicting an LoS of 3 days or more, with an AUROC of 0.901 (95% CI 0.898-0.904) and an AUPRC of 0.770 (95% CI 0.762-0.779). The most important features contributing to the models were the operation performed, frequency of previous outpatient visits, patient admission department, age, and day of admission. The random forest (RF) model showed robust performance in the external validation set, achieving an AUROC of 0.804 (95% CI 0.802-0.807).
Conclusions: The use of the OMOP CDM in predicting hospital LoS for planned admissions demonstrates promising predictive capabilities for stays of varying durations. It underscores the advantage of standardized data in achieving reproducible results. This approach should serve as a model for enhancing operational efficiency and patient care coordination in health care institutions.
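The pipeline described in the methods (Lasso-based feature selection followed by a gradient-boosted classifier on a 7:3 split, evaluated with AUROC and AUPRC) can be illustrated with a short sketch. This is not the authors' code: the feature matrix X, the binary label y (LoS ≥7 days), and all hyperparameters below are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): Lasso feature selection, then an XGBoost
# classifier evaluated with AUROC and AUPRC on a 7:3 train/test split.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import roc_auc_score, average_precision_score
from xgboost import XGBClassifier

def train_los_model(X, y, seed=42):
    # X: preprocessed OMOP-derived covariates; y: 1 if LoS >= 7 days (hypothetical inputs)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)

    # L1-regularized logistic regression used as the Lasso-style feature selector
    selector = SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000))
    selector.fit(X_train, y_train)
    X_train_sel = selector.transform(X_train)
    X_test_sel = selector.transform(X_test)

    # Gradient-boosted classifier; hyperparameters are illustrative only
    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                          eval_metric="logloss", random_state=seed)
    model.fit(X_train_sel, y_train)

    prob = model.predict_proba(X_test_sel)[:, 1]
    return {"AUROC": roc_auc_score(y_test, prob),
            "AUPRC": average_precision_score(y_test, prob)}
```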
{"title":"Hospital Length of Stay Prediction for Planned Admissions Using Observational Medical Outcomes Partnership Common Data Model: Retrospective Study.","authors":"Haeun Lee, Seok Kim, Hui-Woun Moon, Ho-Young Lee, Kwangsoo Kim, Se Young Jung, Sooyoung Yoo","doi":"10.2196/59260","DOIUrl":"10.2196/59260","url":null,"abstract":"<p><strong>Background: </strong>Accurate hospital length of stay (LoS) prediction enables efficient resource management. Conventional LoS prediction models with limited covariates and nonstandardized data have limited reproducibility when applied to the general population.</p><p><strong>Objective: </strong>In this study, we developed and validated a machine learning (ML)-based LoS prediction model for planned admissions using the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM).</p><p><strong>Methods: </strong>Retrospective patient-level prediction models used electronic health record (EHR) data converted to the OMOP CDM (version 5.3) from Seoul National University Bundang Hospital (SNUBH) in South Korea. The study included 137,437 hospital admission episodes between January 2016 and December 2020. Covariates from the patient, condition occurrence, medication, observation, measurement, procedure, and visit occurrence tables were included in the analysis. To perform feature selection, we applied Lasso regularization in the logistic regression. The primary outcome was an LoS of 7 days or longer, while the secondary outcome was an LoS of 3 days or longer. The prediction models were developed using 6 ML algorithms, with the training and test set split in a 7:3 ratio. The performance of each model was evaluated based on the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Shapley Additive Explanations (SHAP) analysis measured feature importance, while calibration plots assessed the reliability of the prediction models. External validation of the developed models occurred at an independent institution, the Seoul National University Hospital.</p><p><strong>Results: </strong>The final sample included 129,938 patient entry events in the planned admissions. The Extreme Gradient Boosting (XGB) model achieved the best performance in binary classification for predicting an LoS of 7 days or longer, with an AUROC of 0.891 (95% CI 0.887-0.894) and an AUPRC of 0.819 (95% CI 0.813-0.826) on the internal test set. The Light Gradient Boosting (LGB) model performed the best in the multiclassification for predicting an LoS of 3 days or more, with an AUROC of 0.901 (95% CI 0.898-0.904) and an AUPRC of 0.770 (95% CI 0.762-0.779). The most important features contributing to the models were the operation performed, frequency of previous outpatient visits, patient admission department, age, and day of admission. The RF model showed robust performance in the external validation set, achieving an AUROC of 0.804 (95% CI 0.802-0.807).</p><p><strong>Conclusions: </strong>The use of the OMOP CDM in predicting hospital LoS for planned admissions demonstrates promising predictive capabilities for stays of varying durations. It underscores the advantage of standardized data in achieving reproducible results. 
This approach should serve as a model for enhancing operational efficiency and patie","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e59260"},"PeriodicalIF":5.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142687255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chen Shang, Ya Yang, Chengcheng He, Junqi Feng, Yan Li, Meimei Tian, Zhanqi Zhao, Yuan Gao, Zhe Li
Background: The sleep status of patients in the surgical intensive care unit (ICU) significantly impacts their recovery. However, the effects of surgical procedures on sleep are rarely studied.
Objective: This study aimed to investigate quantitatively the impact of traditional open surgery (TOS) versus minimally invasive surgery (MIS) on patients' first-night sleep status in a surgical ICU.
Methods: Patients transferred to the ICU after surgery were prospectively screened. Sleep status on the night of surgery was assessed with the patient- and nurse-completed Richards-Campbell Sleep Questionnaire (RCSQ) and a Huawei wearable sleep-monitoring wristband. Surgical types and sleep parameters were analyzed.
Results: A total of 61 patients were enrolled. Compared to patients in the TOS group, patients in the MIS group had a higher nurse-RCSQ score (mean 60.9, SD 16.9 vs mean 51.2, SD 17.3; P=.03), self-RCSQ score (mean 58.6, SD 16.2 vs mean 49.5, SD 14.8; P=.03), and Huawei sleep score (mean 77.9, SD 4.5 vs mean 68.6, SD 11.1; P<.001). Quantitative sleep analysis of Huawei wearable data showed a longer total sleep period (mean 503.0, SD 91.4 vs mean 437.9, SD 144.0 min; P=.04), a longer rapid eye movement sleep period (mean 81.0, SD 52.1 vs mean 55.8, SD 44.5 min; P=.047), and a higher deep sleep continuity score (mean 56.4, SD 7.0 vs mean 47.5, SD 12.1; P=.001) in the MIS group.
Conclusions: MIS, compared to TOS, contributed to higher sleep quality for patients in the ICU after surgery.
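The abstract does not state which statistical test produced the reported P values; a Welch t-test on per-patient scores is one conventional choice for mean (SD) comparisons of this kind. The sketch below is illustrative only, with hypothetical score arrays as inputs.

```python
# Illustrative group comparison of RCSQ or Huawei sleep scores between the MIS and TOS
# groups using Welch's t-test (unequal variances). Inputs are hypothetical per-patient scores.
import numpy as np
from scipy import stats

def compare_sleep_scores(mis_scores, tos_scores):
    mis = np.asarray(mis_scores, dtype=float)
    tos = np.asarray(tos_scores, dtype=float)
    t, p = stats.ttest_ind(mis, tos, equal_var=False)  # Welch's t-test
    return {"mean_MIS": mis.mean(), "sd_MIS": mis.std(ddof=1),
            "mean_TOS": tos.mean(), "sd_TOS": tos.std(ddof=1),
            "t": t, "p": p}
```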
{"title":"Quantitative Impact of Traditional Open Surgery and Minimally Invasive Surgery on Patients' First-Night Sleep Status in the Intensive Care Unit: Prospective Cohort Study.","authors":"Chen Shang, Ya Yang, Chengcheng He, Junqi Feng, Yan Li, Meimei Tian, Zhanqi Zhao, Yuan Gao, Zhe Li","doi":"10.2196/56777","DOIUrl":"https://doi.org/10.2196/56777","url":null,"abstract":"<p><strong>Background: </strong>The sleep status of patients in the surgical intensive care unit (ICU) significantly impacts their recoveries. However, the effects of surgical procedures on sleep are rarely studied.</p><p><strong>Objective: </strong>This study aimed to investigate quantitatively the impact of traditional open surgery (TOS) versus minimally invasive surgery (MIS) on patients' first-night sleep status in a surgical ICU.</p><p><strong>Methods: </strong>Patients transferred to the ICU after surgery were prospectively screened. The sleep status on the night of surgery was assessed by the patient- and nurse-completed Richards-Campbell Sleep Questionnaire (RCSQ) and Huawei wearable sleep monitoring wristband. Surgical types and sleep parameters were analyzed.</p><p><strong>Results: </strong>A total of 61 patients were enrolled. Compared to patients in the TOS group, patients in the MIS group had a higher nurse-RCSQ score (mean 60.9, SD 16.9 vs mean 51.2, SD 17.3; P=.03), self-RCSQ score (mean 58.6, SD 16.2 vs mean 49.5, SD 14.8; P=.03), and Huawei sleep score (mean 77.9, SD 4.5 vs mean 68.6, SD 11.1; P<.001). Quantitative sleep analysis of Huawei wearable data showed a longer total sleep period (mean 503.0, SD 91.4 vs mean 437.9, SD 144.0 min; P=.04), longer rapid eye movement sleep period (mean 81.0, 52.1 vs mean 55.8, SD 44.5 min; P=.047), and higher deep sleep continuity score (mean 56.4, SD 7.0 vs mean 47.5, SD 12.1; P=.001) in the MIS group.</p><p><strong>Conclusions: </strong>MIS, compared to TOS, contributed to higher sleep quality for patients in the ICU after surgery.</p>","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e56777"},"PeriodicalIF":5.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142693037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Given the complexity and diversity of lichenoid vulvar disease (LVD) risk factors, it is crucial to actively explore these factors and construct personalized warning models using relevant clinical variables to assess disease risk in patients. Yet, to date, there has been insufficient research, both nationwide and internationally, on risk factors and warning models for LVD. In light of these gaps, this study represents the first systematic exploration of the risk factors associated with LVD.
Objective: The risk factors of LVD in women were explored and a medically evidence-based warning model was constructed to provide an early alert tool for the high-risk target population. The model can be applied in the clinic to identify high-risk patients and to evaluate its accuracy and practicality in predicting LVD in women. Simultaneously, it can also enhance the diagnostic and treatment proficiency of medical personnel in primary community health service centers, which is of great significance in reducing overall health care spending and disease burden.
Methods: A total of 2990 patients who attended West China Second Hospital of Sichuan University from January 2013 to December 2017 were selected as study candidates and were divided into 1218 cases in the normal vulvovagina group (group 0) and 1772 cases in the lichenoid vulvar disease group (group 1) according to the results of the case examination. We investigated and collected routine examination data from patients for intergroup comparisons, included factors with significant differences in the multifactorial analysis, and constructed logistic regression, random forest, gradient boosting machine (GBM), AdaBoost, eXtreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost) analysis models. The predictive efficacy of these 6 models was evaluated using the receiver operating characteristic curve and the area under the curve.
Results: Univariate analysis revealed that vaginitis, urinary incontinence, humidity of the long-term residential environment, spicy dietary habits, regular intake of coffee or caffeinated beverages, daily sleep duration, diabetes mellitus, smoking history, presence of autoimmune diseases, menopausal status, and hypertension were all significant risk factors affecting female LVD. Furthermore, the area under the receiver operating characteristic curve, accuracy, sensitivity, and F1-score of the GBM warning model were notably higher than those of the other 5 predictive analysis models. The GBM analysis model indicated that menopausal status had the strongest impact on female LVD, showing a positive correlation, followed by the presence of autoimmune diseases, which also displayed a positive dependency.
Conclusions: In accordance with evidence-based medicine, the construction of a predictive warning model for female LVD can be used to identify high-risk populations at an early stage.
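As a rough illustration of the model comparison described in the methods, the sketch below fits the 6 named classifier families on a held-out split and ranks them by area under the ROC curve. It is not the study's code; X, y, the split, and all hyperparameters are placeholders, and the xgboost and catboost packages are assumed to be installed.

```python
# Illustrative comparison of the 6 classifier families named above, ranked by ROC AUC
# on a held-out test split. All inputs and settings are hypothetical.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              AdaBoostClassifier)
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

def rank_models(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=seed)
    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=seed),
        "gbm": GradientBoostingClassifier(random_state=seed),
        "adaboost": AdaBoostClassifier(random_state=seed),
        "xgboost": XGBClassifier(eval_metric="logloss", random_state=seed),
        "catboost": CatBoostClassifier(verbose=0, random_state=seed),
    }
    aucs = {}
    for name, clf in models.items():
        clf.fit(X_tr, y_tr)
        aucs[name] = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    # Highest AUC first
    return dict(sorted(aucs.items(), key=lambda kv: kv[1], reverse=True))
```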
{"title":"Development and Validation of a Machine Learning-Based Early Warning Model for Lichenoid Vulvar Disease: Prediction Model Development Study.","authors":"Jian Meng, Xiaoyu Niu, Can Luo, Yueyue Chen, Qiao Li, Dongmei Wei","doi":"10.2196/55734","DOIUrl":"https://doi.org/10.2196/55734","url":null,"abstract":"<p><strong>Background: </strong>Given the complexity and diversity of lichenoid vulvar disease (LVD) risk factors, it is crucial to actively explore these factors and construct personalized warning models using relevant clinical variables to assess disease risk in patients. Yet, to date, there has been insufficient research, both nationwide and internationally, on risk factors and warning models for LVD. In light of these gaps, this study represents the first systematic exploration of the risk factors associated with LVD.</p><p><strong>Objective: </strong>The risk factors of LVD in women were explored and a medically evidence-based warning model was constructed to provide an early alert tool for the high-risk target population. The model can be applied in the clinic to identify high-risk patients and evaluate its accuracy and practicality in predicting LVD in women. Simultaneously, it can also enhance the diagnostic and treatment proficiency of medical personnel in primary community health service centers, which is of great significance in reducing overall health care spending and disease burden.</p><p><strong>Methods: </strong>A total of 2990 patients who attended West China Second Hospital of Sichuan University from January 2013 to December 2017 were selected as the study candidates and were divided into 1218 cases in the normal vulvovagina group (group 0) and 1772 cases in the lichenoid vulvar disease group (group 1) according to the results of the case examination. We investigated and collected routine examination data from patients for intergroup comparisons, included factors with significant differences in multifactorial analysis, and constructed logistic regression, random forests, gradient boosting machine (GBM), adaboost, eXtreme Gradient Boosting, and Categorical Boosting analysis models. The predictive efficacy of these six models was evaluated using receiver operating characteristic curve and area under the curve.</p><p><strong>Results: </strong>Univariate analysis revealed that vaginitis, urinary incontinence, humidity of the long-term residential environment, spicy dietary habits, regular intake of coffee or caffeinated beverages, daily sleep duration, diabetes mellitus, smoking history, presence of autoimmune diseases, menopausal status, and hypertension were all significant risk factors affecting female LVD. Furthermore, the area under the receiver operating characteristic curve, accuracy, sensitivity, and F<sub>1</sub>-score of the GBM warning model were notably higher than the other 5 predictive analysis models. 
The GBM analysis model indicated that menopausal status had the strongest impact on female LVD, showing a positive correlation, followed by the presence of autoimmune diseases, which also displayed a positive dependency.</p><p><strong>Conclusions: </strong>In accordance with evidence-based medicine, the construction of a predictive warning model for female LVD can be used to identify high-risk populations at an early sta","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e55734"},"PeriodicalIF":5.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142693029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Systemic inflammatory response syndrome (SIRS) is a serious postoperative complication among older adult surgical patients that frequently develops into sepsis or even death. Notably, the incidences of SIRS and sepsis steadily increase with age. It is important to identify the risk of postoperative SIRS for older adult patients at a sufficiently early stage, which would allow preemptive individualized enhanced therapy to be conducted to improve the prognosis of older adult patients. In recent years, machine learning (ML) models have been deployed by researchers for many tasks, including disease prediction and risk stratification, exhibiting good application potential.
Objective: We aimed to develop and validate an individualized predictive model to identify susceptible and high-risk populations for SIRS in older adult patients to instruct appropriate early interventions.
Methods: Data for surgical patients aged ≥65 years from September 2015 to September 2020 in 3 independent medical centers were retrieved and analyzed. The eligible patient cohort in the Third Affiliated Hospital of Sun Yat-sen University was randomly separated into an 80% training set (2882 patients) and a 20% internal validation set (720 patients). We developed 4 ML models to predict postoperative SIRS. The area under the receiver operating characteristic curve (AUC), F1 score, Brier score, and calibration curve were used to evaluate model performance. The model with the best performance was further validated in the other 2 independent data sets involving 844 and 307 cases, respectively.
Results: The incidences of SIRS in the 3 medical centers were 24.3% (876/3602), 29.6% (250/844), and 6.5% (20/307), respectively. We identified 15 variables that were significantly associated with postoperative SIRS and used in 4 ML models to predict postoperative SIRS. A balanced cutoff between sensitivity and specificity was chosen to ensure as high a true positive as possible. The random forest classifier (RF) model showed the best overall performance to predict postoperative SIRS, with an AUC of 0.751 (95% CI 0.709-0.793), sensitivity of 0.682, specificity of 0.681, and F1 score of 0.508 in the internal validation set and higher AUCs in the external validation-1 set (0.759, 95% CI 0.723-0.795) and external validation-2 set (0.804, 95% CI 0.746-0.863).
Conclusions: We developed and validated a generalizable RF model to predict postoperative SIRS in older adult patients, enabling clinicians to screen susceptible and high-risk patients and implement early individualized interventions. An online risk calculator was developed to make the RF model accessible to anesthesiologists and peers around the world.
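The "balanced cutoff between sensitivity and specificity" mentioned in the results can be illustrated by scanning the ROC curve for the threshold at which the two are closest. This is a sketch under that interpretation, not the authors' implementation; y_true and y_prob stand for hypothetical validation-set labels and predicted probabilities.

```python
# Illustrative choice of a probability threshold where sensitivity ~= specificity.
import numpy as np
from sklearn.metrics import roc_curve

def balanced_cutoff(y_true, y_prob):
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    specificity = 1 - fpr
    idx = int(np.argmin(np.abs(tpr - specificity)))  # point of closest balance
    return {"threshold": float(thresholds[idx]),
            "sensitivity": float(tpr[idx]),
            "specificity": float(specificity[idx])}
```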
{"title":"Identification of a Susceptible and High-Risk Population for Postoperative Systemic Inflammatory Response Syndrome in Older Adults: Machine Learning-Based Predictive Model.","authors":"Haiyan Mai, Yaxin Lu, Yu Fu, Tongsen Luo, Xiaoyue Li, Yihan Zhang, Zifeng Liu, Yuenong Zhang, Shaoli Zhou, Chaojin Chen","doi":"10.2196/57486","DOIUrl":"10.2196/57486","url":null,"abstract":"<p><strong>Background: </strong>Systemic inflammatory response syndrome (SIRS) is a serious postoperative complication among older adult surgical patients that frequently develops into sepsis or even death. Notably, the incidences of SIRS and sepsis steadily increase with age. It is important to identify the risk of postoperative SIRS for older adult patients at a sufficiently early stage, which would allow preemptive individualized enhanced therapy to be conducted to improve the prognosis of older adult patients. In recent years, machine learning (ML) models have been deployed by researchers for many tasks, including disease prediction and risk stratification, exhibiting good application potential.</p><p><strong>Objective: </strong>We aimed to develop and validate an individualized predictive model to identify susceptible and high-risk populations for SIRS in older adult patients to instruct appropriate early interventions.</p><p><strong>Methods: </strong>Data for surgical patients aged ≥65 years from September 2015 to September 2020 in 3 independent medical centers were retrieved and analyzed. The eligible patient cohort in the Third Affiliated Hospital of Sun Yat-sen University was randomly separated into an 80% training set (2882 patients) and a 20% internal validation set (720 patients). We developed 4 ML models to predict postoperative SIRS. The area under the receiver operating curve (AUC), F<sub>1</sub> score, Brier score, and calibration curve were used to evaluate the model performance. The model with the best performance was further validated in the other 2 independent data sets involving 844 and 307 cases, respectively.</p><p><strong>Results: </strong>The incidences of SIRS in the 3 medical centers were 24.3% (876/3602), 29.6% (250/844), and 6.5% (20/307), respectively. We identified 15 variables that were significantly associated with postoperative SIRS and used in 4 ML models to predict postoperative SIRS. A balanced cutoff between sensitivity and specificity was chosen to ensure as high a true positive as possible. The random forest classifier (RF) model showed the best overall performance to predict postoperative SIRS, with an AUC of 0.751 (95% CI 0.709-0.793), sensitivity of 0.682, specificity of 0.681, and F<sub>1</sub> score of 0.508 in the internal validation set and higher AUCs in the external validation-1 set (0.759, 95% CI 0.723-0.795) and external validation-2 set (0.804, 95% CI 0.746-0.863).</p><p><strong>Conclusions: </strong>We developed and validated a generalizable RF model to predict postoperative SIRS in older adult patients, enabling clinicians to screen susceptible and high-risk patients and implement early individualized interventions. 
An online risk calculator to make the RF model accessible to anesthesiologists and peers around the world was developed.</p>","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":" ","pages":"e57486"},"PeriodicalIF":5.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142582714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pauline DeLange Martinez, Daniel Tancredi, Misha Pavel, Lorena Garcia, Heather M Young
Background: Studies show that the use of information and communications technologies (ICTs), including smartphones, tablets, computers, and the internet, varies by demographic factors such as age, gender, and educational attainment. However, the connections between ICT use and factors such as ethnicity and English proficiency, especially among Asian American older adults, remain less explored. The technology acceptance model (TAM) suggests that 2 key attitudinal factors, perceived usefulness (PU) and perceived ease of use (PEOU), influence technology acceptance. While the TAM has been adapted for older adults in China, Taiwan, Singapore, and Korea, it has not been tested among Asian American older adults, a population that is heterogeneous and experiences language barriers in the United States.
Objective: This study aims to examine the relationships among demographics (age, gender, educational attainment, ethnicity, and English proficiency), PU, PEOU, and ICT use among low-income Asian American older adults. Two outcomes were examined: smartphone use and ICT use, each measured by years of experience and current frequency of use.
Methods: This was a secondary data analysis of a cross-sectional baseline survey of the Lighthouse Project, which provided free broadband, ICT devices, and digital literacy training to residents living in 8 affordable senior housing communities across California. This analysis focused on Asian participants aged ≥62 years (N=392), specifically those of Korean, Chinese, Vietnamese, Filipino, and other Asian ethnicities (eg, Hmong and Japanese). Hypotheses were examined using descriptive statistics, correlation analysis, and hierarchical regression analysis.
Results: Younger age, higher education, and greater English proficiency were positively associated with smartphone use (age: β=-.202; P<.001; education: β=.210; P<.001; and English proficiency: β=.124; P=.048) and ICT use (age: β=-.157; P=.002; education: β=.215; P<.001; and English proficiency: β=.152; P=.01). Male gender was positively associated with PEOU (β=.111; P=.047) but not with PU (β=-.031; P=.59), smartphone use (β=.023; P=.67), or ICT use (β=.078; P=.16). Ethnicity was a significant predictor of PU (F(4,333)=5.046; P<.001), PEOU (F(4,345)=4.299; P=.002), and ICT use (F(4,350)=3.177; P=.01), with Chinese participants reporting higher levels than Korean participants, who were the reference group (β=.143; P=.007). PU and PEOU were positively correlated with each other (r=0.139, 95% CI 0.037-0.237; P=.007), and both were significant predictors of smartphone use (PU: β=.158; P=.002 and PEOU: β=.166; P=.002) and ICT use (PU: β=.117; P=.02 and PEOU: β=.22; P<.001), even when controlling for demographic variables.
Conclusions: The findings support the use of the TAM among low-income Asian American older adults. In addition, eth…
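The hierarchical regression named in the methods can be sketched as two nested ordinary least squares models: demographic covariates first, then PU and PEOU added in a second step, with the change in R² indicating their incremental contribution. The column names and model form below are assumptions for illustration, not the authors' specification.

```python
# Illustrative two-step hierarchical regression with statsmodels; column names are hypothetical.
import statsmodels.formula.api as smf

def hierarchical_regression(df):
    demographics = "age + gender + education + english_proficiency + C(ethnicity)"
    step1 = smf.ols(f"ict_use ~ {demographics}", data=df).fit()
    step2 = smf.ols(f"ict_use ~ {demographics} + pu + peou", data=df).fit()
    return {"r2_step1": step1.rsquared,
            "r2_step2": step2.rsquared,
            "delta_r2": step2.rsquared - step1.rsquared,   # incremental variance explained
            "beta_pu": step2.params["pu"],
            "beta_peou": step2.params["peou"]}
```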
{"title":"Technology Acceptance Among Low-Income Asian American Older Adults: Cross-Sectional Survey Analysis.","authors":"Pauline DeLange Martinez, Daniel Tancredi, Misha Pavel, Lorena Garcia, Heather M Young","doi":"10.2196/52498","DOIUrl":"https://doi.org/10.2196/52498","url":null,"abstract":"<p><strong>Background: </strong>Studies show that the use of information and communications technologies (ICTs), including smartphones, tablets, computers, and the internet, varies by demographic factors such as age, gender, and educational attainment. However, the connections between ICT use and factors such as ethnicity and English proficiency, especially among Asian American older adults, remain less explored. The technology acceptance model (TAM) suggests that 2 key attitudinal factors, perceived usefulness (PU) and perceived ease of use (PEOU), influence technology acceptance. While the TAM has been adapted for older adults in China, Taiwan, Singapore, and Korea, it has not been tested among Asian American older adults, a population that is heterogeneous and experiences language barriers in the United States.</p><p><strong>Objective: </strong>This study aims to examine the relationships among demographics (age, gender, educational attainment, ethnicity, and English proficiency), PU, PEOU, and ICT use among low-income Asian American older adults. Two outcomes were examined: smartphone use and ICT use, each measured by years of experience and current frequency of use.</p><p><strong>Methods: </strong>This was a secondary data analysis from a cross-sectional baseline survey of the Lighthouse Project, which provided free broadband, ICT devices, and digital literacy training to residents living in 8 affordable senior housing communities across California. This analysis focused on Asian participants aged ≥62 years (N=392), specifically those of Korean, Chinese, Vietnamese, Filipino, and other Asian ethnicities (eg, Hmong and Japanese). Hypotheses were examined using descriptive statistics, correlation analysis, and hierarchical regression analysis.</p><p><strong>Results: </strong>Younger age, higher education, and greater English proficiency were positively associated with smartphone use (age: β=-.202; P<.001; education: β=.210; P<.001; and English proficiency: β=.124; P=.048) and ICT use (age: β=-.157; P=.002; education: β=.215; P<.001; and English proficiency: β=.152; P=.01). Male gender was positively associated with PEOU (β=.111; P=.047) but not with PU (β=-.031; P=.59), smartphone use (β=.023; P=.67), or ICT use (β=.078; P=.16). Ethnicity was a significant predictor of PU (F<sub>4,333</sub>=5.046; P<.001), PEOU (F<sub>4,345</sub>=4.299; P=.002), and ICT use (F<sub>4,350</sub>=3.177; P=.01), with Chinese participants reporting higher levels than Korean participants, who were the reference group (β=.143; P=.007). PU and PEOU were positively correlated with each other (r=0.139, 95% CI=0.037-0.237; P=.007), and both were significant predictors of smartphone use (PU: β=.158; P=.002 and PEOU: β=.166; P=.002) and ICT use (PU: β=.117; P=.02 and PEOU: β=0.22; P<.001), even when controlling for demographic variables.</p><p><strong>Conclusions: </strong>The findings support the use of the TAM among low-income Asian American older adults. 
In addition, eth","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e52498"},"PeriodicalIF":5.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142693040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Martin Vinther Bavngaard, Anne Lund, Björg Thordardottir, Erik Børve Rasmussen
Background: European health care systems regard information and communication technology as a necessity in supporting future health care provision by community home care services to home-dwelling older adults. Communication technology enabling synchronous communication between 2 or more human actors at a distance constitutes a significant component of this ambition, but few reviews have synthesized research relating to this particular type of technology. As evaluations of information and communication technology in health care services favor measurements of effectiveness over the experiences and dynamics of putting these technologies into use, the nuances involved in technology implementation processes are often omitted.
Objective: This review aims to systematically identify and synthesize qualitative findings on the uses and experiences of synchronous communication technology for home-dwelling older adults in a home care services context.
Methods: The review follows the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 checklist for reporting. We conducted a cross-disciplinary search in 5 databases for papers published between 2011 and 2023, which yielded 4210 citations. A total of 13 studies were included after 4 screening phases and a subsequent appraisal of methodological quality guided by the Critical Appraisal Skills Programme tool. From these, prespecified data were extracted and incorporated in a 3-stage thematic synthesis producing 4 analytical themes.
Results: The first theme presented the multiple trajectories that older users' technology acceptance could take, namely straightforward, gradual, partial, and resistance-laden, notwithstanding outright rejection. It also emphasized both instrumental and emotional efforts by the older adults' relatives in facilitating acceptance. Moving beyond acceptance, the second theme foregrounded the different types of work involved in attempts to integrate the technology by older users, their relatives, and health care providers. Theme 3 highlighted how the older users' physical and cognitive conditions formed a contextual backdrop challenging this integration work, together with challenges related to spatial context. Finally, consequences derived from taking the technology into use could be both enabling and complicating in nature, as integration reconfigured the way users related to themselves and each other.
Conclusions: The acceptance and integration of synchronous communication technology for older adults involves multiple user groups in work tending to the technology, to the users themselves, and to each other through intergroup negotiations. This review's original contribution consists of its attention to the dynamics across different user groups in deriving consequences from using the technology in question, in addition to its assertion that such consequ…
{"title":"The Uses and Experiences of Synchronous Communication Technology for Home-Dwelling Older Adults in a Home Care Services Context: Qualitative Systematic Review.","authors":"Martin Vinther Bavngaard, Anne Lund, Björg Thordardottir, Erik Børve Rasmussen","doi":"10.2196/59285","DOIUrl":"https://doi.org/10.2196/59285","url":null,"abstract":"<p><strong>Background: </strong>European health care systems regard information and communication technology as a necessity in supporting future health care provision by community home care services to home-dwelling older adults. Communication technology enabling synchronous communication between 2 or more human actors at a distance constitutes a significant component of this ambition, but few reviews have synthesized research relating to this particular type of technology. As evaluations of information and communication technology in health care services favor measurements of effectiveness over the experiences and dynamics of putting these technologies into use, the nuances involved in technology implementation processes are often omitted.</p><p><strong>Objective: </strong>This review aims to systematically identify and synthesize qualitative findings on the uses and experiences of synchronous communication technology for home-dwelling older adults in a home care services context.</p><p><strong>Methods: </strong>The review follows the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 checklist for reporting. We conducted a cross-disciplinary search in 5 databases for papers published between 2011 and 2023 that yielded 4210 citations. A total of 13 studies were included after 4 screening phases and a subsequent appraisal of methodological quality guided by the Critical Appraisal Skills Programme tool. From these, prespecified data were extracted and incorporated in a 3-stage thematic synthesis producing 4 analytical themes.</p><p><strong>Results: </strong>The first theme presented the multiple trajectories that older users' technology acceptance could take, namely straightforward, gradual, partial, and resistance laden, notwithstanding outright rejection. It also emphasized both instrumental and emotional efforts by the older adults' relatives in facilitating acceptance. Moving beyond acceptance, the second theme foregrounded the different types of work involved in attempts to integrate the technology by older users, their relatives, and health care providers. Theme 3 highlighted how the older users' physical and cognitive conditions formed a contextual backdrop challenging this integration work, together with challenges related to spatial context. Finally, consequences derived from taking the technology into use could be of a both enabling and complicating nature as integration reconfigured the way users related to themselves and each other.</p><p><strong>Conclusions: </strong>The acceptance and integration of synchronous communication technology for older adults involves multiple user groups in work tending to the technology, to the users themselves, and to each other through intergroup negotiations. 
This review's original contribution consists of its attention to the dynamics across different user groups in deriving consequences from using the technology in question, in addition to its assertion that such consequ","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e59285"},"PeriodicalIF":5.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142693044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minjae Yoon, Ji Hyun Lee, In-Cheol Kim, Ju-Hee Lee, Mi-Na Kim, Hack-Lyoung Kim, Sunki Lee, In Jai Kim, Seonghoon Choi, Sung-Ji Park, Taeho Hur, Musarrat Hussain, Sungyoung Lee, Dong-Ju Choi
Background: Adherence to oral anticoagulant therapy is essential to prevent ischemic stroke in patients with atrial fibrillation (AF).
Objective: This study aimed to evaluate whether smartphone app-based interventions improve medication adherence in patients with AF.
Methods: This open-label, multicenter randomized controlled trial (ADHERE-App [Self-Awareness of Drug Adherence to Edoxaban Using an Automatic App Feedback System] study) enrolled patients with AF treated with edoxaban for stroke prevention. They were randomly assigned to app-conditioned feedback (intervention; n=248) and conventional treatment (control; n=250) groups. The intervention group received daily alerts via a smartphone app to take edoxaban and measure blood pressure and heart rate at specific times. The control group received only standard, guideline-recommended care. The primary end point was edoxaban adherence, measured by pill count at 3 or 6 months. Medication adherence and the proportion of adequate medication adherence, which was defined as ≥95% of continuous medication adherence, were evaluated.
Results: Medication adherence at 3 or 6 months was not significantly different between the intervention and control groups (median 98%, IQR 95%-100% vs median 98%, IQR 91%-100% at 3 months, P=.06; median 98%, IQR 94.5%-100% vs median 97.5%, IQR 92.8%-100% at 6 months, P=.15). However, the proportion of adequate medication adherence (≥95%) was significantly higher in the intervention group at both time points (76.8% vs 64.7% at 3 months, P=.01; 73.9% vs 61% at 6 months, P=.007). Among patients aged >65 years, the intervention group showed a higher medication adherence value and a higher proportion of adequate medication adherence (≥95%) at 6 months.
Conclusions: There was no difference in edoxaban adherence between the groups. However, the proportion of adequate medication adherence was higher in the intervention group, and the benefit of the smartphone app-based intervention on medication adherence was more pronounced among older patients than among younger patients. Given the low adherence to oral anticoagulants, especially among older adults, using a smartphone app may potentially improve medication adherence.
Trial registration: International Clinical Trials Registry Platform KCT0004754; https://cris.nih.go.kr/cris/search/detailSearch.do?seq=28496&search_page=L.
International Registered Report Identifier (IRRID): RR2-10.1136/bmjopen-2021-048777.
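The between-arm comparison of the proportion of patients with adequate adherence (≥95%) can be illustrated with a chi-square test on the 2×2 table of counts. The abstract reports percentages and P values but not the test used, so this is only one plausible reconstruction, and all inputs are hypothetical.

```python
# Illustrative comparison of adequate-adherence proportions between arms (chi-square test).
from scipy.stats import chi2_contingency

def compare_adequate_adherence(n_adequate_int, n_int, n_adequate_ctrl, n_ctrl):
    # 2x2 table: rows = arm, columns = (adequate, not adequate)
    table = [[n_adequate_int, n_int - n_adequate_int],
             [n_adequate_ctrl, n_ctrl - n_adequate_ctrl]]
    chi2, p, dof, expected = chi2_contingency(table)
    return {"prop_intervention": n_adequate_int / n_int,
            "prop_control": n_adequate_ctrl / n_ctrl,
            "chi2": chi2, "p": p}
```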
{"title":"Smartphone App for Improving Self-Awareness of Adherence to Edoxaban Treatment in Patients With Atrial Fibrillation (ADHERE-App Trial): Randomized Controlled Trial.","authors":"Minjae Yoon, Ji Hyun Lee, In-Cheol Kim, Ju-Hee Lee, Mi-Na Kim, Hack-Lyoung Kim, Sunki Lee, In Jai Kim, Seonghoon Choi, Sung-Ji Park, Taeho Hur, Musarrat Hussain, Sungyoung Lee, Dong-Ju Choi","doi":"10.2196/65010","DOIUrl":"10.2196/65010","url":null,"abstract":"<p><strong>Background: </strong>Adherence to oral anticoagulant therapy is essential to prevent ischemic stroke in patients with atrial fibrillation (AF).</p><p><strong>Objective: </strong>This study aimed to evaluate whether smartphone app-based interventions improve medication adherence in patients with AF.</p><p><strong>Methods: </strong>This open-label, multicenter randomized controlled trial (ADHERE-App [Self-Awareness of Drug Adherence to Edoxaban Using an Automatic App Feedback System] study) enrolled patients with AF treated with edoxaban for stroke prevention. They were randomly assigned to app-conditioned feedback (intervention; n=248) and conventional treatment (control; n=250) groups. The intervention group received daily alerts via a smartphone app to take edoxaban and measure blood pressure and heart rate at specific times. The control group received only standard, guideline-recommended care. The primary end point was edoxaban adherence, measured by pill count at 3 or 6 months. Medication adherence and the proportion of adequate medication adherence, which was defined as ≥95% of continuous medication adherence, were evaluated.</p><p><strong>Results: </strong>Medication adherence at 3 or 6 months was not significantly different between the intervention and control groups (median 98%, IQR 95%-100% vs median 98%, IQR 91%-100% at 3 months, P=.06; median 98%, IQR 94.5%-100% vs median 97.5%, IQR 92.8%-100% at 6 months, P=.15). However, the proportion of adequate medication adherence (≥95%) was significantly higher in the intervention group at both time points (76.8% vs 64.7% at 3 months, P=.01; 73.9% vs 61% at 6 months, P=.007). Among patients aged >65 years, the intervention group showed a higher medication adherence value and a higher proportion of adequate medication adherence (≥95%) at 6 months.</p><p><strong>Conclusions: </strong>There was no difference in edoxaban adherence between the groups. However, the proportion of adequate medication adherence was higher in the intervention group, and the benefit of the smartphone app-based intervention on medication adherence was more pronounced among older patients than among younger patients. 
Given the low adherence to oral anticoagulants, especially among older adults, using a smartphone app may potentially improve medication adherence.</p><p><strong>Trial registration: </strong>International Clinical Trials Registry Platform KCT0004754; https://cris.nih.go.kr/cris/search/detailSearch.do?seq=28496&search_page=L.</p><p><strong>International registered report identifier (irrid): </strong>RR2-10.1136/bmjopen-2021-048777.</p>","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e65010"},"PeriodicalIF":5.8,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142682039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Martin Schmollinger, Jessica Gerstner, Eric Stricker, Alexander Muench, Benjamin Breckwoldt, Manuel Sigle, Peter Rosenberger, Robert Wunderlich
Background: Digitalization in disaster medicine holds significant potential to accelerate rescue operations and ultimately save lives. Mass casualty incidents demand rapid and accurate information management to coordinate effective responses. Currently, first responders manually record triage results on patient cards, and brief information is communicated to the command post via radio communication. Although this process is widely used in practice, it involves several time-consuming and error-prone tasks. To address these issues, we designed, implemented, and evaluated an app-based mobile triage system. This system allows users to document responder details, triage categories, injury patterns, GPS locations, and other important information, which can then be transmitted automatically to the incident commanders.
Objective: This study aims to design and evaluate an app-based mobile system as a triage and coordination tool for emergency and disaster medicine, comparing its effectiveness with the conventional paper-based system.
Methods: A total of 38 emergency medicine personnel participated in a within-subject experimental study, completing 2 triage sessions with 30 patient cards each: one session using the app-based mobile system and the other using the paper-based tool. The accuracy of the triages and the time taken for each session were measured. Additionally, we implemented the User Experience Questionnaire along with other items to assess participants' subjective ratings of the 2 triage tools.
Results: Our 2 (triage tool) × 2 (tool order) mixed multivariate analysis of variance revealed a significant main effect for the triage tool (P<.001). Post hoc analyses indicated that participants were significantly faster (P<.001) and more accurate (P=.005) in assigning patients to the correct triage category when using the app-based mobile system compared with the paper-based tool. Additionally, analyses showed significantly better subjective ratings for the app-based mobile system compared with the paper-based tool, in terms of both school grading (P<.001) and across all 6 scales of the User Experience Questionnaire (all P<.001). Of the 38 participants, 36 (95%) preferred the app-based mobile system. There was no significant main effect for tool order (P=.24) or session order (P=.06) in our model.
Conclusions: Our findings demonstrate that the app-based mobile system not only matches the performance of the conventional paper-based tool but may even surpass it in terms of efficiency and usability. This advancement could further enhance the potential of digitalization to optimize processes in disaster medicine, ultimately leading to the possibility of saving more lives.
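The 2 × 2 mixed design (triage tool as the within-subject factor, tool order as the between-subject factor) can be sketched for a single outcome such as completion time with a mixed ANOVA. The study reports a multivariate model, so this univariate sketch using the pingouin package and hypothetical long-format column names is illustrative only.

```python
# Illustrative mixed ANOVA for one outcome of the 2 (tool, within) x 2 (order, between) design.
# Expects long-format data with hypothetical columns: participant, tool, order, time_sec.
import pingouin as pg

def mixed_anova_completion_time(df_long):
    return pg.mixed_anova(data=df_long, dv="time_sec", within="tool",
                          between="order", subject="participant")
```

A paired comparison of accuracy could be run the same way by swapping the dependent variable, keeping the factor structure unchanged.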
{"title":"Evaluation of an App-Based Mobile Triage System for Mass Casualty Incidents: Within-Subjects Experimental Study.","authors":"Martin Schmollinger, Jessica Gerstner, Eric Stricker, Alexander Muench, Benjamin Breckwoldt, Manuel Sigle, Peter Rosenberger, Robert Wunderlich","doi":"10.2196/65728","DOIUrl":"10.2196/65728","url":null,"abstract":"<p><strong>Background: </strong>Digitalization in disaster medicine holds significant potential to accelerate rescue operations and ultimately save lives. Mass casualty incidents demand rapid and accurate information management to coordinate effective responses. Currently, first responders manually record triage results on patient cards, and brief information is communicated to the command post via radio communication. Although this process is widely used in practice, it involves several time-consuming and error-prone tasks. To address these issues, we designed, implemented, and evaluated an app-based mobile triage system. This system allows users to document responder details, triage categories, injury patterns, GPS locations, and other important information, which can then be transmitted automatically to the incident commanders.</p><p><strong>Objective: </strong>This study aims to design and evaluate an app-based mobile system as a triage and coordination tool for emergency and disaster medicine, comparing its effectiveness with the conventional paper-based system.</p><p><strong>Methods: </strong>A total of 38 emergency medicine personnel participated in a within-subject experimental study, completing 2 triage sessions with 30 patient cards each: one session using the app-based mobile system and the other using the paper-based tool. The accuracy of the triages and the time taken for each session were measured. Additionally, we implemented the User Experience Questionnaire along with other items to assess participants' subjective ratings of the 2 triage tools.</p><p><strong>Results: </strong>Our 2 (triage tool) × 2 (tool order) mixed multivariate analysis of variance revealed a significant main effect for the triage tool (P<.001). Post hoc analyses indicated that participants were significantly faster (P<.001) and more accurate (P=.005) in assigning patients to the correct triage category when using the app-based mobile system compared with the paper-based tool. Additionally, analyses showed significantly better subjective ratings for the app-based mobile system compared with the paper-based tool, in terms of both school grading (P<.001) and across all 6 scales of the User Experience Questionnaire (all P<.001). Of the 38 participants, 36 (95%) preferred the app-based mobile system. There was no significant main effect for tool order (P=.24) or session order (P=.06) in our model.</p><p><strong>Conclusions: </strong>Our findings demonstrate that the app-based mobile system not only matches the performance of the conventional paper-based tool but may even surpass it in terms of efficiency and usability. 
This advancement could further enhance the potential of digitalization to optimize processes in disaster medicine, ultimately leading to the possibility of saving more lives.</p>","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":" ","pages":"e65728"},"PeriodicalIF":5.8,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142545946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kaede Hasegawa, Niki O'Brien, Mabel Prendergast, Chris Agape Ajah, Ana Luisa Neves, Saira Ghafur
Background: Health care organizations globally have seen a significant increase in the frequency of cyberattacks in recent years. Cyberattacks cause massive disruptions to health service delivery and directly impact patient safety through disruption and treatment delays. Given the increasing number of cyberattacks in low- and middle-income countries (LMICs), there is a need to explore the interventions put in place to plan for cyberattacks and develop cyber resilience.
Objective: This study aimed to describe cybersecurity interventions, defined as any intervention to improve cybersecurity in a health care organization, including but not limited to organizational strategy(ies); policy(ies); protocol(s), incident plan(s), or assessment process(es); framework(s) or guidelines; and emergency planning, implemented in LMICs to date and to evaluate their impact on the likelihood and impact of attacks. The secondary objective was to describe the main barriers and facilitators for the implementation of such interventions, where reported.
Methods: A systematic search of the literature published between January 2017 and July 2024 was performed on Ovid Medline, Embase, Global Health, and Scopus using a combination of controlled terms and free text. A search of the gray literature within the same time parameters was undertaken on the websites of relevant stakeholder organizations to identify possible additional studies that met the inclusion criteria. Findings from included papers were mapped against the dimensions of the Essentials of Cybersecurity in Health Care Organizations (ECHO) framework and presented as a narrative synthesis.
Results: We included 20 studies in this review. The sample size of the majority of studies (13/20, 65%) was 1 facility to 5 facilities, and the studies were conducted in 14 countries. Studies were categorized into the thematic dimensions of the ECHO framework, including context; governance; organizational strategy; risk management; awareness, education, and training; and technical capabilities. Few studies (6/20, 30%) discussed cybersecurity intervention(s) as the primary focus of the paper; therefore, information on intervention(s) implemented had to be deduced. There was no attempt to report on the impact and outcomes in all papers except one. Facilitators and barriers identified were grouped and presented across national or regional, organizational, and individual staff levels.
Conclusions: This scoping review's findings highlight the limited body of research published on cybersecurity interventions implemented in health care organizations in LMICs and large heterogeneity across existing studies in interventions, research objectives, methods, and outcome measures used. Although complex and challenging, future research should specifically focus on the evaluation of cybersecurity interventions and their impact in order
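As a rough illustration of the mapping step described in the Methods, the sketch below tabulates a handful of study records against the ECHO dimensions named in the abstract. The dimension labels come from the review; the study names and their assignments are invented placeholders, not data from the included papers.

```python
# Illustrative sketch (not from the paper): counting how many included studies touch each
# ECHO framework dimension, as a starting point for a narrative synthesis table.
from collections import Counter

# Thematic dimensions of the ECHO framework, as listed in the review.
ECHO_DIMENSIONS = [
    "context",
    "governance",
    "organizational strategy",
    "risk management",
    "awareness, education, and training",
    "technical capabilities",
]

# Hypothetical extraction table: each included study mapped to one or more dimensions.
studies = {
    "Study A": ["governance", "risk management"],
    "Study B": ["awareness, education, and training"],
    "Study C": ["technical capabilities", "context"],
}

counts = Counter(dim for dims in studies.values() for dim in dims)
total = len(studies)
for dim in ECHO_DIMENSIONS:
    n = counts.get(dim, 0)
    print(f"{dim}: {n}/{total} studies ({100 * n / total:.0f}%)")
```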
{"title":"Cybersecurity Interventions in Health Care Organizations in Low- and Middle-Income Countries: Scoping Review.","authors":"Kaede Hasegawa, Niki O'Brien, Mabel Prendergast, Chris Agape Ajah, Ana Luisa Neves, Saira Ghafur","doi":"10.2196/47311","DOIUrl":"https://doi.org/10.2196/47311","url":null,"abstract":"<p><strong>Background: </strong>Health care organizations globally have seen a significant increase in the frequency of cyberattacks in recent years. Cyberattacks cause massive disruptions to health service delivery and directly impact patient safety through disruption and treatment delays. Given the increasing number of cyberattacks in low- and middle-income countries (LMICs), there is a need to explore the interventions put in place to plan for cyberattacks and develop cyber resilience.</p><p><strong>Objective: </strong>This study aimed to describe cybersecurity interventions, defined as any intervention to improve cybersecurity in a health care organization, including but not limited to organizational strategy(ies); policy(ies); protocol(s), incident plan(s), or assessment process(es); framework(s) or guidelines; and emergency planning, implemented in LMICs to date and to evaluate their impact on the likelihood and impact of attacks. The secondary objective was to describe the main barriers and facilitators for the implementation of such interventions, where reported.</p><p><strong>Methods: </strong>A systematic search of the literature published between January 2017 and July 2024 was performed on Ovid Medline, Embase, Global Health, and Scopus using a combination of controlled terms and free text. A search of the gray literature within the same time parameters was undertaken on the websites of relevant stakeholder organizations to identify possible additional studies that met the inclusion criteria. Findings from included papers were mapped against the dimensions of the Essentials of Cybersecurity in Health Care Organizations (ECHO) framework and presented as a narrative synthesis.</p><p><strong>Results: </strong>We included 20 studies in this review. The sample size of the majority of studies (13/20, 65%) was 1 facility to 5 facilities, and the studies were conducted in 14 countries. Studies were categorized into the thematic dimensions of the ECHO framework, including context; governance; organizational strategy; risk management; awareness, education, and training; and technical capabilities. Few studies (6/20, 30%) discussed cybersecurity intervention(s) as the primary focus of the paper; therefore, information on intervention(s) implemented had to be deduced. There was no attempt to report on the impact and outcomes in all papers except one. Facilitators and barriers identified were grouped and presented across national or regional, organizational, and individual staff levels.</p><p><strong>Conclusions: </strong>This scoping review's findings highlight the limited body of research published on cybersecurity interventions implemented in health care organizations in LMICs and large heterogeneity across existing studies in interventions, research objectives, methods, and outcome measures used. 
Although complex and challenging, future research should specifically focus on the evaluation of cybersecurity interventions and their impact in order","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e47311"},"PeriodicalIF":5.8,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142682035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}