Pub Date: 2025-08-01 | Epub Date: 2025-10-08 | DOI: 10.1055/a-2699-9179
Elham H Othman, Wasem I Al Haj, Mohammad R Alosta, Yousef Qan'Ir, Mohannad Eid Aburuz, Wesam Taher Almagharbeh
The current study examined the moderating effect of self-control on the relationship between attitudes toward cybersecurity and risky online behaviors among direct care nurses. A cross-sectional study collected data from 260 direct care nurses in Saudi Arabia using a self-reported questionnaire. Hierarchical multiple regression and simple slope analyses examined the moderating effect of self-control on the relationship between attitudes toward cybersecurity and risky online behaviors. We found that a better attitude toward cybersecurity and greater self-control predicted lower engagement in risky online behaviors. Simple slope tests revealed a significant negative association between attitude toward cybersecurity and risky online behaviors at low levels of self-control, but this association disappeared at high levels of self-control, indicating that high self-control has a protective, moderating effect on the relationship. Self-control moderates the effect of attitudes on online practices: the influence of negative attitudes on risky online behaviors is stronger when self-control is low, whereas at high levels of self-control individuals may engage in safer practices regardless of their attitudes.
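The moderation analysis described above (a regression with an attitude × self-control interaction, probed with simple slopes at ±1 SD of the moderator) can be sketched as follows. The data are simulated for illustration only; the coefficients are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 260  # sample size matching the study; data here are simulated
attitude = rng.normal(size=n)
self_control = rng.normal(size=n)
# Simulated effect: attitude lowers risky behavior mainly when self-control is low
risky = (-0.4 * attitude - 0.5 * self_control
         + 0.3 * attitude * self_control + rng.normal(size=n))

# OLS with an interaction term: risky ~ b0 + b1*att + b2*sc + b3*att*sc
X = np.column_stack([np.ones(n), attitude, self_control, attitude * self_control])
b, *_ = np.linalg.lstsq(X, risky, rcond=None)

def simple_slope(sc_level):
    """Slope of attitude on risky behavior at a fixed self-control level."""
    return b[1] + b[3] * sc_level

slope_low = simple_slope(-self_control.std())   # 1 SD below the mean
slope_high = simple_slope(self_control.std())   # 1 SD above the mean
```

With a positive interaction coefficient, the attitude slope is strongly negative at low self-control and attenuated at high self-control, which is the pattern the abstract reports.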
"Better Attitudes toward Cybersecurity and Greater Self-Control Predict Lower Risky Online Behaviors among Nurses." Applied Clinical Informatics 16(4): 1310-1318. DOI: 10.1055/a-2699-9179. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12507492/pdf/
Pub Date: 2025-08-01 | Epub Date: 2025-08-29 | DOI: 10.1055/a-2591-4016
Nida Afzal, Amy D Nguyen, Annie Lau
Aging populations strain health care systems. Assisted Living Technologies (ALTs) have emerged as a potential solution for promoting independent living among older adults; however, the real-world effect of ALTs remains unclear. This study explores the benefits and challenges (anticipated and unanticipated) of ALTs for older adults and informal caregivers across three aged care settings (residential aged care facilities [RACFs], retirement villages [RVs], and home-dwelling communities [HDCs]) in Australia. Three ALTs (fall detection sensors, sleep monitors, and smartwatches) were deployed across the three settings. The NASSS framework (Non-adoption, Abandonment, Scale-up, Spread, and Sustainability), informed by sociotechnical theories, guided analysis of the interplay between technology, user needs, and caregiving context in ALT implementation. Semistructured interviews with 14 older adults and 9 caregivers from 19 households explored user experiences. Benefits and challenges of using ALTs for older adults and informal caregivers were categorized using the consequences framework. The analysis revealed setting-specific challenges alongside benefits and challenges common across care settings, and showed how technology limitations, user needs, and caregiving context influenced these outcomes. In RACFs, where residents receive constant nursing assistance, informal caregivers faced uncertainty about who was responsible for monitoring residents. In RVs, with their strong sense of community, informal caregivers (often neighbors) were more prone to overreacting to false alarms. Shared sleeping arrangements in HDCs made interpreting sleep data challenging. Implementing ALTs in elderly care settings requires a context-sensitive approach. In RACFs, clear role definitions for informal caregivers and staff are essential. For RVs, design should support help-seeking aligned with residents' social and geographical contexts. Home-dwelling settings may benefit from advanced sleep monitoring tailored to shared living arrangements. Future ALT development should focus on real-world contexts to promote successful aging in place.
"Real-World Challenges of Using Assisted Living Technologies across Different Australian Aged Care Settings: A Qualitative Study of User Experiences." Applied Clinical Informatics 16(4): 930-942. DOI: 10.1055/a-2591-4016. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12396903/pdf/
Pub Date: 2025-08-01 | Epub Date: 2025-05-12 | DOI: 10.1055/a-2605-1847
Jonathan M Beus, Mark Mai, Nikolay P Braykov, Swaminathan Kandaswamy, Edwin Ray, David B Cundiff, Paulette Djachechi, Sarah Thompson, Azade Tabaie, Ryan Birmingham, Rishi Kamaleswaran, Evan Orenstein
Central line-associated bloodstream infections (CLABSIs) are associated with substantial pediatric morbidity and mortality. The capacity to predict which children with central lines are at greatest risk of CLABSI could inform surveillance and prevention efforts. Our team previously published in silico predictive models for CLABSI. We aimed to prospectively implement a pediatric CLABSI predictive model and achieve adequate performance in offline validation for implementation in clinical practice. The most performant predictive models were deep learning models requiring substantial pre-processing of many features into 8-hour windows covering the current day and up to 56 days prior within the current admission. To replicate this pre-processing, we created a novel infrastructure to (1) organize current-day data for all the relevant features and (2) create a staged historical data store for those same features, with application programming interfaces connecting the two. We compared the predictive performance of these scores for CLABSI in the next 48 hours against two labels: one based on manual review of positive blood cultures in children with central lines, and another based on a positive blood culture plus receipt of at least 4 days of new intravenous antibiotics. The area under the receiver-operating characteristic curve (AUROC) fell from 0.97 on retrospective data to <0.60 in prospective validation, despite multiple iterations of troubleshooting. Primary root causes included train/serve skew, feature leakage, and overfitting. Hypothesized secondary drivers were complex model specification, poor data governance, inadequate testing, challenging feature translation between real-time and historical data models, limited monitoring and logging infrastructure for troubleshooting, and suboptimal handoff between the model development and deployment teams. Bridging the gap from predictive model development to clinical deployment requires early and close coordination between data governance, data science, clinical informatics, and implementation engineers. Balancing predictive performance with implementation feasibility can accelerate the adoption of predictive clinical decision support systems.
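The 8-hour feature windowing this kind of pipeline depends on can be sketched as below. The function names and the (timestamp, name, value) event format are illustrative assumptions, not the authors' actual infrastructure; the point is that the same bucketing logic must be applied identically to real-time and historical data to avoid the train/serve skew the study describes.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def eight_hour_window(admit_time, event_time, max_days=56):
    """Index of the 8-hour window containing event_time, or None outside the lookback."""
    delta = event_time - admit_time
    if delta < timedelta(0) or delta > timedelta(days=max_days):
        return None
    return int(delta.total_seconds() // (8 * 3600))

def bucket_features(admit_time, events):
    """Group (timestamp, name, value) feature events into 8-hour windows."""
    windows = defaultdict(list)
    for t, name, value in events:
        idx = eight_hour_window(admit_time, t)
        if idx is not None:
            windows[idx].append((name, value))
    return dict(windows)
```

Running one bucketing function against both the current-day store and the staged historical store is one way to keep the two data paths consistent.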
"Performance Degradation between Development and Deployment of a Predictive Model for Central Line-Associated Bloodstream Infections in Hospitalized Children." Applied Clinical Informatics, pp. 1192-1199. DOI: 10.1055/a-2605-1847. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12473521/pdf/
Pub Date: 2025-08-01 | Epub Date: 2025-04-11 | DOI: 10.1055/a-2581-6172
John Will, Deborah Jacques, Denise Dauterman, Rachelle Torres, Glenn Doty, Kerry O'Brien, Lisa Groom
Nursing documentation burden is a growing concern in the United States health care system. Documentation in the electronic health record (EHR) contributes to perceptions of burden, and efficiency tools such as flowsheet macros are one development intended to ease it. This study aimed to evaluate whether flowsheet macros, a documentation efficiency tool in the EHR that consolidates documentation into a single click, reduce the time spent on documentation activities and in the EHR overall. Nurses in the health system were encouraged to create and use flowsheet macros for their documentation. Flowsheet documentation and time-in-system data for nurses' first and last shifts in the evaluation period were extracted from the EHR. Linear regression with control variables was used to assess whether the use of flowsheet macros for documentation reduced the time spent in flowsheets or in the EHR. The regression showed a significant negative relationship between flowsheet macro use and time in flowsheets (adjusted odds ratio [AOR] = -0.291, 95% confidence interval [CI] = -0.342 to -0.240, p < 0.001). Flowsheet macro use and time in system also had a significant negative relationship (AOR = -0.269, CI = -0.390 to -0.147, p ≤ 0.001). Subgroup analyses by department specialty showed time savings in flowsheet activities for medical-surgical, critical care, and obstetrics units; however, no significant relationship was found in emergency and rehabilitation units. Use of flowsheet macros was associated with a decrease in the time nurses spend in both flowsheets and the EHR. Adoption and time savings varied by department setting, suggesting flowsheet macros may not be applicable to all patient types or conditions. Future research should investigate whether the time savings from this tool yield benefits in perceptions of nursing documentation burden.
"Improving Nurse Documentation Time via an Electronic Health Record Documentation Efficiency Tool." Applied Clinical Informatics, pp. 796-803. DOI: 10.1055/a-2581-6172. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12373461/pdf/
Pub Date: 2025-08-01 | Epub Date: 2025-06-10 | DOI: 10.1055/a-2630-4192
Rachel Y Lee, Kenrick D Cato, Patricia C Dykes, Graham Lowenthal, Haomiao Jia, Temiloluwa Daramola, Sarah C Rossetti
The CONCERN Early Warning System (CONCERN EWS) is an artificial intelligence-based clinical decision support system (AI-CDSS) for the prediction of clinical deterioration, leveraging signals from nursing documentation patterns. While a recent multisite randomized controlled trial (RCT) demonstrated its effectiveness in reducing inpatient mortality and length of stay, evaluating implementation outcomes is essential to ensure equitable results across patient populations. This study aims to (1) assess whether clinicians' usage of the CONCERN EWS, as measured by CONCERN Detailed Prediction Screen launches, varied by patient demographic characteristics, including sex, race, ethnicity, and primary language; and (2) evaluate whether the CONCERN EWS's effectiveness in reducing the risk of in-hospital mortality varied across patient demographic groups. We conducted a retrospective observational analysis of electronic health record log files and clinical outcomes from a multisite, pragmatic, cluster-RCT involving four hospitals across two health care systems. Equity in usage was assessed by comparing CONCERN Detailed Prediction Screen launches across demographic groups, and effectiveness was examined by comparing the risk of in-hospital mortality between intervention and usual care groups using Cox proportional hazards models adjusted for patient characteristics. Clinicians' CONCERN Detailed Prediction Screen launches did not differ significantly by patients' demographic characteristics, suggesting equitable usage. The CONCERN EWS was significantly associated with reduced risk of in-hospital mortality overall (adjusted hazard ratio [HR] = 0.644, 95% CI: 0.532-0.778, p < 0.0001), with consistent effectiveness across most groups. Notably, patients whose primary language was not English experienced a greater reduction in mortality risk than patients whose primary language was English (adjusted HR = 0.419, 95% CI: 0.287-0.610, p = 0.0082). This study presents a case of evaluating equity in AI-CDSS usage and effectiveness, contributing to the limited literature. While findings suggest equitable engagement and effectiveness, ongoing evaluations are needed to understand the observed variability and ensure responsible implementation.
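Hazard ratios like those reported above come from exponentiating a Cox model's log-hazard coefficient β, with the 95% CI as exp(β ± 1.96·SE). A minimal sketch; the β and SE below are back-calculated from the reported overall HR purely for illustration, not taken from the study's model output.

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Hazard ratio point estimate and 95% CI from a Cox log-hazard coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative values back-calculated from the reported adjusted HR of 0.644
hr, lo, hi = hazard_ratio_ci(-0.440, 0.097)
```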
"Evaluating Equity in Usage and Effectiveness of the CONCERN Early Warning System." Applied Clinical Informatics, pp. 838-847. DOI: 10.1055/a-2630-4192. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12349966/pdf/
Pub Date: 2025-08-01 | Epub Date: 2025-04-25 | DOI: 10.1055/a-2594-3722
Renee Potashner, Adam P Yan
Cancer staging is integral to ensuring that cancer patients receive appropriate risk-adapted therapy. Discrete cancer staging using a structured staging form helps ensure accurate staging, provides a single source of truth for staging information, and allows reporting to regulatory authorities. Our institution created pediatric oncology-specific discrete staging forms that have been shared with the broader Epic community. By November 2023, baseline utilization of the staging form for patients with leukemia or lymphoma was 43%, and the override rate for our existing alert was 99.9%. Our aim was to improve discrete documentation of cancer stage for patients with leukemia or lymphoma within 60 days of chemotherapy initiation to >80% by July 2024, as measured by a signed staging form. The Model for Improvement with plan-do-study-act (PDSA) cycles was implemented, and statistical process control charts were used to evaluate impact. The first intervention was educational training for oncology providers. The second PDSA cycle involved sharing monthly individual completion data with each primary oncologist regarding their personal patient metrics. The third PDSA cycle involved removing the interruptive alert. Within 6 months, documentation of the primary oncologist improved from 86 to 100%, and initiation of the staging form improved from 57 to 90%. Completion of a signed cancer staging form reached 80%. Patients marked as not needing staging increased from 5 to 17%. Completion of a digital cancer staging form is important for continuity of care and for reporting to regulatory authorities, yet frequent interruptive alerts were an ineffective method for improving documentation. Education and data sharing increased staging completion to near target, with ongoing efforts to reach the goal of >80%.
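Statistical process control charts for a completion rate like the one tracked above are typically p-charts, with 3-sigma limits around the pooled proportion. A minimal sketch of the limit calculation; the counts below are illustrative, not the study's data.

```python
import math

def p_chart_limits(counts, sizes):
    """Center line (pbar) and per-point 3-sigma control limits for a p-chart.

    counts[i] is the number of completed forms in period i; sizes[i] is the
    number of eligible patients in that period.
    """
    pbar = sum(counts) / sum(sizes)
    limits = []
    for n in sizes:
        sigma = math.sqrt(pbar * (1 - pbar) / n)
        limits.append((max(0.0, pbar - 3 * sigma), min(1.0, pbar + 3 * sigma)))
    return pbar, limits
```

Points falling outside these limits, or sustained runs on one side of the center line, are the usual signals that a PDSA intervention shifted the process rather than reflecting common-cause variation.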
"Improving Discrete Documentation of Cancer Staging-An Alert-Free Approach." Applied Clinical Informatics, pp. 1005-1013. DOI: 10.1055/a-2594-3722. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413274/pdf/
Pub Date: 2025-08-01 | Epub Date: 2025-05-21 | DOI: 10.1055/a-2617-6572
Eyal Klang, Jaskirat Gill, Aniket Sharma, Evan Leibner, Moein Sabounchi, Robert Freeman, Roopa Kohli-Seth, Patricia Kovatch, Alexander W Charney, Lisa Stump, David L Reich, Girish N Nadkarni, Ankit Sakhuja
Accurate discharge summaries are essential for effective communication between hospital and outpatient providers, but generating them is labor-intensive. Large language models (LLMs), such as GPT-4, have shown promise in automating this process, potentially reducing clinician workload and improving documentation quality. A recent study using GPT-4 to generate discharge summaries from concatenated clinical notes found that while the summaries were concise and coherent, they often lacked comprehensiveness and contained errors. To address this, we evaluated a structured prompting strategy, summarize-then-prompt, which first generates concise summaries of individual clinical notes before combining them to create a more focused input for the LLM.

The objective of this study was to assess the effectiveness of a novel prompting strategy, summarize-then-prompt, in generating discharge summaries that are more complete, accurate, and concise than those produced by simply concatenating clinical notes.

We conducted a retrospective study comparing two prompting strategies: direct concatenation (M1) and summarize-then-prompt (M2). A random sample of 50 hospital stays was selected from a large hospital system. Three attending physicians independently evaluated the generated hospital course summaries for completeness, correctness, and conciseness using a 5-point Likert scale.

The summarize-then-prompt strategy outperformed the direct concatenation strategy in both the completeness (4.28 ± 0.63 vs. 4.01 ± 0.69, p < 0.001) and correctness (4.37 ± 0.54 vs. 4.17 ± 0.57, p = 0.002) of the hospital course summaries. However, the two strategies showed no significant difference in conciseness (p = 0.308).

Summarizing individual notes before concatenation improves LLM-generated discharge summaries, enhancing their completeness and accuracy without sacrificing conciseness. This approach may facilitate the integration of LLMs into clinical workflows, offering a promising strategy for automating discharge summary generation and reducing clinician burden.
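The contrast between the two strategies is easy to state in code. Below is a minimal sketch, not the authors' implementation: `summarize_note` stands in for a per-note LLM summarization call, and the prompt wording is invented for illustration.

```python
from typing import Callable, List

PROMPT_HEADER = "Summarize the hospital course from these notes:\n\n"

def direct_concatenation(notes: List[str]) -> str:
    # M1: concatenate all raw notes into one long prompt for the LLM
    return PROMPT_HEADER + "\n\n".join(notes)

def summarize_then_prompt(notes: List[str],
                          summarize_note: Callable[[str], str]) -> str:
    # M2: condense each note individually first, then combine the
    # per-note summaries into a shorter, more focused prompt
    return PROMPT_HEADER + "\n\n".join(summarize_note(n) for n in notes)

if __name__ == "__main__":
    notes = [
        "Day 1: admitted with chest pain. Extensive narrative detail follows.",
        "Day 2: troponin negative, pain resolved. More narrative detail.",
    ]
    # stand-in summarizer; in the study this step was itself an LLM call
    first_sentence = lambda text: text.split(". ")[0] + "."
    print(summarize_then_prompt(notes, first_sentence))
```

The design point is that M2 shortens and focuses the context the final model call must digest, which is consistent with the completeness and correctness gains reported above.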
"Summarize-then-Prompt: A Novel Prompt Engineering Strategy for Generating High-Quality Discharge Summaries." Applied Clinical Informatics, pp. 1325-1331. DOI: 10.1055/a-2617-6572. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12513772/pdf/
Pub Date : 2025-08-01Epub Date: 2025-08-14DOI: 10.1055/a-2576-1596
Safa Elkefi, Tiffany R Martinez, Talia Nadel, Antoinette M Schoenthaler, Devin M Mann, Saul Blecker
Uncontrolled hypertension is common and frequently related to inadequate adherence to prescribed medications, resulting in suboptimal blood pressure control and increased healthcare utilization. Although healthcare providers have the opportunity to improve medication adherence, they may lack the tools to address adherence at the point of care. This study aims to assess the usability of a digital tool designed to improve medication adherence and blood pressure control among patients with hypertension who are not adherent to therapy. By evaluating usability, the study seeks to refine the tool's design, underscore the role of technology in managing hypertension, and provide insights to inform clinical decisions.

We performed qualitative usability testing of an electronic health record (EHR)-integrated intervention with medical assistants (MAs) and primary care providers (PCPs) from a large integrated health system. Usability was assessed with these end users using the "think aloud" and "near live" approaches. The evaluation was guided by two frameworks: the End-User Computing Satisfaction Index (EUCSI) and the Technology Acceptance Model (TAM). Interviews were analyzed using a thematic analysis approach.

Thematic saturation was reached after usability testing with 10 participants: 5 PCPs and 5 MAs. The study identified several strengths in the content, format, ease of use, timeliness, accuracy, and usefulness of the tool, including the user-friendly content presentation, the usefulness of adherence information, and timely alerts that fit into the workflow. Challenges centered on alert visibility and the specificity of the information presented.

Leveraging the two conceptual frameworks (TAM and EUCSI) to test the usability of the medication adherence tool proved helpful, revealing both strengths and opportunities for improvement. The resulting suggestions will inform design enhancements ahead of implementation in a clinical trial.
"Lessons Learned from the Usability Assessment of an EHR-Based Tool to Support Adherence to Antihypertensive Medications." Applied Clinical Informatics, 16(4), pp. 760-768. DOI: 10.1055/a-2576-1596. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352985/pdf/
Pub Date : 2025-08-01Epub Date: 2025-05-05DOI: 10.1055/a-2599-6300
Alaa Albashayreh, Nahid Zeinali, Nanle Joseph Gusen, Yuwen Ji, Stephanie Gilbertson-White
Electronic health records (EHRs) contain valuable patient information, yet certain aspects of care remain infrequently documented and difficult to extract. Identifying these rarely documented elements requires advanced informatics approaches to uncover clinical documentation patterns that would otherwise remain inaccessible for research and quality improvement.

This study developed and validated an informatics approach using natural language processing (NLP) to detect and characterize rarely documented elements in EHRs, using spiritual care documentation as an exemplar case.

Using EHR data from a Midwestern US hospital (2010-2023), we fine-tuned Spiritual-BERT, an NLP model based on Bio-Clinical-BERT. The model was trained on 80% of a manually annotated, gold-standard corpus of EHR notes, and its performance was validated on the remaining 20% of the corpus and on 150 synthetic notes generated by GPT-4 and curated by clinical experts. We applied Spiritual-BERT to identify spiritual care documentation and analyzed patterns across diverse patient populations, provider roles, and clinical services.

Spiritual-BERT demonstrated high accuracy in capturing spiritual care documentation (F1-scores: 0.938 internal validation, 0.832 external validation). Analysis of nearly 3.6 million EHR notes from 14,729 older adults revealed that 2% of clinical notes contained spiritual care references, while 73% of patients had spiritual care documented in at least one note. Documentation varied significantly across provider types: chaplains documented spiritual care in 99.4% of their notes, compared with 1.7% for nurses and 1.2% for physicians. Documentation patterns also varied by ethnicity, language, and medical diagnosis.

This study demonstrates how advanced NLP techniques can identify and characterize rarely documented elements in EHRs that would be challenging to detect through traditional methods. The approach revealed distinct documentation patterns across provider types, clinical settings, and patient characteristics, and shows promise for analyzing other under-documented clinical information.
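The reported F1-scores combine precision and recall into a single accuracy measure for the binary classifier. A small self-contained helper shows the arithmetic; the confusion counts below are hypothetical, chosen only to illustrate how an F1 of 0.938 can arise, since the study's actual counts are not given in the abstract.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall for a binary classifier."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts on a held-out set of labeled notes (not the study's data):
# 90 true positives, 6 false positives, 6 false negatives
print(round(f1_score(tp=90, fp=6, fn=6), 3))  # → 0.938
```

Because F1 ignores true negatives, it remains informative for rare positive classes like spiritual care mentions (2% of notes), where plain accuracy would be misleadingly high.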
"An Informatics Approach to Characterizing Rarely Documented Clinical Information in Electronic Health Records: Spiritual Care as an Exemplar." Applied Clinical Informatics, pp. 1146-1156. DOI: 10.1055/a-2599-6300. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12449108/pdf/
Pub Date : 2025-08-01Epub Date: 2025-08-20DOI: 10.1055/a-2595-0317
Mark S Iscoe, Carolina Diniz Hooper, Deborah R Levy, John Lutz, Hyung Paek, Christian Rose, Thomas Kannampallil, Daniella Meeker, James D Dziura, Edward R Melnick
In the emergency department-initiated buprenorphine for opioid use disorder (EMBED) trial, a clinical decision support (CDS) tool had no effect on rates of buprenorphine initiation in emergency department (ED) patients with opioid use disorder. The Agency for Healthcare Research and Quality (AHRQ) recently released a CDS Performance Measure Inventory to guide data-driven CDS development and evaluation. Through partner co-design, we tailored AHRQ inventory measures to evaluate EMBED CDS performance and drive improvements.

Relevant AHRQ inventory measures were selected and adapted using a partner co-design approach grounded in consensus methodology, with three iterative, multidisciplinary partner working group sessions involving stakeholders from various roles and institutions; each meeting was followed by a postmeeting survey. The co-design process was divided into conceptualization, specification, and evaluation phases, building on the Centers for Medicare and Medicaid Services' measure life cycle framework. Final measures were evaluated in three EDs in a single health system from January 1, 2023, to December 31, 2024.

The partner working group included 25 members. During conceptualization, 13 initial candidate metrics were narrowed to 6 priority categories. These were further specified and validated as the following measures, presented with preliminary values based on use of the current (i.e., preoptimization) EMBED CDS: eligible encounters with CDS engagement, 5.0% (95% confidence interval: 4.3-5.8%); teamwork on ED initiation of buprenorphine, 39.9% (32.5-47.3%); proportion of eligible users who used EMBED, 58.3% (50.9-65.8%); time spent on EMBED, 29.0 seconds (20.4-37.7 seconds); proportion of buprenorphine orders placed through EMBED, 6.5% (3.4-9.6%); and task completion of the buprenorphine order/prescription, 13.8% (8.9-18.7%).

A measurement science framework informed by partner co-design was a feasible approach to developing measures that guide CDS improvement. Subsequent research could adapt this approach to evaluate other CDS applications.
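The percentage-plus-interval pattern in these measures is the standard normal-approximation (Wald) confidence interval for a proportion. A minimal sketch follows; the denominator is hypothetical, since the abstract does not report encounter counts.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: 160 CDS engagements out of 3,200 eligible encounters
p, lo, hi = wald_ci(successes=160, n=3200)
print(f"{p:.1%} ({lo:.1%}-{hi:.1%})")  # prints "5.0% (4.2%-5.8%)"
```

With small proportions or small samples, a Wilson or exact interval would be preferable, but the Wald form shows why rarer events (e.g., the 6.5% order-placement measure) carry wider relative uncertainty.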
"A Measurement Science Framework to Optimize CDS for Opioid Use Disorder Treatment in the ED." Applied Clinical Informatics, pp. 1067-1076. DOI: 10.1055/a-2595-0317. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12431813/pdf/