Pub Date: 2025-10-01 | Epub Date: 2025-10-28 | DOI: 10.1055/a-2706-3092
Thomas S Ledger, Sharifa Yeung, Yuen Su, Melissa T Baysari
There is limited literature on prefilled order sentences, a form of prescription prefilled with dosage, route, and frequency information, and none on their effect in a targeted setting for community-acquired pneumonia, for which reported guideline compliance is poor. Prefilled orders incorporated within computerized provider order entry (CPOE) systems may facilitate compliance with guidelines by acting as a form of clinical decision support (CDS), providing a default choice for prescribers. We aimed to assess the effect of prefilled order sentences on guideline-compliant prescribing.

We conducted a prospective observational study featuring the introduction of prefilled order sentences relating to community-acquired pneumonia. Guideline compliance was assessed using the CURB-65 score, a tool for grading the severity of community-acquired pneumonia. A study period of 6 months was chosen based on a sample size of 164 records, giving 80% power to detect a 20% change in the proportion of admissions with guideline-compliant prescribing.

The intervention was implemented on February 28, 2023, and data were extracted for the 6 months before and the 6 months after. A total of 11,682 prescriptions were identified before the intervention and 14,726 after. After screening and review, this corresponded to 75 and 53 eligible admissions before and after the intervention, respectively, which was lower than the anticipated sample size. The mean age of patients was 76.6 years (SD 17.3, range 24-97 years). There was a significant difference between the before and after samples in the presence of confusion (17.3% before vs. 37.7% after; p = 0.009), but no significant difference in the other CURB-65 parameters. A mild CURB-65 score was recorded in 35% of admissions (n = 45), a moderate score in 26% (n = 33), and a severe score in 39% (n = 50). Fewer than half of all admissions (46.9%) had prescriptions compliant with antibiotic guidelines. Following the intervention, there was a nonsignificant decrease in overall compliance, from 50.7% of admissions with compliant prescriptions before to 41.5% after.

Although we were unable to reach our planned sample size, the introduction of prefilled order sentences did not change guideline-compliant prescribing. This likely reflects the fact that prefilled orders do not address more systemic barriers affecting antibiotic use and guideline compliance.
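The mild/moderate/severe bands come from the CURB-65 score, which awards one point per criterion met (Confusion, Urea, Respiratory rate, Blood pressure, age ≥65). The paper does not describe its exact implementation, so the cut-offs and band labels below follow the conventional CURB-65 definition and should be read as a minimal illustrative sketch:

```python
def curb65_score(confusion, urea_mmol_l, resp_rate, sbp, dbp, age):
    """CURB-65: one point per criterion met (0-5 total)."""
    return sum([
        bool(confusion),          # new-onset confusion
        urea_mmol_l > 7.0,        # blood urea > 7 mmol/L
        resp_rate >= 30,          # respiratory rate >= 30/min
        sbp < 90 or dbp <= 60,    # low systolic or diastolic blood pressure
        age >= 65,                # age 65 or older
    ])

def severity(score):
    """Map a CURB-65 score to the severity bands used in the study."""
    if score <= 1:
        return "mild"
    if score == 2:
        return "moderate"
    return "severe"
```

For example, a confused 80-year-old with urea 8 mmol/L, respiratory rate 32, and blood pressure 85/58 scores 5 (severe), while a well 40-year-old scores 0 (mild).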
{"title":"Prefilled Order Sentences via Free-Text Search for Community-Acquired Pneumonia: A Prospective Observational Study.","authors":"Thomas S Ledger, Sharifa Yeung, Yuen Su, Melissa T Baysari","doi":"10.1055/a-2706-3092","DOIUrl":"10.1055/a-2706-3092","url":null,"abstract":"<p><p>There is limited literature on prefilled order sentences, a form of prescription prefilled with dosage, route, and frequency information, and none on their effect in a targeted setting for community-acquired pneumonia, for which reported compliance is poor.Prefilled orders incorporated within computerized provider order entry systems (CPOE) may facilitate compliance guidelines by acting as a form of clinical decision support (CDS), providing a default choice for prescribers. We aim to assess the effect of prefilled order sentences on guideline-compliant prescribing.Prospective observational study featuring introduction of prefilled order sentences relating to community-acquired pneumonia. To assess guideline compliance based on the CURB-65 score, a scoring tool was used to assess the severity of community-acquired pneumonia. A study period of 6 months was chosen based on a sample size of 164 records with power of 80% to detect a 20% change in admissions that had guideline-compliant prescribing.The intervention was implemented on February 28, 2023, and data were extracted 6 months before and 6 months after. A total of 11,682 prescriptions were identified before the intervention, and 14,726 after the intervention. After screening and review, this corresponded to 75 and 53 eligible admissions before and after the intervention, which was lower than the anticipated sample size. The mean age of patients was 76.6 years old (sd. 17.3, range 24-97 years). There was a significant difference between before and after samples in the presence of confusion (17.3% before, and 37.7% after; <i>p</i> = 0.009). 
There was no significant difference in the other parameters of the CURB-65 score in the before and after patient groups. A mild CURB-65 score was reported in 35% of admissions (<i>n</i> = 45), a moderate score in 26% (<i>n</i> = 33), and a score of severe in 39% (<i>n</i> = 50). Less than half of all admissions (46.9%) had prescriptions that were compliant to antibiotic guidelines. Following the intervention, there was a nonsignificant decrease in overall compliance, with 50.7% of admissions having compliant prescriptions before, and 41.5% after intervention.Although unable to reach our planned sample size, the introduction of prefilled order sentences did not change guideline-compliant prescribing. This likely reflects the fact that prefilled orders do not address more systemic barriers affecting antibiotic use and compliance to guidelines.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1486-1492"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12566920/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145394416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-06-16 | DOI: 10.1055/a-2635-3820
A Fischer Lees, Andrew White, Michael Leu, Jeff Robinson, M Kennedy Hall, Robert Doerning
Appropriate Use Criteria Clinical Decision Support (AUC CDS) was legislatively mandated in the United States in 2014, and multiple CDS vendors were designated as qualified Clinical Decision Support Mechanisms by the Centers for Medicare and Medicaid Services. Little is known about the costs and benefits of these systems in real-world settings. We evaluated the effectiveness of an AUC CDS system and the time costs it imposes on clinicians at an academic medical center.

Our U.S. academic medical center's enterprise data warehouse was queried for AUC CDS alert events and timestamps occurring between July 1, 2021, and June 30, 2022. We calculated the percentage of altered orders and alert-related timespans, and used these to calculate the CDS positive predictive value (PPV), time costs, and the cost-benefit ratio of minutes of provider time per altered order. Based on the medical literature and expert opinion on well-performing CDS, we hypothesized a CDS PPV of 8%.

Overall PPV was 1%, leading us to reject our hypothesis that our CDS was well-performing (p < 0.001). Median time costs per alert were high (12 seconds load time, 2 seconds dwell time), yielding a CDS cost-benefit ratio of 38 provider minutes per altered order.

Despite using one of three market-leading AUC CDS tools, our CDS demonstrated long load times, short dwell times, and low PPV. Provider attention is not free: policymakers should consider both CDS effectiveness and costs (including time costs) when designing AUC policy.
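Both headline quantities derive directly from alert counts and per-alert time. A brief sketch of the arithmetic; the counts and per-alert seconds below are illustrative, not the study's actual data:

```python
def alert_ppv(altered_orders, total_alerts):
    """PPV of the CDS: fraction of alerts that led to an altered order."""
    return altered_orders / total_alerts

def minutes_per_altered_order(total_alerts, seconds_per_alert, altered_orders):
    """Cost-benefit ratio: provider minutes consumed by alerts per altered order."""
    return total_alerts * seconds_per_alert / 60 / altered_orders
```

With, say, 1,000 alerts, 10 altered orders, and 15 seconds of provider time per alert, PPV is 1% and the ratio is 25 provider minutes per altered order.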
{"title":"The Costs and Benefits of Clinical Decision Support for Radiology Appropriate Use Criteria: A Retrospective Observational Study.","authors":"A Fischer Lees, Andrew White, Michael Leu, Jeff Robinson, M Kennedy Hall, Robert Doerning","doi":"10.1055/a-2635-3820","DOIUrl":"10.1055/a-2635-3820","url":null,"abstract":"<p><p>Appropriate Use Criteria Clinical Decision Support (AUC CDS) was legislatively mandated in the United States in 2014, and multiple CDS vendors were designated as qualified Clinical Decision Support Mechanisms by the Centers for Medicare and Medicaid Services. Little is known about the costs and benefits of these systems in real-world settings.We evaluated the effectiveness of an AUC CDS system and the time costs it imposes on clinicians at an academic medical center.Our U.S. academic medical center's enterprise data warehouse was queried for AUC CDS alert events and timestamps occurring between July 1, 2021, and June 30, 2022. We calculated the percentage of altered orders and alert-related timespans, and used these to calculate CDS positive predictive value (PPV), time costs, and the cost-benefit ratio of minutes of provider time per altered order. Based on the medical literature and expert opinion on well-performing CDS, we hypothesized a CDS PPV of 8%.Overall PPV was 1%, leading us to reject our hypothesis that our CDS was well-performing (<i>p</i> < 0.001). Median time costs per alert were high (12 seconds load time, 2 seconds dwell time), yielding a CDS cost-benefit ratio of 38 provider minutes per altered order.Despite using one of three market-leading AUC CDS tools, our CDS demonstrated long load times, short dwell times, and low PPV. 
Provider attention is not free-policymakers should consider both CDS effectiveness and costs (including time costs) when designing AUC policy.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1658-1663"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594561/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144310651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-12-05 | DOI: 10.1055/a-2750-4422
Jessica Kemp, Hwayeon D Shin, Charlotte Pape, Alina Lee, Bay Bahri, Wei Wang, Sara Ling, Gillian Strudwick
Nurses are the largest group of electronic health record (EHR) users in Canada, yet their experiences with documentation burden remain underexplored. While EHR-generated usage data, such as audit logs and time-motion metrics, have been used to quantify documentation time, they are rarely used to better understand EHR inefficiencies and to identify potential changes to nursing documentation and workflows. This approach may help address documentation demands that detract from direct patient care and contribute to burnout, which nurses have widely reported.

This study aimed to: (1) examine EHR utilization patterns and time spent by nurses across clinical venues and nurse types; (2) identify the EHR areas contributing most to nursing workload; (3) determine predictors of EHR time; and (4) assess differences in usage patterns across venues.

We analyzed 12 months of EHR usage data from nurses at Canada's largest academic mental health hospital using Cerner Advance (Oracle Health). Seven metrics were selected in collaboration with a Nursing Advisory Council. Regression and least-squares means comparisons were conducted in R, with venue and nurse type as predictors.

Data from 840 nurses revealed significant differences in EHR usage across venues and nurse types. Mean active time per patient per shift was highest in inpatient settings (19.3 minutes), followed by emergency (14.8 minutes) and ambulatory settings (6.3 minutes). Registered Practical Nurses (RPNs) averaged more active EHR time (20.1 minutes) than Registered Nurses (16.4 minutes). Documentation time per patient differed significantly across venues (F[3,832] = 71.97, p < 0.001) and nurse types (p = 0.0018). PowerForms time also varied significantly (F[3,818] = 102.1, p < 0.001). These findings support targeted EHR optimization efforts based on clinical context and role.

Significant variation exists in how nurses interact with EHRs, with documentation representing a substantial time burden, especially for RPNs and in inpatient settings. These findings emphasize the need for venue- and role-specific optimization strategies and underscore the importance of including nurses' voices in EHR design and quality improvement initiatives.
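The reported F-statistics come from least-squares means comparisons in R; the simplest analogue for the venue comparison is a one-way ANOVA F statistic. A pure-Python sketch of that statistic on a small made-up dataset (illustrative only; the study's actual model also included nurse type as a predictor):

```python
def one_way_f(groups):
    """One-way ANOVA F statistic across a list of samples (pure Python)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-group sum of squares around each group's own mean
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

For example, three venues with per-patient minutes [1, 2, 3], [2, 3, 4], and [10, 11, 12] give F = 73.0, reflecting a large between-venue difference relative to within-venue spread.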
{"title":"Variations in Nursing Documentation Time in a Mental Health Setting: A Retrospective Observational Study of EHR Usage Data.","authors":"Jessica Kemp, Hwayeon D Shin, Charlotte Pape, Alina Lee, Bay Bahri, Wei Wang, Sara Ling, Gillian Strudwick","doi":"10.1055/a-2750-4422","DOIUrl":"10.1055/a-2750-4422","url":null,"abstract":"<p><p>Nurses are the largest group of electronic health record (EHR) users in Canada, yet their experiences with documentation burden remain underexplored. While EHR-generated usage data, such as audit logs and time-motion metrics, have been used to quantify documentation time, they are rarely used to better understand EHR inefficiencies and identify potential changes for nursing documentation and workflows. This approach may help address instances of documentation demands detracting from direct patient care and contributing to burnout, which has been largely reported by nurses.This study aimed to: (1) examine EHR utilization patterns and time spent by nurses across clinical venues and nurse types; (2) identify EHR areas contributing most to nursing workload; (3) determine predictors of EHR time; and (4) assess differences in usage patterns across venues.We analyzed 12 months of EHR usage data from nurses at Canada's largest academic mental health hospital using Cerner Advance (Oracle Health). Seven metrics were selected in collaboration with a Nursing Advisory Council. Regression and least-squares means comparisons were conducted using R, with venue and nurse type as predictors.Data from 840 nurses revealed significant differences in EHR usage across venues and nurse types. Mean active time per patient per shift was highest in inpatient (19.3 minutes), followed by emergency (14.8 minutes), and ambulatory settings (6.3 minutes). Registered Practical Nurses (RPNs) averaged more active EHR time (20.1 minutes) than Registered Nurses (16.4 minutes). 
Documentation time per patient was significantly different across venues (F [3,832] = 71.97, <i>p</i> < 0.001) and nurse types (<i>p</i> = 0.0018). PowerForms time also varied significantly (F [3,818] = 102.1, <i>p</i> < 0.001). These findings support targeted EHR optimization efforts based on clinical context and role.Significant variation exists in how nurses interact with EHRs, with documentation representing a substantial time burden, especially for RPNs and inpatient settings. These findings emphasize the need for venue and role-specific optimization strategies and underscore the importance of including nurses' voices in EHR design and quality improvement initiatives.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1799-1814"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12680479/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145688414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-10-24 | DOI: 10.1055/a-2702-1574
Morgan Botdorf, Kimberley Dickinson, Vitaly Lorman, Hanieh Razzaghi, Nicole Marchesani, Suchitra Rao, Colin Rogerson, Miranda Higginbotham, Asuncion Mejias, Daria Salyakina, Deepika Thacker, Dima Dandachi, Dimitri A Christakis, Emily Taylor, Hayden T Schwenk, Hiroki Morizono, Jonathan D Cogen, Nathan M Pajor, Ravi Jhaveri, Christopher B Forrest, L Charles Bailey
Long COVID, characterized by persistent or recurring symptoms after COVID-19 infection, poses challenges for pediatric care and research due to the lack of a standardized clinical definition. Adult-focused phenotypes do not translate well to children, given developmental and physiological differences, and pediatric-specific phenotypes have not been compared with chart review.

This study introduces and evaluates a pediatric-specific rule-based computable phenotype (CP) to identify long COVID using electronic health record data. We compare its performance to manual chart review.

We applied the CP, composed of diagnostic codes empirically associated with long COVID, to 339,467 pediatric patients with SARS-CoV-2 infection in the RECOVER PCORnet EHR database. The CP identified 31,781 patients with long COVID. Clinicians conducted chart reviews on a subset of patients across 16 hospital systems to assess performance. We qualitatively reviewed discordant cases to understand differences between CP and clinician identification.

Among the 651 reviewed patients (339 females; mean age = 10.10 years), the CP showed moderate agreement with clinician identification (accuracy = 0.62, positive predictive value [PPV] = 0.49, negative predictive value [NPV] = 0.75, sensitivity = 0.52, specificity = 0.84). Performance was largely consistent across age and dominant variant but varied by symptom cluster count. Most discrepancies between the CP and chart review occurred when the CP identified a case but the clinician did not, often because clinicians attributed symptoms to preexisting conditions (73%). When clinicians identified cases missed by the CP, they often used broader symptom or timing criteria (69%). Model performance improved when the CP accounted for preexisting conditions (accuracy = 0.71, PPV = 0.65, NPV = 0.74, sensitivity = 0.59, specificity = 0.79).

This study presents a CP for pediatric long COVID. While agreement with manual review was moderate, most discrepancies were explained by differences in interpreting symptoms when patients had preexisting conditions. Accounting for these conditions improved accuracy and highlights the need for a consensus definition. These findings support the development of reliable, scalable tools for pediatric long COVID research.
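All five agreement metrics reported here derive from the 2×2 table of CP flag versus clinician judgment. A small helper makes the definitions explicit (the counts in the example are illustrative, not the study's):

```python
def phenotype_metrics(tp, fp, fn, tn):
    """Chart-review agreement metrics for a computable phenotype.

    tp: CP and clinician both identify long COVID
    fp: CP identifies, clinician does not
    fn: clinician identifies, CP does not
    tn: neither identifies
    """
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

For instance, phenotype_metrics(40, 10, 20, 30) yields accuracy 0.70, PPV 0.80, NPV 0.60, sensitivity ~0.67, and specificity 0.75.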
{"title":"Identifying Pediatric Long COVID: Comparing an EHR Algorithm to Manual Review.","authors":"Morgan Botdorf, Kimberley Dickinson, Vitaly Lorman, Hanieh Razzaghi, Nicole Marchesani, Suchitra Rao, Colin Rogerson, Miranda Higginbotham, Asuncion Mejias, Daria Salyakina, Deepika Thacker, Dima Dandachi, Dimitri A Christakis, Emily Taylor, Hayden T Schwenk, Hiroki Morizono, Jonathan D Cogen, Nathan M Pajor, Ravi Jhaveri, Christopher B Forrest, L Charles Bailey","doi":"10.1055/a-2702-1574","DOIUrl":"10.1055/a-2702-1574","url":null,"abstract":"<p><p>Long COVID, characterized by persistent or recurring symptoms post-COVID-19 infection, poses challenges for pediatric care and research due to the lack of a standardized clinical definition. Adult-focused phenotypes do not translate well to children, given developmental and physiological differences, and pediatric-specific phenotypes have not been compared with chart review.This study introduces and evaluates a pediatric-specific rule-based computable phenotype (CP) to identify long COVID using electronic health record data. We compare its performance to manual chart review.We applied the CP, composed of diagnostic codes empirically associated with long COVID, to 339,467 pediatric patients with SARS-CoV-2 infection in the RECOVER PCORnet EHR database. The CP identified 31,781 patients with long COVID. Clinicians conducted chart reviews on a subset of patients across 16 hospital systems to assess performance. We qualitatively reviewed discordant cases to understand differences between CP and clinician identification.Among the 651 reviewed patients (339 females, <i>M</i> <sub>age</sub> = 10.10 years), the CP showed moderate agreement with clinician identification (accuracy = 0.62, positive predictive value [PPV] = 0.49, negative predictive value [NPV] = 0.75, sensitivity = 0.52, specificity = 0.84). Performance was largely consistent across age and dominant variant but varied by symptom cluster count. 
Most discrepancies between the CP and chart review occurred when the CP identified a case, but the clinician did not, often because clinicians attributed symptoms to preexisting conditions (73%). When clinicians identified cases missed by the CP, they often used broader symptom or timing criteria (69%). Model performance improved when the CP accounted for preexisting conditions (accuracy = 0.71, PPV = 0.65, NPV = 0.74, sensitivity = 0.59, specificity = 0.79).This study presents a CP for pediatric long COVID. While agreement with manual review was moderate, most discrepancies were explained by differences in interpreting symptoms when patients had preexisting conditions. Accounting for these conditions improved accuracy and highlights the need for a consensus definition. These findings support the development of reliable, scalable tools for pediatric long COVID research.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1445-1456"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12552067/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145369125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-05-26 | DOI: 10.1055/a-2620-6221
Stephon Proctor, Bimal Desai
Clinical decision support systems (CDSS) are central to modern health care, but their effectiveness is compromised during system downtimes, which affect 96% of health care organizations. During these failures, clinicians lose access to critical decision-making tools like order sets, increasing the risk of medical errors. Traditional downtime solutions, such as paper-based protocols, are often impractical and difficult to maintain.

This study introduces and evaluates the Offsite Repository for Clinical Assets (ORCA), a resilient web-based solution designed to maintain access to electronic health record (EHR) order sets during system failures. We assessed its usability and effectiveness as a downtime decision support tool across various clinical settings.

ORCA was developed based on an analysis of previous downtime incidents, replicating essential order set functionality while ensuring offsite accessibility. We conducted usability testing with 16 clinicians from diverse specialties, using structured tasks and think-aloud protocols. User feedback was collected through the Usability Metric for User Experience (UMUX) questionnaire and thematic analysis of interview transcripts.

ORCA demonstrated strong usability (mean UMUX score: 86.2). Thematic analysis revealed key implementation challenges: system limitations (24.56%), workflow integration (23.39%), and interface navigation (22.22%). Users valued ORCA's familiar interface and offsite accessibility but identified critical gaps in dynamic decision support capabilities.

ORCA represents a viable approach to maintaining basic clinical decision support (CDS) during downtimes. However, significant challenges remain in replicating dynamic CDS features and ensuring effective integration with existing downtime procedures. These findings inform future development of resilient CDSS and highlight the importance of planned fallback pathways in clinical systems.
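The UMUX score cited above is conventionally computed from four 7-point items, with odd items positively worded and even items negatively worded, rescaled to 0-100. The paper does not detail its scoring, so the sketch below follows the published UMUX formula as an assumption:

```python
def umux_score(responses):
    """UMUX score (0-100) from four 7-point item responses.

    Assumes the standard UMUX layout: items 1 and 3 are positively
    worded (contribute response - 1), items 2 and 4 are negatively
    worded (contribute 7 - response); the sum is rescaled out of 24.
    """
    assert len(responses) == 4, "UMUX has exactly four items"
    contributions = [
        (r - 1) if i % 2 == 0 else (7 - r)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 100 / 24
```

Under this scoring, uniformly best responses ([7, 1, 7, 1]) give 100 and uniformly worst responses ([1, 7, 1, 7]) give 0.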
{"title":"Development and Evaluation of Offsite Repository for Clinical Assets, a Resilient Solution for Order Set Access during EHR Downtimes.","authors":"Stephon Proctor, Bimal Desai","doi":"10.1055/a-2620-6221","DOIUrl":"10.1055/a-2620-6221","url":null,"abstract":"<p><p>Clinical decision support systems (CDSS) are central to modern health care, but their effectiveness is compromised during system downtimes, which affect 96% of health care organizations. During these failures, clinicians lose access to critical decision-making tools like order sets, increasing the risk of medical errors. Traditional downtime solutions, such as paper-based protocols, are often impractical and difficult to maintain.This study introduces and evaluates Offsite Repository for Clinical Assets (ORCA), a resilient web-based solution designed to maintain access to electronic health record (EHR) order sets during system failures. We assessed its usability and effectiveness as a downtime decision support tool across various clinical settings.ORCA was developed based on an analysis of previous downtime incidents, replicating essential order set functionality while ensuring offsite accessibility. We conducted usability testing with 16 clinicians from diverse specialties, using structured tasks and think-aloud protocols. User feedback was collected through the Usability Metric for User Experience (UMUX) questionnaire and thematic analysis of interview transcripts.ORCA demonstrated strong usability (mean UMUX score: 86.2). Thematic analysis revealed key implementation challenges: system limitations, workflow integration, and interface navigation. Users valued ORCA's familiar interface and offsite accessibility but identified critical gaps in dynamic decision support capabilities.ORCA represents a viable approach to maintaining basic clinical decision support (CDS) during downtimes. 
However, significant challenges remain in replicating dynamic CDS features and ensuring effective integration with existing downtime procedures. These findings inform future development of resilient CDSS and highlight the importance of planned fallback pathways in clinical systems.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1401-1412"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534125/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144152613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-10-30 | DOI: 10.1055/a-2735-0527
Hyo Jung Hong, Nigam H Shah, Michael A Pfeffer, Lisa S Lehmann
This study aims to evaluate physicians' practices and perspectives regarding large language models (LLMs) in health care settings.

A cross-sectional survey study was conducted between May and July 2024, comparing physician perspectives at two major academic medical centers (AMCs), one with institutional LLM access and one without. Participants included both clinical faculty and trainees recruited through departmental leadership and snowball sampling. Primary outcomes were current LLM use frequency, ranked importance of evaluation metrics, liability concerns, and preferred learning topics.

Among 306 respondents (217 attending physicians [70.9%], 80 trainees [26.1%]), 197 (64.4%) reported using LLMs. The AMC with institutional LLM access reported significantly lower liability concerns (49.2 vs. 66.7% reporting high concern; difference of 17.5 percentage points [95% CI, 6.8-28.2]; p = 0.0082). Accuracy was prioritized across all specialties (median rank 1.0 [interquartile range, IQR, 1.0-2.0]). Of the respondents, 287 physicians (94%) requested additional training. Key learning priorities were clinical applications (206 [71.9%]) and risk management (181 [63.1%]). Despite widespread personal use, only 8 physicians (2.6%) recommended LLMs to patients. Notable specialty and demographic variations emerged, with younger physicians showing higher enthusiasm but also elevated legal concerns.

This survey study provides insights into physicians' current usage patterns and perspectives on LLMs. Liability concerns appear to be lessened in settings with institutional LLM access. The findings suggest opportunities for medical centers to consider when developing LLM-related policies and educational programs.
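The percentage-point difference with its 95% CI is a two-sample comparison of proportions. A Wald-style sketch of that computation; the study's exact interval method and per-site sample sizes are not given, so the example numbers are illustrative:

```python
import math

def prop_diff_ci(p1, n1, p2, n2, z=1.96):
    """Difference in two proportions, in percentage points, with a Wald 95% CI.

    p1, p2: sample proportions; n1, n2: sample sizes; z: normal quantile.
    """
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return 100 * diff, 100 * (diff - z * se), 100 * (diff + z * se)
```

With two samples of 100 each and both proportions at 0.5, the estimated difference is 0 percentage points with a CI of roughly ±13.9 points, illustrating how sample size drives interval width.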
{"title":"Physician Perspectives on Large Language Models in Health Care: A Cross-Sectional Survey Study.","authors":"Hyo Jung Hong, Nigam H Shah, Michael A Pfeffer, Lisa S Lehmann","doi":"10.1055/a-2735-0527","DOIUrl":"10.1055/a-2735-0527","url":null,"abstract":"<p><p>This study aims to evaluate physicians' practices and perspectives regarding large language models (LLMs) in health care settings.A cross-sectional survey study was conducted between May and July 2024, comparing physician perspectives at two major academic medical centers (AMCs), one with institutional LLM access and one without. Participants included both clinical faculty and trainees recruited through departmental leadership and snowball sampling. Primary outcomes were current LLM use frequency, ranked importance of evaluation metrics, liability concerns, and preferred learning topics.Among 306 respondents (217 attending physicians [70.9%], 80 trainees [26.1%]), 197 (64.4%) reported using LLMs. The AMC with institutional LLM access reported significantly lower liability concerns (49.2 vs. 66.7% reporting high concern; 17.5 percentage points difference [95% CI, 6.8-28.2]; <i>p</i> = 0.0082). Accuracy was prioritized across all specialties (median rank 1.0 [interquartile range; IQR, 1.0-2.0]). Of the respondents, 287 physicians (94%) requested additional training. Key learning priorities were clinical applications (206 [71.9%]) and risk management (181 [63.1%]). Despite widespread personal use, only 8 physicians (2.6%) recommended LLMs to patients. Notable specialty and demographic variations emerged, with younger physicians showing higher enthusiasm but also elevated legal concerns.This survey study provides insights into physicians' current usage patterns and perspectives on LLMs. Liability concerns appear to be lessened in settings with institutional LLM access. 
The findings suggest opportunities for medical centers to consider when developing LLM-related policies and educational programs.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1738-1748"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618148/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-06-02 | DOI: 10.1055/a-2624-5482
Ellen A Ahlness, Deborah R Levy
Health professional (HP) trainee burnout is difficult to capture. A lack of rigorous review and systematic methodological consideration has hindered the development of qualitative tools that can elicit rich and trustworthy data on HP trainee burnout. This study aimed to report the process, results, and lessons learned while developing and pilot testing a qualitative tool to assess HP trainees' experiences of burnout, intended to complement quantitative tools. We developed a set of semistructured interview questions (n = 3) probing HP trainee burnout and refined them through a modified Delphi process with n = 10 subject matter experts. We then pilot tested the qualitative tool in initial interviews with HP trainees (n = 43 interviews with n = 14 trainees). The resulting tool consists of three core questions and three follow-up probes that elicit data on key dimensions of HP trainee burnout, for integration into a structured or semistructured interview guide. We present the results as lessons learned, which can support the further development of tools to articulate HP trainees' perspectives in studying burnout, especially during health system transitions. Developing qualitative measurement tools designed to be used alongside well-validated, established quantitative tools can be a complex process, but it is critical to efforts to mitigate HP trainee burnout.
{"title":"Examining Health Professional Trainee Burnout: Lessons Learned Using Qualitative Inquiry to Elicit Rich Data.","authors":"Ellen A Ahlness, Deborah R Levy","doi":"10.1055/a-2624-5482","DOIUrl":"10.1055/a-2624-5482","url":null,"abstract":"<p><p>Health professionals (HPs) trainee burnout is hard to capture. A lack of rigorous review and systematic methodological consideration hinders the development of qualitative methodological tools that can elicit rich and trustworthy qualitative data on HPs trainee burnout.This study aimed to report the process, results, and lessons learned while developing and pilot testing a qualitative tool to assess HPs' trainee experiences of burnout to complement quantitative tools.We developed a set of semistructured interview questions (<i>n</i> = 3) probing into HP trainee burnout and refined them through a Modified Delphi process. We, then, planned pilot testing of the qualitative tool in initial interviews with HP trainees.We developed a three-question set of semistructured interview questions to probe burnout for HP trainees, which were refined using a Modified Delphi approach (<i>n</i> = 10 subject matter experts). We conducted pilot testing (<i>n</i> = 43 interviews with <i>n</i> = 14 trainees). We developed a novel qualitative tool to assess HPs trainee experiences of burnout, consisting of three core questions and three follow-up probes that elicit data on key dimensions of HPs trainee burnout for integration into a structured or semistructured interview guide.We present results as lessons learned, which can support the further development of tools to articulate HPs' trainee perspectives in studying burnout, especially during health system transitions. 
Developing qualitative measurement tools designed to be used with well-validated, established quantitative tools may be a complex process, but it is critical in efforts to mitigate HP trainee burnout.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1568-1577"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12578574/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144210013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-10-28 | DOI: 10.1055/a-2702-6872
Sarah W Chen, Michael Gannon, John L Kilgallon, Weng Ian Chay, David Rubins, Hojjat Salmasian, Sayon Dutta, Dustin S McEvoy, Edward Wu, Adam Wright, Allison McCoy, Lipika Samal
Clinical decision support (CDS) systems have been widely adopted across clinical settings to promote evidence-based practice. CDS malfunctions often degrade the user experience and directly or indirectly interfere with patient care. To ensure optimal performance, it is critical to continuously monitor the tool and react promptly when malfunctions are identified. This study aimed to describe malfunctions identified during the development and implementation of a CDS alert, as well as lessons learned. A pragmatic randomized controlled trial of a CDS alert for primary care patients with chronic kidney disease and uncontrolled blood pressure was conducted. The alert included prechecked default orders for medication initiation or titration, a basic metabolic panel, and a nephrology electronic consult. Alert monitoring involved retrospective chart review and review of alert firing reports. Eight CDS malfunctions were identified. The most common causes of malfunctions were conceptualization and build errors. Provider feedback and retrospective chart review were the primary methods of identifying the root causes of malfunctions. Our findings highlight the need for CDS interventions to be continuously monitored through chart review, alert firing reports, and opportunities for provider feedback. Lessons learned from CDS malfunctions can be applied to improve provider trust in automated electronic health record-based alerts, reduce administrative burden, and prevent inappropriate alert recommendations that can negatively affect patient outcomes. This study is registered with ClinicalTrials.gov (identifier: NCT03679247).
{"title":"Applying an Empirical Taxonomy to Alert Malfunctions in a Pragmatic Trial for Hypertension Management in Chronic Kidney Disease.","authors":"Sarah W Chen, Michael Gannon, John L Kilgallon, Weng Ian Chay, David Rubins, Hojjat Salmasian, Sayon Dutta, Dustin S McEvoy, Edward Wu, Adam Wright, Allison McCoy, Lipika Samal","doi":"10.1055/a-2702-6872","DOIUrl":"10.1055/a-2702-6872","url":null,"abstract":"<p><p>Clinical decision support (CDS) systems have been widely adopted across clinical settings to promote evidence-based practice for clinicians. CDS malfunctions often affect the user experience and indirectly or directly interfere with patient care. To enhance optimal performance, it is critical to constantly monitor the performance of the tool and react promptly when malfunctions are identified.This study aimed to describe malfunctions identified in the development and implementation of a CDS alert as well as lessons learned.A pragmatic randomized controlled trial of a CDS alert for primary care patients with chronic kidney disease and uncontrolled blood pressure was conducted. The alert included prechecked default orders for medication initiation or titration, basic metabolic panel, and nephrology electronic consult. Alert monitoring involved retrospective chart review and review of alert firing reports.Eight CDS malfunctions were identified. The most common causes of malfunctions were due to conceptualization and build errors. Provider feedback and retrospective chart review were the primary methods of identifying the root cause of malfunctions.Our findings highlight the need for CDS interventions to be continuously monitored through chart review, alert firing reports, and opportunities for provider feedback. 
Lessons learned from CDS malfunctions can be implemented to improve provider trust in automated electronic health record-based alerts, reduce administrative burden, and prevent inappropriate alert recommendations that can negatively affect patient outcomes. This study is registered with ClinicalTrials.gov (identifier: NCT03679247).</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1457-1464"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12566919/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145394409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-06-02 | DOI: 10.1055/a-2621-0110
Danielle Jungst, Anthony Solomonides, Chad Konchak
Health equity is greatly affected by the systems and processes through which health systems deliver care. Given the minimal guidance on measuring and reporting health inequities for key population health outcomes, a solution for measuring health equity is proposed. The concept of a lens of equity was adopted to disaggregate common measures, such as breast cancer screening rates, to expose inequities across the neighborhoods and races of the populations served. Two measures were introduced into the corporate measurement systems: race/ethnicity as recorded in the electronic health record, and a surrogate measure of family income. An equity category was added to system scorecards and counted toward corporate goals, alongside data insights and discovery tools to support the breast cancer screening improvement teams. Over a 1-year timeframe, Endeavor Health not only met but exceeded its breast cancer screening equity goal, increasing mammography adherence from 73 to 82.6% among residents of the lowest-income neighborhoods served. The analytics and data systems that support complex health care measurement require diligent and thoughtful design to meet external reporting requirements and to support the internal teams who aim to improve the care of the populations served. The analytic approach presented may be readily extended to populations with other potentially impactful differences in social determinants and health status. A "lens-of-equity" tool may be established along similar lines, allowing policy and strategy initiatives to be appropriately targeted and successfully implemented.
{"title":"Introduction of a Health Care System Lens-of-Equity Measurement Strategy to Optimize Breast Cancer Screening.","authors":"Danielle Jungst, Anthony Solomonides, Chad Konchak","doi":"10.1055/a-2621-0110","DOIUrl":"10.1055/a-2621-0110","url":null,"abstract":"<p><p>Health equity is greatly impacted by the systems and processes with which health systems deliver care. Given the minimal guidance on measurement and reporting of health inequities specific to key population health outcomes, a solution for measurement of health equity is proposed.The concept of a <i>lens of equity</i> was adopted to disaggregate common measures such as breast cancer screening rates to expose inequities across neighborhoods and races in populations served. Two measures were introduced into the corporate measurement systems, race/ethnicity as measured in the electronic health record, and a surrogate measure of family income.An equity category was added to system scorecards and counted toward corporate goals along with data insights and discovery tools to support the efforts of the breast cancer screening improvement teams. Over a 1-year timeframe, Endeavor Health not only met but exceeded its breast cancer screening equity goal, increasing mammography adherence from 73 to 82.6% among residents in the lowest-income neighborhoods served.The analytics and data systems that support complex health care measurement tools require diligent and thoughtful design to meet external reporting requirements and support the internal teams who aim to improve the care of populations served. The analytic approach presented may be readily extended to populations with other potentially impactful differences in social determinants and health status. 
A \"lens-of-equity\" tool may be established along similar lines, allowing policy and strategy initiatives to be appropriately targeted and successfully implemented.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1550-1559"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594557/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144210011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
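The core mechanic of the lens of equity described above — disaggregating one aggregate measure by a subgroup attribute so gaps become visible — can be sketched in a few lines. The record layout and field names (`income_stratum`, `screened`) are illustrative assumptions, not the health system's actual schema.

```python
from collections import defaultdict

def rates_by_group(records, group_key):
    """Disaggregate an adherence measure by a subgroup key (the 'lens of
    equity'). Each record needs the group key and a boolean 'screened'
    flag; field names here are illustrative only."""
    counts = defaultdict(lambda: [0, 0])  # group -> [screened, eligible]
    for rec in records:
        pair = counts[rec[group_key]]
        pair[1] += 1
        if rec["screened"]:
            pair[0] += 1
    return {g: 100.0 * s / n for g, (s, n) in counts.items()}

# Toy cohort: the overall rate (75%) hides the gap between strata.
cohort = [
    {"income_stratum": "lowest", "screened": True},
    {"income_stratum": "lowest", "screened": False},
    {"income_stratum": "highest", "screened": True},
    {"income_stratum": "highest", "screened": True},
]
rates = rates_by_group(cohort, "income_stratum")
```

In practice the same disaggregation would feed the scorecard's equity category, with the surrogate income measure supplying `income_stratum`.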
Pub Date: 2025-10-01 | Epub Date: 2025-11-07 | DOI: 10.1055/a-2630-3204
Robert P Pierce, Adam Kell, Bernie Eskridge, Lea Brandt, Kevin W Clary, Kevin Craig
End-of-life care (EoLC) interventions, such as advance care planning, advance directives, hospice, and palliative care consults, can improve patient quality of life and reduce costs, yet they are underused. Machine learning-based prediction models show promise in identifying patients who may be candidates for EoLC based on an increased risk of short-term (less than 1 year) mortality. Clinical decision support systems using these models can flag candidate patients at a point in their care when care teams can increase the provision of EoLC. This study aimed to evaluate changes in the provision of EoLC following the implementation of a machine learning-based mortality prediction model in an academic health center. A clinical decision support system based on a random forest mortality prediction model is described. The system was implemented in an academic health system, first in the medical intensive care unit and then house-wide. An interrupted time series analysis was performed over the 16 weeks before and 43 weeks after the implementations. Primary outcomes were the rates of documentation of advance directives, palliative care consultations, and do-not-attempt-resuscitation (DNAR) orders among encounters with an alert for a PRISM score over 50% (PRISM positive) compared with those without an alert (PRISM negative). Following a steep preintervention decline, the rate of advance directive documentation improved immediately after implementation. However, the implementations were not associated with improvements in any of the other primary outcomes. Model discrimination was substantially worse than that observed during model development, and after 16 months the model was withdrawn from production. A clinical decision support system based on a machine learning mortality prediction model failed to provide clinically meaningful improvements in EoLC measures. Possible causes of the failure include system-level factors, clinical decision support system design, and poor model performance.
{"title":"A Machine Learning-Based Clinical Decision Support System to Improve End-of-Life Care.","authors":"Robert P Pierce, Adam Kell, Bernie Eskridge, Lea Brandt, Kevin W Clary, Kevin Craig","doi":"10.1055/a-2630-3204","DOIUrl":"10.1055/a-2630-3204","url":null,"abstract":"<p><p>End-of-life care (EoLC), such as advance care planning, advance directives, hospice, and palliative care consults, can improve patient quality of life and reduce costs, yet such interventions are underused. Machine learning-based prediction models show promise in identifying patients who may be candidates for EoLC based on increased risk of short-term (less than 1 year) mortality. Clinical decision support systems using these models can identify candidate patients at a time during their care when care teams can increase the provision of EoLC.Evaluate changes in the provision of EoLC with implementation of a machine learning-based mortality prediction model in an academic health center.A clinical decision support system based on a random forest machine learning mortality prediction model is described. The system was implemented in an academic health system, first in the medical intensive care unit, then house-wide. An interrupted time series analysis was performed over the 16 weeks prior to and 43 weeks after the implementations. Primary outcomes were the rates of documentation of advance directives, palliative care consultations, and do not attempt resuscitation (DNAR) orders among encounters with an alert for PRISM score over 50% (PRISM positive) compared with those without an alert (PRISM negative).Following a steep preintervention decline, the rate of advance directive documentation improved immediately after implementation. However, the implementations were not associated with improvements in any of the other primary outcomes. 
The model discrimination was substantially worse than that observed in model development, and after 16 months, it was withdrawn from production.A clinical decision support system based on a machine learning mortality prediction model failed to provide clinically meaningful improvements in EoLC measures. Possible causes for the failure include system-level factors, clinical decision support system design, and poor model performance.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1637-1645"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594560/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145472359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
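The interrupted time series analysis used in the study above is typically a segmented regression with terms for baseline level, baseline trend, immediate level change at implementation, and change in trend. The sketch below fits that standard four-parameter model by least squares on synthetic data; it is a minimal illustration, since the abstract does not give the authors' exact model specification.

```python
import numpy as np

def its_fit(y, t0):
    """Segmented (interrupted time series) regression:
        y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t,
    with post_t = 1 for t >= t0. b2 estimates the immediate level
    change at implementation; b3 estimates the change in slope."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta  # [intercept, pre-slope, level change, slope change]

# Synthetic weekly series: +3 level jump at week 10, no slope change —
# the shape of the immediate advance-directive improvement reported.
y = [2.0 + 0.5 * t + (3.0 if t >= 10 else 0.0) for t in range(30)]
beta = its_fit(y, t0=10)
```

On this synthetic series the fit recovers a pre-slope of 0.5, a level change of 3, and a slope change of 0; a real analysis would add standard errors and, usually, autocorrelation adjustment.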