Pub Date: 2025-10-01 | Epub Date: 2025-06-02 | DOI: 10.1055/a-2624-5482
Ellen A Ahlness, Deborah R Levy
Health professional (HP) trainee burnout is hard to capture. A lack of rigorous review and systematic methodological consideration hinders the development of qualitative tools that can elicit rich and trustworthy data on HP trainee burnout. This study aimed to report the process, results, and lessons learned while developing and pilot testing a qualitative tool to assess HP trainees' experiences of burnout as a complement to quantitative tools. We developed a set of semistructured interview questions (n = 3) probing HP trainee burnout, refined them through a Modified Delphi process (n = 10 subject matter experts), and then pilot tested the qualitative tool in initial interviews with HP trainees (n = 43 interviews with n = 14 trainees). The resulting novel qualitative tool consists of three core questions and three follow-up probes that elicit data on key dimensions of HP trainee burnout, for integration into a structured or semistructured interview guide. We present the results as lessons learned, which can support the further development of tools that articulate HP trainees' perspectives on burnout, especially during health system transitions. Developing qualitative measurement tools designed to be used alongside well-validated, established quantitative tools may be a complex process, but it is critical to efforts to mitigate HP trainee burnout.
{"title":"Examining Health Professional Trainee Burnout: Lessons Learned Using Qualitative Inquiry to Elicit Rich Data.","authors":"Ellen A Ahlness, Deborah R Levy","doi":"10.1055/a-2624-5482","DOIUrl":"10.1055/a-2624-5482","url":null,"abstract":"<p><p>Health professionals (HPs) trainee burnout is hard to capture. A lack of rigorous review and systematic methodological consideration hinders the development of qualitative methodological tools that can elicit rich and trustworthy qualitative data on HPs trainee burnout.This study aimed to report the process, results, and lessons learned while developing and pilot testing a qualitative tool to assess HPs' trainee experiences of burnout to complement quantitative tools.We developed a set of semistructured interview questions (<i>n</i> = 3) probing into HP trainee burnout and refined them through a Modified Delphi process. We, then, planned pilot testing of the qualitative tool in initial interviews with HP trainees.We developed a three-question set of semistructured interview questions to probe burnout for HP trainees, which were refined using a Modified Delphi approach (<i>n</i> = 10 subject matter experts). We conducted pilot testing (<i>n</i> = 43 interviews with <i>n</i> = 14 trainees). We developed a novel qualitative tool to assess HPs trainee experiences of burnout, consisting of three core questions and three follow-up probes that elicit data on key dimensions of HPs trainee burnout for integration into a structured or semistructured interview guide.We present results as lessons learned, which can support the further development of tools to articulate HPs' trainee perspectives in studying burnout, especially during health system transitions. Developing qualitative measurement tools designed to be used with well-validated, established quantitative tools may be a complex process, but it is critical in efforts to mitigate HP trainee burnout.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1568-1577"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12578574/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144210013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-10-28 | DOI: 10.1055/a-2706-3092
Thomas S Ledger, Sharifa Yeung, Yuen Su, Melissa T Baysari
There is limited literature on prefilled order sentences, a form of prescription prefilled with dosage, route, and frequency information, and none on their effect in a targeted setting such as community-acquired pneumonia, for which reported guideline compliance is poor. Prefilled orders incorporated within computerized provider order entry (CPOE) systems may facilitate guideline compliance by acting as a form of clinical decision support (CDS), providing a default choice for prescribers. We aimed to assess the effect of prefilled order sentences on guideline-compliant prescribing. This was a prospective observational study of the introduction of prefilled order sentences for community-acquired pneumonia. Guideline compliance was assessed using the CURB-65 score, a tool for grading the severity of community-acquired pneumonia. A study period of 6 months was chosen based on a sample size of 164 records, giving 80% power to detect a 20% change in the proportion of admissions with guideline-compliant prescribing. The intervention was implemented on February 28, 2023, and data were extracted for the 6 months before and the 6 months after. A total of 11,682 prescriptions were identified before the intervention and 14,726 after. After screening and review, this corresponded to 75 and 53 eligible admissions before and after the intervention, respectively, which was lower than the anticipated sample size. The mean age of patients was 76.6 years (SD 17.3, range 24-97). There was a significant difference between the before and after samples in the presence of confusion (17.3% before and 37.7% after; p = 0.009); there were no significant differences in the other CURB-65 parameters. A mild CURB-65 score was recorded in 35% of admissions (n = 45), a moderate score in 26% (n = 33), and a severe score in 39% (n = 50). Fewer than half of all admissions (46.9%) had prescriptions compliant with antibiotic guidelines. Following the intervention, there was a nonsignificant decrease in overall compliance, from 50.7% of admissions with compliant prescriptions before to 41.5% after. Although we were unable to reach our planned sample size, the introduction of prefilled order sentences did not change guideline-compliant prescribing. This likely reflects the fact that prefilled orders do not address more systemic barriers affecting antibiotic use and guideline compliance.
{"title":"Prefilled Order Sentences via Free-Text Search for Community-Acquired Pneumonia: A Prospective Observational Study.","authors":"Thomas S Ledger, Sharifa Yeung, Yuen Su, Melissa T Baysari","doi":"10.1055/a-2706-3092","DOIUrl":"10.1055/a-2706-3092","url":null,"abstract":"<p><p>There is limited literature on prefilled order sentences, a form of prescription prefilled with dosage, route, and frequency information, and none on their effect in a targeted setting for community-acquired pneumonia, for which reported compliance is poor.Prefilled orders incorporated within computerized provider order entry systems (CPOE) may facilitate compliance guidelines by acting as a form of clinical decision support (CDS), providing a default choice for prescribers. We aim to assess the effect of prefilled order sentences on guideline-compliant prescribing.Prospective observational study featuring introduction of prefilled order sentences relating to community-acquired pneumonia. To assess guideline compliance based on the CURB-65 score, a scoring tool was used to assess the severity of community-acquired pneumonia. A study period of 6 months was chosen based on a sample size of 164 records with power of 80% to detect a 20% change in admissions that had guideline-compliant prescribing.The intervention was implemented on February 28, 2023, and data were extracted 6 months before and 6 months after. A total of 11,682 prescriptions were identified before the intervention, and 14,726 after the intervention. After screening and review, this corresponded to 75 and 53 eligible admissions before and after the intervention, which was lower than the anticipated sample size. The mean age of patients was 76.6 years old (sd. 17.3, range 24-97 years). There was a significant difference between before and after samples in the presence of confusion (17.3% before, and 37.7% after; <i>p</i> = 0.009). There was no significant difference in the other parameters of the CURB-65 score in the before and after patient groups. A mild CURB-65 score was reported in 35% of admissions (<i>n</i> = 45), a moderate score in 26% (<i>n</i> = 33), and a score of severe in 39% (<i>n</i> = 50). Less than half of all admissions (46.9%) had prescriptions that were compliant to antibiotic guidelines. Following the intervention, there was a nonsignificant decrease in overall compliance, with 50.7% of admissions having compliant prescriptions before, and 41.5% after intervention.Although unable to reach our planned sample size, the introduction of prefilled order sentences did not change guideline-compliant prescribing. This likely reflects the fact that prefilled orders do not address more systemic barriers affecting antibiotic use and compliance to guidelines.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1486-1492"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12566920/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145394416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-06-16 | DOI: 10.1055/a-2635-3820
A Fischer Lees, Andrew White, Michael Leu, Jeff Robinson, M Kennedy Hall, Robert Doerning
Appropriate Use Criteria Clinical Decision Support (AUC CDS) was legislatively mandated in the United States in 2014, and multiple CDS vendors were designated as qualified Clinical Decision Support Mechanisms by the Centers for Medicare and Medicaid Services. Little is known about the costs and benefits of these systems in real-world settings. We evaluated the effectiveness of an AUC CDS system and the time costs it imposes on clinicians at an academic medical center. We queried our U.S. academic medical center's enterprise data warehouse for AUC CDS alert events and timestamps occurring between July 1, 2021, and June 30, 2022. We calculated the percentage of altered orders and alert-related timespans and used these to calculate the CDS positive predictive value (PPV), time costs, and a cost-benefit ratio of minutes of provider time per altered order. Based on the medical literature and expert opinion on well-performing CDS, we hypothesized a CDS PPV of 8%. Overall PPV was 1%, leading us to reject our hypothesis that our CDS was well performing (p < 0.001). Median time costs per alert were high (12 seconds load time, 2 seconds dwell time), yielding a CDS cost-benefit ratio of 38 provider minutes per altered order. Despite using one of three market-leading AUC CDS tools, our CDS demonstrated long load times, short dwell times, and low PPV. Provider attention is not free: policymakers should consider both CDS effectiveness and costs (including time costs) when designing AUC policy.
{"title":"The Costs and Benefits of Clinical Decision Support for Radiology Appropriate Use Criteria: A Retrospective Observational Study.","authors":"A Fischer Lees, Andrew White, Michael Leu, Jeff Robinson, M Kennedy Hall, Robert Doerning","doi":"10.1055/a-2635-3820","DOIUrl":"10.1055/a-2635-3820","url":null,"abstract":"<p><p>Appropriate Use Criteria Clinical Decision Support (AUC CDS) was legislatively mandated in the United States in 2014, and multiple CDS vendors were designated as qualified Clinical Decision Support Mechanisms by the Centers for Medicare and Medicaid Services. Little is known about the costs and benefits of these systems in real-world settings.We evaluated the effectiveness of an AUC CDS system and the time costs it imposes on clinicians at an academic medical center.Our U.S. academic medical center's enterprise data warehouse was queried for AUC CDS alert events and timestamps occurring between July 1, 2021, and June 30, 2022. We calculated the percentage of altered orders and alert-related timespans, and used these to calculate CDS positive predictive value (PPV), time costs, and the cost-benefit ratio of minutes of provider time per altered order. Based on the medical literature and expert opinion on well-performing CDS, we hypothesized a CDS PPV of 8%.Overall PPV was 1%, leading us to reject our hypothesis that our CDS was well-performing (<i>p</i> < 0.001). Median time costs per alert were high (12 seconds load time, 2 seconds dwell time), yielding a CDS cost-benefit ratio of 38 provider minutes per altered order.Despite using one of three market-leading AUC CDS tools, our CDS demonstrated long load times, short dwell times, and low PPV. Provider attention is not free-policymakers should consider both CDS effectiveness and costs (including time costs) when designing AUC policy.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1658-1663"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594561/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144310651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-12-05 | DOI: 10.1055/a-2750-4422
Jessica Kemp, Hwayeon D Shin, Charlotte Pape, Alina Lee, Bay Bahri, Wei Wang, Sara Ling, Gillian Strudwick
Nurses are the largest group of electronic health record (EHR) users in Canada, yet their experiences with documentation burden remain underexplored. While EHR-generated usage data, such as audit logs and time-motion metrics, have been used to quantify documentation time, they are rarely used to understand EHR inefficiencies and identify potential changes to nursing documentation and workflows. Such an approach may help address documentation demands that detract from direct patient care and contribute to burnout, a problem nurses widely report. This study aimed to: (1) examine EHR utilization patterns and time spent by nurses across clinical venues and nurse types; (2) identify the EHR areas contributing most to nursing workload; (3) determine predictors of EHR time; and (4) assess differences in usage patterns across venues. We analyzed 12 months of EHR usage data from nurses at Canada's largest academic mental health hospital using Cerner Advance (Oracle Health). Seven metrics were selected in collaboration with a Nursing Advisory Council. Regression and least-squares means comparisons were conducted in R, with venue and nurse type as predictors. Data from 840 nurses revealed significant differences in EHR usage across venues and nurse types. Mean active time per patient per shift was highest in inpatient settings (19.3 minutes), followed by emergency (14.8 minutes) and ambulatory settings (6.3 minutes). Registered Practical Nurses (RPNs) averaged more active EHR time (20.1 minutes) than Registered Nurses (16.4 minutes). Documentation time per patient differed significantly across venues (F[3,832] = 71.97, p < 0.001) and nurse types (p = 0.0018). PowerForms time also varied significantly (F[3,818] = 102.1, p < 0.001). These findings support EHR optimization efforts targeted by clinical context and role. Significant variation exists in how nurses interact with EHRs, with documentation representing a substantial time burden, especially for RPNs and in inpatient settings. These findings emphasize the need for venue- and role-specific optimization strategies and underscore the importance of including nurses' voices in EHR design and quality improvement initiatives.
{"title":"Variations in Nursing Documentation Time in a Mental Health Setting: A Retrospective Observational Study of EHR Usage Data.","authors":"Jessica Kemp, Hwayeon D Shin, Charlotte Pape, Alina Lee, Bay Bahri, Wei Wang, Sara Ling, Gillian Strudwick","doi":"10.1055/a-2750-4422","DOIUrl":"10.1055/a-2750-4422","url":null,"abstract":"<p><p>Nurses are the largest group of electronic health record (EHR) users in Canada, yet their experiences with documentation burden remain underexplored. While EHR-generated usage data, such as audit logs and time-motion metrics, have been used to quantify documentation time, they are rarely used to better understand EHR inefficiencies and identify potential changes for nursing documentation and workflows. This approach may help address instances of documentation demands detracting from direct patient care and contributing to burnout, which has been largely reported by nurses.This study aimed to: (1) examine EHR utilization patterns and time spent by nurses across clinical venues and nurse types; (2) identify EHR areas contributing most to nursing workload; (3) determine predictors of EHR time; and (4) assess differences in usage patterns across venues.We analyzed 12 months of EHR usage data from nurses at Canada's largest academic mental health hospital using Cerner Advance (Oracle Health). Seven metrics were selected in collaboration with a Nursing Advisory Council. Regression and least-squares means comparisons were conducted using R, with venue and nurse type as predictors.Data from 840 nurses revealed significant differences in EHR usage across venues and nurse types. Mean active time per patient per shift was highest in inpatient (19.3 minutes), followed by emergency (14.8 minutes), and ambulatory settings (6.3 minutes). Registered Practical Nurses (RPNs) averaged more active EHR time (20.1 minutes) than Registered Nurses (16.4 minutes). Documentation time per patient was significantly different across venues (F [3,832] = 71.97, <i>p</i> < 0.001) and nurse types (<i>p</i> = 0.0018). PowerForms time also varied significantly (F [3,818] = 102.1, <i>p</i> < 0.001). These findings support targeted EHR optimization efforts based on clinical context and role.Significant variation exists in how nurses interact with EHRs, with documentation representing a substantial time burden, especially for RPNs and inpatient settings. These findings emphasize the need for venue and role-specific optimization strategies and underscore the importance of including nurses' voices in EHR design and quality improvement initiatives.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1799-1814"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12680479/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145688414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-06-02 | DOI: 10.1055/a-2621-0110
Danielle Jungst, Anthony Solomonides, Chad Konchak
Health equity is greatly affected by the systems and processes through which health systems deliver care. Given the minimal guidance on measuring and reporting health inequities tied to key population health outcomes, a solution for measuring health equity is proposed. The concept of a lens of equity was adopted to disaggregate common measures, such as breast cancer screening rates, to expose inequities across the neighborhoods and races of the populations served. Two measures were introduced into the corporate measurement systems: race/ethnicity as recorded in the electronic health record and a surrogate measure of family income. An equity category was added to system scorecards and counted toward corporate goals, along with data insights and discovery tools to support the breast cancer screening improvement teams. Over a 1-year timeframe, Endeavor Health not only met but exceeded its breast cancer screening equity goal, increasing mammography adherence from 73% to 82.6% among residents of the lowest-income neighborhoods served. The analytics and data systems that support complex health care measurement tools require diligent and thoughtful design to meet external reporting requirements and to support the internal teams working to improve the care of the populations served. The analytic approach presented may be readily extended to populations with other potentially impactful differences in social determinants and health status. A "lens-of-equity" tool may be established along similar lines, allowing policy and strategy initiatives to be appropriately targeted and successfully implemented.
{"title":"Introduction of a Health Care System Lens-of-Equity Measurement Strategy to Optimize Breast Cancer Screening.","authors":"Danielle Jungst, Anthony Solomonides, Chad Konchak","doi":"10.1055/a-2621-0110","DOIUrl":"10.1055/a-2621-0110","url":null,"abstract":"<p><p>Health equity is greatly impacted by the systems and processes with which health systems deliver care. Given the minimal guidance on measurement and reporting of health inequities specific to key population health outcomes, a solution for measurement of health equity is proposed.The concept of a <i>lens of equity</i> was adopted to disaggregate common measures such as breast cancer screening rates to expose inequities across neighborhoods and races in populations served. Two measures were introduced into the corporate measurement systems, race/ethnicity as measured in the electronic health record, and a surrogate measure of family income.An equity category was added to system scorecards and counted toward corporate goals along with data insights and discovery tools to support the efforts of the breast cancer screening improvement teams. Over a 1-year timeframe, Endeavor Health not only met but exceeded its breast cancer screening equity goal, increasing mammography adherence from 73 to 82.6% among residents in the lowest-income neighborhoods served.The analytics and data systems that support complex health care measurement tools require diligent and thoughtful design to meet external reporting requirements and support the internal teams who aim to improve the care of populations served. The analytic approach presented may be readily extended to populations with other potentially impactful differences in social determinants and health status. A \"lens-of-equity\" tool may be established along similar lines, allowing policy and strategy initiatives to be appropriately targeted and successfully implemented.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1550-1559"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594557/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144210011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-11-07 | DOI: 10.1055/a-2630-3204
Robert P Pierce, Adam Kell, Bernie Eskridge, Lea Brandt, Kevin W Clary, Kevin Craig
End-of-life care (EoLC), such as advance care planning, advance directives, hospice, and palliative care consults, can improve patient quality of life and reduce costs, yet such interventions are underused. Machine learning-based prediction models show promise in identifying patients who may be candidates for EoLC based on increased risk of short-term (less than 1 year) mortality. Clinical decision support systems using these models can identify candidate patients at a point in their care when care teams can increase the provision of EoLC. We evaluated changes in the provision of EoLC after implementation of a machine learning-based mortality prediction model in an academic health center. A clinical decision support system based on a random forest mortality prediction model is described. The system was implemented in an academic health system, first in the medical intensive care unit and then house-wide. An interrupted time series analysis was performed over the 16 weeks prior to and 43 weeks after the implementations. Primary outcomes were the rates of documentation of advance directives, palliative care consultations, and do-not-attempt-resuscitation (DNAR) orders among encounters with an alert for a PRISM score over 50% (PRISM positive) compared with those without an alert (PRISM negative). Following a steep preintervention decline, the rate of advance directive documentation improved immediately after implementation. However, the implementations were not associated with improvements in any of the other primary outcomes. Model discrimination was substantially worse than observed during model development, and after 16 months the model was withdrawn from production. A clinical decision support system based on a machine learning mortality prediction model failed to provide clinically meaningful improvements in EoLC measures. Possible causes for the failure include system-level factors, clinical decision support system design, and poor model performance.
{"title":"A Machine Learning-Based Clinical Decision Support System to Improve End-of-Life Care.","authors":"Robert P Pierce, Adam Kell, Bernie Eskridge, Lea Brandt, Kevin W Clary, Kevin Craig","doi":"10.1055/a-2630-3204","DOIUrl":"10.1055/a-2630-3204","url":null,"abstract":"<p><p>End-of-life care (EoLC), such as advance care planning, advance directives, hospice, and palliative care consults, can improve patient quality of life and reduce costs, yet such interventions are underused. Machine learning-based prediction models show promise in identifying patients who may be candidates for EoLC based on increased risk of short-term (less than 1 year) mortality. Clinical decision support systems using these models can identify candidate patients at a time during their care when care teams can increase the provision of EoLC.Evaluate changes in the provision of EoLC with implementation of a machine learning-based mortality prediction model in an academic health center.A clinical decision support system based on a random forest machine learning mortality prediction model is described. The system was implemented in an academic health system, first in the medical intensive care unit, then house-wide. An interrupted time series analysis was performed over the 16 weeks prior to and 43 weeks after the implementations. Primary outcomes were the rates of documentation of advance directives, palliative care consultations, and do not attempt resuscitation (DNAR) orders among encounters with an alert for PRISM score over 50% (PRISM positive) compared with those without an alert (PRISM negative).Following a steep preintervention decline, the rate of advance directive documentation improved immediately after implementation. However, the implementations were not associated with improvements in any of the other primary outcomes. The model discrimination was substantially worse than that observed in model development, and after 16 months, it was withdrawn from production.A clinical decision support system based on a machine learning mortality prediction model failed to provide clinically meaningful improvements in EoLC measures. Possible causes for the failure include system-level factors, clinical decision support system design, and poor model performance.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1637-1645"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594560/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145472359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-11-14 | DOI: 10.1055/a-2638-9340
Monisha Dilip, Craig Rothenberg, Reinier Van Tonder, Karen Jubanyik, Arjun K Venkatesh, Deborah Rhodes, Rohit B Sangal, Nancy Kim
Electronic health records (EHRs) are intended to improve clinical practice, but excessive alerts can be counterproductive, leading to workarounds. The Mortality Surprise Question (MSQ), a tool for identifying patients who might benefit from early end-of-life discussions, was integrated into the Emergency Department (ED) EHR admission process. This study investigated how the staged implementation of a clinical decision support tool at the point of admission order entry affected ED clinicians' admission order practices. This retrospective cohort study examined ED admission orders from 2023 across three EDs. Clinicians used either the Quicklist or the Disposition tab in the Epic EHR for admissions. The MSQ was introduced in two phases: first to the Quicklist on May 31, 2023, and then to the Disposition tab on September 11, 2023. Admissions from both tabs were analyzed pre- and post-MSQ implementation. Statistical analysis included chi-square tests comparing the admission source in the EHR after each implementation phase to examine changes in clinicians' admission workflow, with further stratification by clinician EHR experience. Overall, 53,897 patients were admitted from the ED, with 29,542 (55%) admissions via the Quicklist and 24,355 (45%) via the Disposition tab. A statistically significant difference was found in Quicklist admission proportions before and after MSQ implementation in both workflows. Compared with clinicians with less than 2 years of EHR experience, clinicians with 2 to 4 years of EHR use were less likely to use the Quicklist after MSQ implementation, whereas those with over 4 years of use were more likely to use it. The MSQ disrupted established workflows, prompting clinicians to initially adopt more effortful alternatives to avoid the new cognitive task. Embedding the MSQ into those alternatives reduced resistance, highlighting that removing optionality promotes adoption. Accounting for clinician habits and potential workarounds can enhance the integration and efficiency of new quality improvement measures.
{"title":"Effect of a Clinical Decision Support Tool for Identifying Patients Benefiting from End-of-Life Discussions on Emergency Department Clinician Behavior.","authors":"Monisha Dilip, Craig Rothenberg, Reinier Van Tonder, Karen Jubanyik, Arjun K Venkatesh, Deborah Rhodes, Rohit B Sangal, Nancy Kim","doi":"10.1055/a-2638-9340","DOIUrl":"10.1055/a-2638-9340","url":null,"abstract":"<p><p>Electronic health records (EHRs) are intended to improve clinical practice, but excessive alerts can be counterproductive, leading to workarounds. The Mortality Surprise Question (MSQ), a tool for identifying patients who might benefit from early end-of-life discussions, was integrated into the Emergency Department (ED) EHR admission process.This study investigated how the staged implementation of a clinical decision support tool at the point of admission order entry affected ED clinician admission order practices.This retrospective cohort study examined ED admission orders from 2023 across three EDs. Clinicians used either the Quicklist or Disposition tab in the Epic EHR for admissions. The MSQ was introduced in two phases, first to the Quicklist on May 31, 2023, and then to the Disposition tab on September 11, 2023. Admissions from both tabs were analyzed pre- and post-MSQ implementation. Statistical analysis included chi-square testing to compare the admission source in the EHR after each phase of implementation of the MSQ to examine changes in the clinicians' admission workflow, with further categorization based on clinician EHR experience.Overall, 53,897 patients were admitted from the ED, with 29,542 (55%) admissions via the Quicklist and 24,355 (45%) via the Disposition tab. A statistically significant difference was found in Quicklist admission proportions before and after MSQ implementation in both workflows. As compared with clinicians with less than 2 years of experience with the EHR, clinicians with 2 to 4 years of EHR use were less likely to use the Quicklist after MSQ implementation, whereas those with over 4 years of use were more likely to use it.The MSQ disrupted established workflows, prompting clinicians to initially adopt more effortful alternatives to avoid the new cognitive task. Embedding the MSQ into these alternatives reduced resistance, highlighting that removing optionality promotes adoption. Accounting for clinician habits and potential workarounds can enhance the integration and efficiency of new quality improvement measures.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1677-1682"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618147/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145524294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-10-28 | DOI: 10.1055/a-2702-6872
Sarah W Chen, Michael Gannon, John L Kilgallon, Weng Ian Chay, David Rubins, Hojjat Salmasian, Sayon Dutta, Dustin S McEvoy, Edward Wu, Adam Wright, Allison McCoy, Lipika Samal
Clinical decision support (CDS) systems have been widely adopted across clinical settings to promote evidence-based practice. CDS malfunctions often degrade the user experience and directly or indirectly interfere with patient care. To maintain optimal performance, it is critical to continuously monitor a tool's performance and react promptly when malfunctions are identified. This study aimed to describe malfunctions identified during the development and implementation of a CDS alert, along with lessons learned. A pragmatic randomized controlled trial of a CDS alert for primary care patients with chronic kidney disease and uncontrolled blood pressure was conducted. The alert included prechecked default orders for medication initiation or titration, a basic metabolic panel, and a nephrology electronic consult. Alert monitoring involved retrospective chart review and review of alert firing reports. Eight CDS malfunctions were identified. The most common causes were conceptualization and build errors. Provider feedback and retrospective chart review were the primary means of identifying the root cause of malfunctions. Our findings highlight the need for CDS interventions to be continuously monitored through chart review, alert firing reports, and opportunities for provider feedback. Lessons learned from CDS malfunctions can be applied to improve provider trust in automated electronic health record-based alerts, reduce administrative burden, and prevent inappropriate alert recommendations that can negatively affect patient outcomes. This study is registered with ClinicalTrials.gov (identifier: NCT03679247).
{"title":"Applying an Empirical Taxonomy to Alert Malfunctions in a Pragmatic Trial for Hypertension Management in Chronic Kidney Disease.","authors":"Sarah W Chen, Michael Gannon, John L Kilgallon, Weng Ian Chay, David Rubins, Hojjat Salmasian, Sayon Dutta, Dustin S McEvoy, Edward Wu, Adam Wright, Allison McCoy, Lipika Samal","doi":"10.1055/a-2702-6872","DOIUrl":"10.1055/a-2702-6872","url":null,"abstract":"<p><p>Clinical decision support (CDS) systems have been widely adopted across clinical settings to promote evidence-based practice for clinicians. CDS malfunctions often affect the user experience and indirectly or directly interfere with patient care. To enhance optimal performance, it is critical to constantly monitor the performance of the tool and react promptly when malfunctions are identified.This study aimed to describe malfunctions identified in the development and implementation of a CDS alert as well as lessons learned.A pragmatic randomized controlled trial of a CDS alert for primary care patients with chronic kidney disease and uncontrolled blood pressure was conducted. The alert included prechecked default orders for medication initiation or titration, basic metabolic panel, and nephrology electronic consult. Alert monitoring involved retrospective chart review and review of alert firing reports.Eight CDS malfunctions were identified. The most common causes of malfunctions were due to conceptualization and build errors. Provider feedback and retrospective chart review were the primary methods of identifying the root cause of malfunctions.Our findings highlight the need for CDS interventions to be continuously monitored through chart review, alert firing reports, and opportunities for provider feedback. Lessons learned from CDS malfunctions can be implemented to improve provider trust in automated electronic health record-based alerts, reduce administrative burden, and prevent inappropriate alert recommendations that can negatively affect patient outcomes. This study is registered with Clinivaltrials.gov (identifier: NCT03679247).</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1457-1464"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12566919/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145394409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-12-12 | DOI: 10.1055/a-2767-1161
Ellen Overson, Jacob Wagner, James Grace, Melissa Haala, Bradley Burns, Abraham Jacob, Rebecca Markowitz
Many academic medical centers (AMCs) rely on systems like the Vizient Quality and Accountability Scorecard to track quality metrics such as the observed-to-expected (O/E) mortality ratio. The O/E mortality ratio calculation relies on clinical documentation: missed documentation of diagnoses and risk factors for mortality leads to an underestimated expected mortality, which inflates the O/E metric. We aimed to reduce our O/E mortality ratio from a median of 1.08 (± 0.10) to a median well below 0.90 within 12 months by improving the accuracy of clinical documentation. We used a continuous quality improvement process that began with creating a rule-based tool within a standardized documentation template. The tool was designed to pull pertinent discrete electronic health record data into clinician documentation. It pulled in only data that were present on admission and prioritized inclusion of frequently missed risk factors identified from prior coding query data. We then formed a multidisciplinary mortality review committee in which providers reviewed mortality cases, suggested documentation clarifications, and identified diagnoses and risk factors present in the patient but missing from the documentation. We leveraged the committee's expertise and feedback to improve the rule-based clinical tool. Over the 21 months following implementation, the median O/E mortality ratio decreased by 30%, from 1.08 (± 0.10) to 0.72 (± 0.13), and consistently remained below prior levels. Importantly, the intervention also reduced the total number of coding queries sent to clinicians, indicating a lower administrative burden for clinicians and coders. Our interventions produced a clear improvement in the O/E mortality ratio at our AMC and in the expected mortality percentage relative to similar institutions, without significantly increasing the burden on clinicians or coding specialists.
{"title":"Improving the Observed-to-Expected Mortality Ratio with the Combination of Standardized Documentation and a Multidisciplinary Mortality Review Committee.","authors":"Ellen Overson, Jacob Wagner, James Grace, Melissa Haala, Bradley Burns, Abraham Jacob, Rebecca Markowitz","doi":"10.1055/a-2767-1161","DOIUrl":"10.1055/a-2767-1161","url":null,"abstract":"<p><p>Many academic medical centers (AMCs) rely on systems like the Vizient Quality and Accountability Scorecard to track quality metrics such as the observed-to-expected (O/E) mortality ratio. The O/E mortality ratio calculation relies on clinical documentation. Missed documentation of diagnoses and risk factors for mortality leads to an underestimated expected mortality, which negatively affects the O/E metric.We aimed to reduce our O/E mortality ratio from a median of 1.08 (± 0.10) to a median well below 0.90 within 12 months by improving the accuracy of clinical documentation.We used a continuous quality improvement process that began with creating a rule-based tool within a standardized documentation template. The tool was designed to pull pertinent discrete electronic health record data into clinician documentation. The tool only pulled in data that were present on admission, and it especially prioritized inclusion of frequently missed risk factors according to prior coding query data. We then formed a multidisciplinary mortality review committee where providers reviewed mortality cases, made suggestions for documentation clarification, and found potential diagnoses and risk factors that the patient had which were missing from the documentation. We then leveraged the committee's expertise and feedback to improve the rule-based clinical tool.Over the 21-month period following implementation, the median O/E mortality ratio decreased by 30%, from 1.08 (± 0.10) to 0.72 (± 0.13) and consistently remained below the prior levels. Importantly, the intervention also led to a reduction in the total number of coding queries sent to clinicians, indicating a lower administrative burden for clinicians and coders.Our interventions showed a clear improvement in the O/E mortality ratio at our AMC and in the expected mortality percentage compared with other similar institutions without significantly increasing burden on clinicians or coding specialists.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1909-1916"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12737979/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145745291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-11-07 | DOI: 10.1055/a-2721-6170
Albert D Piersson, George Nunoo, Evans Tettey, Nicholas Otumi
The effective operation of magnetic resonance imaging (MRI) systems relies on physical interactions with complex imaging environments, equipment, and user interfaces (UIs). However, there are limited empirical data on how physical interactions with MRI equipment and accessories, workspace configuration, MRI UI design, and technical proficiency influence clinical workflow. In this study, a cross-sectional survey was conducted among MRI end-users across public and private health facilities (n = 13), using a structured questionnaire to assess demographics, patient positioning and equipment handling, MRI workspace adequacy, interface usability (guided by Nielsen's heuristics), and self-reported MRI skill proficiency. The predominant field strength of scanners in current use was 1.5T, and General Electric was the most frequently used scanner brand. Most respondents received their MRI training from nonvendor sources, such as academic institutions or peer-based instruction, rather than directly from equipment manufacturers. High ease-of-use ratings were reported for patient positioning and equipment handling tasks, and workspace adequacy was mostly rated as very adequate to highly adequate. Users with computed tomography experience showed moderate-to-high proficiency in MRI pulse sequencing and image optimization but lower proficiency in quality assurance and physiologic monitoring. Help documentation within the MRI interface received the lowest usability scores. No significant differences in usability or proficiency were found between users trained by vendors and those trained by nonvendors (U = 8.5-15.0; p = 0.376-0.921). Opportunities exist to enhance clinical workflow and patient throughput by refining error-handling features, improving support documentation, reinforcing ongoing professional development, and re-evaluating training delivery through iterative, multimedia-based learning modules and regular postinstallation refresher sessions. End-user input into UI design and user feedback analysis should be prioritized to improve system usability and clinical efficiency.
{"title":"User-Centered Assessment of MRI Equipment Flexibility, Workspace Adequacy, User Interface Usability, and Technical Proficiency.","authors":"Albert D Piersson, George Nunoo, Evans Tettey, Nicholas Otumi","doi":"10.1055/a-2721-6170","DOIUrl":"10.1055/a-2721-6170","url":null,"abstract":"<p><p>The effective operation of magnetic resonance imaging (MRI) systems relies on physical interactions with complex imaging environments, equipment, and user interfaces (UIs). However, there is limited empirical data evaluating how physical interactions with MRI equipment and accessories, workspace configuration, MRI UI design, and technical proficiency influence clinical workflow.In this study, a cross-sectional survey was conducted among MRI end-users, across public and private health facilities (<i>n</i> = 13), using a structured questionnaire to assess demographics, patient positioning and equipment handling, MRI workspace adequacy, interface usability (guided by Nielsen's heuristics), and self-reported MRI skill proficiency.The predominant field strength of scanners in current use was 1.5T. General Electric was the most frequently used MRI scanner brand. Most respondents received their MRI training from nonvendor sources-such as academic institutions or peer-based instruction-rather than directly from equipment manufacturers. High ease-of-use ratings were reported for patient positioning and equipment handling tasks. Workspace adequacy was mostly rated as very adequate to highly adequate. Computed Tomography-experienced users showed moderate-to-high proficiency in MRI pulse sequencing and image optimization. However, lower proficiency was noted in quality assurance and physiologic monitoring. Help documentation within the MRI interface received the lowest usability scores. No significant differences in usability or proficiency were found between those trained by vendors versus nonvendors (<i>U</i> = 8.5-15.0; <i>p</i> = 0.376-0.921).Opportunities exist to enhance clinical workflow and patient throughput by refining error-handling features, improving support documentation, reinforcing ongoing professional development, and re-evaluating training delivery by incorporating iterative, multimedia-based learning modules and regular postinstallation refresher sessions. End-user input in UI design and user feedback analysis should be prioritized to improve system usability and clinical efficiency.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1595-1605"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594564/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145472367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}