Pub Date: 2025-10-01 | Epub Date: 2025-11-28 | DOI: 10.1055/a-2740-1587
Oluwatoba Moninuola, John M Grisham, Naveed Farrukh, Leanne Murray, Aarti Chandawarkar, Laura Rust, Alysha J Taxter, Juan D Chaparro, Jeffrey Hoffman, Jennifer A Lee
This study aimed to evaluate the effect of optimizing the ambulatory medication preference list on provider efficiency in medication ordering. Using electronic health record (EHR) vendor data, a multidisciplinary informatics team optimized the general ambulatory medication preference list to better align with providers' ordering patterns. We conducted a pre-postintervention analysis assessing time-in-orders per encounter and the number of manual changes per order. Postintervention, the average number of manual changes per order decreased from 4.12 to 3.00 (p < 0.01), and the median time spent in the orders activity per encounter decreased from 3.1 to 2.3 minutes (p < 0.01). Optimizing the ambulatory medication preference list reduced the time spent and the clicks needed by providers when ordering medications. This is relevant to ongoing efforts to address EHR-related burden.
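A pre/post comparison of this kind can be sketched as follows. The data below are synthetic draws seeded to echo the reported changes (mean manual changes 4.12 to 3.00; median minutes 3.1 to 2.3) and are not the study's dataset; the choice of tests is an illustrative assumption, not the authors' stated analysis plan.

```python
# Hedged sketch of a pre/post efficiency comparison on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated manual-changes-per-order counts before and after the
# preference-list optimization (Poisson means echo 4.12 -> 3.00).
pre = rng.poisson(4.12, size=500)
post = rng.poisson(3.00, size=500)

# Unpaired comparison of the average number of changes per order.
t_stat, p_mean = stats.ttest_ind(pre, post, equal_var=False)

# Median time-in-orders is right-skewed, so a rank-based test fits better.
pre_time = rng.exponential(3.1, size=500)
post_time = rng.exponential(2.3, size=500)
u_stat, p_median = stats.mannwhitneyu(pre_time, post_time)

print(f"mean changes: {pre.mean():.2f} -> {post.mean():.2f} (p={p_mean:.3g})")
```
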
"A Prescription for Efficiency: The Effect of an Ambulatory Medication Preference List Optimization." Applied Clinical Informatics 16(5): 1794-1798. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12662729/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-10-17 | DOI: 10.1055/a-2701-4543
Courtney J Diamond, Jonathan E Elias, Rachel Y Lee, Haomiao Jia, Erika L Abramson, Jessica S Ancker, Susan Bostwick, Kenrick D Cato, Richard Trepp, Rachel A Lewis, Timothy J Crimmins, Sarah C Rossetti
The adoption of electronic health records (EHRs) into clinical practice has changed clinical workflows and, in some cases, increased documentation burden and clinician burnout. Identifying factors associated with perceived EHR usability after the implementation of a new EHR may guide efforts to reduce burden and burnout. This study measured: (1) group-level perceptions of EHR usability pre- and postimplementation of a new EHR; (2) adaptation to the new EHR; and (3) the effects of clinical role, setting, and specialty on these measures. Pre- and postimplementation surveys were sent to clinical staff at two academic medical centers (AMC A and AMC B), each part of the same Northeast health system, where one instance of a new EHR was implemented starting in 2020. The surveys measured constructs from the Health Information Technology Usability Evaluation Scale (Health-ITUES) and the Health Information Technology Adaptation survey. Unpaired t-tests assessed changes in group-level scores from pre- to postimplementation, and multiway analyses of variance with post hoc pairwise t-tests with Bonferroni's correction were used to assess differences in scores by clinical role, setting, and specialty. Average Perceived Usefulness (PU) and adaptation scores were higher at AMC B than at AMC A, but similar pre- to postimplementation trends were observed at both sites. Perceptions of Quality of Work Life (QWL), PU, and User Control (UC) improved across both sites postimplementation, whereas Perceived Ease of Use and Cognitive Support and Situational Awareness declined. Ordering providers, registered nurses, clinicians practicing in the emergency department setting, and Emergency Medicine and Critical/Intensive Care specialists had statistically different scores across various constructs. After implementation of a new EHR system at two AMCs, clinical staff perceptions of QWL, PU, and UC generally improved, although perceived ease of use and cognitive support declined.
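The comparison strategy named here (an omnibus ANOVA across groups, then pairwise t-tests with Bonferroni's correction) can be sketched on synthetic data. The role names, group means, and sample sizes below are invented for illustration and do not come from the study.

```python
# Illustrative sketch: one-way ANOVA across roles, then Bonferroni-
# corrected post hoc pairwise t-tests. All scores are synthetic.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
roles = {
    "ordering_provider": rng.normal(3.2, 0.6, 80),
    "registered_nurse": rng.normal(3.8, 0.6, 80),
    "other_staff": rng.normal(3.5, 0.6, 80),
}

# Omnibus test: do usability scores differ across roles at all?
f_stat, p_anova = stats.f_oneway(*roles.values())

# Post hoc pairwise t-tests with Bonferroni's correction.
pairs = list(itertools.combinations(roles, 2))
alpha_adj = 0.05 / len(pairs)  # corrected significance threshold
results = {}
for a, b in pairs:
    _, p = stats.ttest_ind(roles[a], roles[b], equal_var=False)
    results[(a, b)] = (p, p < alpha_adj)
```

The study used multiway ANOVA (role, setting, and specialty together); the one-way version above only shows the post hoc correction pattern.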
"Evaluating Clinical Staff Perceptions of EHR Usability, Satisfaction, and Adaptation to a New EHR: A Multisite, Pre-Post Implementation Study." Applied Clinical Informatics 16(5): 1368-1380. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534127/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-09-26 | DOI: 10.1055/a-2710-4226
Theresa Burkard, Montse Camprubi, Daniel Prieto-Alhambra, Peter Rijnbeek, Marta Pineda-Moncusi
Federated network studies allow data to remain local while research is conducted through the sharing of analytical code and aggregated results across different health care settings and countries. A large number of databases have been mapped to the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM), boosting the use of analytical pipelines for standardized observational research within this open science framework. Transparency, reproducibility, and robustness of results have positioned federated analyses using the OMOP CDM within the European Health Data and Evidence Network (EHDEN) as an essential tool for generating large-scale evidence. We conducted large-scale federated analyses involving 52 databases from 19 countries using the OMOP CDM. In this State-of-the-Art/Best Practice article, we aim to share key lessons and strategies for conducting such complex, large multidatabase analyses. Meticulous planning, a strong community of collaborators, efficient communication channels, standardized analytics, and a strategic division of responsibilities are essential. We highlight the benefits of network engagement, cross-fertilization of ideas, and shared learning. Further key elements contributing to the study's success included an inclusive, incremental implementation of the analytical code, timely engagement of data partners, and community webinars to discuss and interpret study findings. We received predominantly positive feedback from data custodians about their participation, and we have incorporated their input into suggestions for further improving future large-scale federated network studies.
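The core federated principle described here (identical analytical code runs at each site; only aggregates leave) can be sketched minimally. The site names, record shapes, and function names below are invented for illustration and are not part of the OMOP/EHDEN tooling.

```python
# Minimal sketch of federated aggregation: row-level data stays local,
# only aggregate counts are shared with the coordinating center.
from dataclasses import dataclass

@dataclass
class LocalResult:
    site: str
    n_patients: int
    n_events: int

def run_local_analysis(site: str, patient_rows: list[dict]) -> LocalResult:
    # Executed inside each data partner's environment; patient_rows
    # never leaves the site, only the counts in LocalResult do.
    events = sum(1 for row in patient_rows if row["has_event"])
    return LocalResult(site=site, n_patients=len(patient_rows), n_events=events)

def pool(results: list[LocalResult]) -> float:
    # Central coordinator pools the shared aggregates into a network rate.
    total_n = sum(r.n_patients for r in results)
    total_events = sum(r.n_events for r in results)
    return total_events / total_n

site_a = run_local_analysis("A", [{"has_event": True}, {"has_event": False}])
site_b = run_local_analysis("B", [{"has_event": True}, {"has_event": True}])
network_rate = pool([site_a, site_b])
```
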
"Best Practices to Design, Plan, and Execute Large-Scale Federated Analyses-Key Learnings and Suggestions from a Study Comprising 52 Databases." Applied Clinical Informatics, pp. 1507-1517. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575070/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-11-14 | DOI: 10.1055/a-2639-4974
Olena Mazurenko, Nate C Apathy, Lindsey M Sanner, Emma McCord, Meredith C B Adams, Robert W Hurley, Randall W Grout, Burke Mamlin, Saura Fortin, Justin Blackburn, Nir Menachemi, Joshua R Vest, Matthew Gurka, Christopher A Harle
This study aimed to assess the effect of a passive, opt-in electronic health record (EHR)-based clinical decision support (CDS) tool, the Chronic Pain OneSheet, on guideline-recommended chronic pain management in primary care. A pragmatic randomized controlled trial with a parallel-group design was conducted between October 2020 and May 2022. Participants were 137 primary care clinicians (PCCs) treating qualifying patients with chronic pain at 25 primary care clinics within two academic health systems in the United States. PCCs were randomized in the EHR to have access to OneSheet or usual care. OneSheet aggregates guideline-relevant information in a single view and provides shortcuts to guideline-recommended actions (e.g., ordering urine drug screening [UDS] for patients prescribed opioids). We constructed five visit-level binary outcomes: (1) documenting pain-related goals; (2) documenting pain and function via the Pain, Enjoyment of Life, and General Activity (PEG) scale; (3) reviewing prescription drug monitoring programs (PDMPs); (4) ordering UDS; and (5) ordering naloxone. A generalized linear mixed model was fit for each outcome. OneSheet access minimally increased rates of pain-related goal documentation (0.2 percentage point increase, p = 0.013), PEG scale documentation (0.7 percentage point increase, p < 0.001), and UDS orders (2.2 percentage point increase, p = 0.006). OneSheet access decreased the rate of naloxone ordering (0.5 percentage point decrease, p < 0.001) and did not affect PDMP review rates (0.5 percentage point decrease, p = 0.382). Despite a robust user-centered design incorporating clinician input and EHR integration, OneSheet access did not produce clinically significant improvements in guideline-recommended management of chronic pain in primary care. Several factors likely limited OneSheet's effectiveness, including a limited ability to target certain patient visits, workflow limits on data collection and ordering, and evolving COVID-19 and opioid epidemic-related policies and procedures. These findings highlight specific limitations of OneSheet and the broader challenges of implementing effective EHR-based CDS in complex health care environments.
"Learning from Misses: Evaluating a Clinical Decision Support for Chronic Pain Management in Primary Care." Applied Clinical Informatics 16(5): 1683-1694. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618150/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-09-15 | DOI: 10.1055/a-2700-7036
Victoria L Tiase, Patrice Hicks, Haddy Bah, Ainsley Snow, Devin M Mann, David A Feldstein, Wendy Halm, Paul D Smith, Rachel Hess
The overuse and misuse of antibiotics is an urgent health care problem and one of the key drivers of antibiotic resistance. Validated clinical prediction rules have shown effectiveness in guiding providers to an appropriate diagnosis and identifying when antibiotics are the recommended treatment. We aimed to study the ability of registered nurses using clinical prediction rules to guide the management of acute respiratory infections in a simulated environment, compared with practicing primary care physicians. We evaluated a case-based simulation of the diagnosis and treatment of acute respiratory infections using clinical prediction rules. As a secondary outcome, we examined nursing self-efficacy by administering a survey before and after the case evaluations. Participants included 40 registered nurses from three academic medical centers and five primary care physicians as comparators. Participants evaluated six simulated case studies: three for patients presenting with cough symptoms and three for sore throat. Nurses determined risk and treatment for simulated sore throat cases using clinical prediction rules with 100% accuracy in low-risk cases, versus 80% for physicians. We found great variability in the accuracy of risk-level determination and appropriate treatment for cough cases. Nurses reported slight increases in self-efficacy from baseline to postcase evaluation, suggesting that further information is needed to understand this relationship. Clinical prediction rules used by nurses in sore throat management workflows can guide accurate diagnosis and treatment in simulated cases, while cough management requires further exploration. Our results support the future implementation of automated prediction rules in a clinical decision support tool and a thorough examination of their effect on clinical practice and patient outcomes.
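The abstract does not name the specific prediction rules used; as an illustration of the kind of rule involved, the sketch below implements a Centor-style sore-throat score (one point per classic criterion) with a hypothetical risk banding. Treat both the rule choice and the band cutoffs as assumptions, not the study's instrument.

```python
# Hypothetical illustration of a sore-throat clinical prediction rule
# (Centor-style criteria); NOT the rule set used in the study.
def centor_score(fever: bool, no_cough: bool,
                 tender_nodes: bool, tonsillar_exudate: bool) -> int:
    """One point per criterion; higher scores suggest higher strep risk."""
    return sum([fever, no_cough, tender_nodes, tonsillar_exudate])

def risk_band(score: int) -> str:
    # Illustrative banding: low scores argue against testing/antibiotics.
    if score <= 1:
        return "low"
    if score <= 3:
        return "intermediate"
    return "high"

band = risk_band(centor_score(True, True, False, False))
```
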
"Nursing Performance Using Clinical Prediction Rules for Acute Respiratory Infection Management: A Case-Based Simulation." Applied Clinical Informatics, pp. 1359-1367. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534126/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-11-14 | DOI: 10.1055/a-2641-0265
Kerstin Jorsäter Blomgren, Johan Fastbom
Clinical decision support systems (CDSS) have been suggested to be helpful in detecting and preventing drug-related problems such as adverse drug events (ADEs). However, patient participation systems that monitor self-reported data, such as symptoms, are still sparsely described in the literature. This study aimed to investigate whether the use of a patient-participating CDSS (PCDSS) can facilitate early detection of ADEs, thereby contributing to safer drug treatment in older adults. We conducted a 1-year prospective observational study of elderly patients using a free web-based PCDSS to register symptoms over time at home. Initially, the PCDSS analyzed the extent and quality of each patient's drug use, based on a Swedish national set of criteria, and assessed drug-related symptoms using a standardized scale (PHASE-20). Thereafter, the patients recorded symptoms at home for 1 year: the first 6 months in free text, the second 6 months by selecting from 19 predefined symptoms. The PCDSS signaled when symptoms were registered on three occasions within a 3-week period. The patient was then asked to contact his or her nurse at the healthcare center (HCC) for assessment of the symptoms and decisions on further contacts with the nurse or doctor. We analyzed the extent of signals generated, accompanying contacts, and associated medication reviews and adjustments. The 48 study participants registered 1,275 symptoms during the monitoring period, 61% of them by women. The PCDSS generated a total of 171 signals, of which 58% came from women. Seventy-one percent (121) occurred during the first (free-text) registration period. Of all signals, 44% (75) led to activities at the HCC, of which 48% (36) were physician contacts. In total, these contributed to medication reviews in 42% (15) and medication adjustments in 64% (23), with a total of 33 adjustments. Patient participation by self-reporting symptoms via a PCDSS can contribute to safer drug use.
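The signaling rule stated in the abstract (three symptom registrations within a 3-week period) can be sketched as a sliding-window check. The dates and function name below are illustrative; the PCDSS's actual implementation is not published in the abstract.

```python
# Sketch of the PCDSS signaling rule: flag when a symptom is registered
# on three occasions within any 3-week window. Dates are invented.
from datetime import date, timedelta

def should_signal(registrations: list[date],
                  occasions: int = 3,
                  window: timedelta = timedelta(weeks=3)) -> bool:
    """Return True if any `occasions` registrations fall within `window`."""
    days = sorted(registrations)
    for i in range(len(days) - occasions + 1):
        # Window check: distance from first to last of the run of three.
        if days[i + occasions - 1] - days[i] <= window:
            return True
    return False

regs = [date(2025, 1, 1), date(2025, 1, 8), date(2025, 1, 15)]
```
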
"Patient Participation in Monitoring Potential Adverse Drug Events." Applied Clinical Informatics 16(5): 1709-1719. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618149/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-12-11 | DOI: 10.1055/a-2753-9439
David Powering, Nico Humig, Eva Rothgang
The German Federal Ministry of Health introduced the Pflegepersonalregelung 2.0 (PPR 2.0) to address the nursing staffing crisis. It establishes a framework to determine personnel requirements and ensure adequate staffing. However, the required daily classification of patient care levels imposes a significant administrative burden on nursing staff. Digitizing this process may reduce documentation time and enhance efficiency, but its effectiveness depends on usability and acceptance. This study evaluates the acceptance and usability of a direct digitization of the analog PPR 2.0 classification catalog into a digital user interface, the PPR 2.0 Calculator. A mixed-methods approach was used, combining quantitative assessment using the Technology Acceptance Model 3 (TAM 3) and the System Usability Scale (SUS) with qualitative insights from a semistructured interview. Fifteen nursing staff members from a pediatric rheumatology clinic in Germany participated. The PPR 2.0 Calculator was rated highly usable, with strong scores for Perceived Ease of Use (4.00) and Computer Self-Efficacy (4.09). Participants required minimal technical support, indicating an intuitive interface. However, Perceived Usefulness (2.82) and Job Relevance (2.53) scores were lower, suggesting limited value in daily workflows. The SUS score (65.50) was slightly below the benchmark of 68, indicating good usability with moderate room for improvement. Digitizing the analog PPR 2.0 catalog resulted in good usability, but significant challenges regarding practical relevance and workflow integration remained. Directly adopting the catalog content negatively affected perceived usefulness and job relevance, revealing limitations in the classification framework itself. Refinement of the PPR 2.0 framework is needed to reflect real-world clinical nursing tasks. Seamless integration with existing infrastructures and structured documentation is also critical. Future improvements should go beyond simple digitization and explore automated classification features.
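The SUS score reported above follows a standard formula: odd items contribute (response - 1), even items (5 - response), and the sum is multiplied by 2.5 for a 0-100 score. The responses below are invented; only the scoring rule is standard.

```python
# Standard System Usability Scale (SUS) scoring.
def sus_score(responses: list[int]) -> float:
    """responses: ten 1-5 Likert answers, item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral answers (3 on every item) land at the midpoint of 50.
neutral = sus_score([3] * 10)
```

A score of 65.50, as in the study, therefore sits just below the commonly cited benchmark of 68.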
"Acceptance and Usability of a Web Application for Patient Care Level Classification in German Clinical Nursing Care: A Pilot Study." Applied Clinical Informatics 16(5): 1828-1836. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12698287/pdf/
Pub Date : 2025-10-01Epub Date: 2025-12-16DOI: 10.1055/a-2771-6216
Ibrahim S Karakus, Shashank Gupta, Rana Gur, Marco A Bracamonte Aranibar, Abdelhamed Elgazar, Hossam Gad, Xuechao Hao, Fabio Morales Salas, Sude Kilickaya, Alexander Niven, Oguz Kilickaya, Amelia Barwise
Effective communication in the intensive care unit (ICU) is essential, particularly for patients with non-English language preference, yet timely access to professional interpreters remains limited. While artificial intelligence (AI)-based translation tools have been explored in outpatient and nonacute care settings, studies evaluating their use in acute care environments such as the ICU are scarce. To address this gap, we developed AI-TransLATE (AI-enhanced Transition to Language-Agnostic Transcultural Engagement), a speech-based translation tool designed for multilingual communication in critical care settings. This study aimed to assess the interpretation quality of AI-TransLATE across four languages (Spanish, Chinese, Arabic, and Turkish) using scripted ICU scenarios. We created ICU communication scripts and recorded bilingual research team members simulating clinical interactions. Two independent bilingual evaluators assessed interpretation quality using a 5-point Likert scale across fluency, adequacy, meaning preservation, and severity of errors. Clarity and cultural appropriateness were also rated. Percentage agreement was used to assess interrater agreement. AI-TransLATE achieved acceptable composite scores (≥16/20) across all languages. Spanish and Turkish performed consistently well; Chinese and Arabic showed variability due to omissions and terminology errors. AI-TransLATE shows promise as a clinical communication tool, but further evaluation in real-world, unscripted ICU settings is needed.
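The evaluation above sums four 5-point dimension ratings into a composite (acceptable at ≥16/20) and uses percentage agreement between the two evaluators. A minimal sketch of both computations (the dimension names follow the abstract; the exact rubric is an assumption):

```python
DIMENSIONS = ("fluency", "adequacy", "meaning_preservation", "error_severity")

def composite_score(ratings):
    """Sum the four 5-point dimension ratings into a 4-20 composite."""
    return sum(ratings[d] for d in DIMENSIONS)

def is_acceptable(ratings, threshold=16):
    """Composite scores of >=16/20 were treated as acceptable."""
    return composite_score(ratings) >= threshold

def percent_agreement(rater_a, rater_b):
    """Interrater agreement as the share of items given identical ratings."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be non-empty and the same length")
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Two evaluators rating the same simulated clip
eval_1 = {"fluency": 5, "adequacy": 4, "meaning_preservation": 5, "error_severity": 4}
eval_2 = {"fluency": 5, "adequacy": 4, "meaning_preservation": 4, "error_severity": 4}
print(composite_score(eval_1))  # -> 18
print(percent_agreement([eval_1[d] for d in DIMENSIONS],
                        [eval_2[d] for d in DIMENSIONS]))  # -> 0.75
```

Percentage agreement is the simplest interrater measure; unlike Cohen's kappa it does not correct for chance agreement, which matters when ratings cluster at the top of the scale.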
{"title":"AI-TransLATE: Validation of a Speech-Based Multilingual Interpretation Tool in Critical Care.","authors":"Ibrahim S Karakus, Shashank Gupta, Rana Gur, Marco A Bracamonte Aranibar, Abdelhamed Elgazar, Hossam Gad, Xuechao Hao, Fabio Morales Salas, Sude Kilickaya, Alexander Niven, Oguz Kilickaya, Amelia Barwise","doi":"10.1055/a-2771-6216","DOIUrl":"10.1055/a-2771-6216","url":null,"abstract":"<p><p>Effective communication in the intensive care unit (ICU) is essential, particularly for patients with non-English language preference, yet timely access to professional interpreters remains limited. While artificial intelligence (AI)-based translation tools have been explored in outpatient and nonacute care settings, studies evaluating their use in acute care environments such as the ICU remain limited. To address this gap, we developed AI-TransLATE (AI-enhanced Transition to Language-Agnostic Transcultural Engagement), a speech-based translation tool designed for multilingual communication in critical care settings.This study aimed to assess the interpretation quality of AI-TransLATE across four languages-Spanish, Chinese, Arabic, and Turkish-using scripted ICU scenarios.We created ICU communication scripts and recorded bilingual research team members simulating clinical interactions. Two independent bilingual evaluators assessed interpretation quality using a 5-point Likert scale across fluency, adequacy, meaning preservation, and severity of errors. Clarity and cultural appropriateness were also rated. Percentage agreement was used to assess interrater agreement.AI-TransLATE achieved acceptable composite scores (≥16/20) across all languages. 
Spanish and Turkish performed consistently well; Chinese and Arabic showed variability due to omissions and terminology errors.AI-TransLATE shows promise as a clinical communication tool, but further evaluation in real-world, unscripted ICU settings is needed.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1917-1924"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12737978/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145769552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-01Epub Date: 2025-09-25DOI: 10.1055/a-2707-2959
Yi-Ling Chiang, Kuei-Fen Yang, Pin-Chih Su, Shang-Feng Tsai, Kai-Li Liang
This study aimed to leverage a large language model (LLM) to improve the efficiency and thoroughness of medical record documentation. It focused on aiding clinical staff in creating structured summaries with the help of an LLM and on assessing the quality of these artificial intelligence (AI)-proposed records in comparison to those produced by physicians. The strategy involved assembling a team of specialists, including data engineers, physicians, and medical information experts, to develop guidelines for medical summaries produced by an LLM (Llama 3.1), all under the direction of policymakers at the study hospital. The LLM proposes admission notes, weekly summaries, and discharge notes for physicians to review and edit. The validated Physician Documentation Quality Instrument (PDQI-9) was used to compare the quality of physician-authored and LLM-generated medical records. No significant difference was observed in total PDQI-9 scores between physician-drafted and AI-created weekly summaries and discharge notes (p = 0.129 and 0.873, respectively). However, total PDQI-9 scores differed significantly between physician and AI admission notes (p = 0.004), as did several item-level scores. After deployment, the note-assist function gradually gained popularity in our hospital. The LLM shows considerable promise for enhancing the efficiency and quality of medical record summaries. For the successful integration of LLM-assisted documentation, regular quality assessments, continuous support, and training are essential. Implementing the LLM can allow clinical staff to concentrate on more valuable tasks, potentially enhancing overall health care delivery.
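The abstract does not name the statistical test behind its p-values, so as an illustrative (not the authors') approach, a nonparametric comparison of total PDQI-9 scores between two independent groups of notes can be sketched as a permutation test:

```python
import random

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference in mean totals
    between two independent groups of scores (e.g., PDQI-9 totals for
    physician-drafted vs. AI-drafted notes). Returns the fraction of
    random relabelings whose mean difference is at least as extreme
    as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # random relabeling of which group each note belongs to
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return hits / n_iter
```

With ordinal instrument totals and modest sample sizes, a permutation (or rank-based) test avoids the normality assumption of a t-test; the study itself may have used a different procedure.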
{"title":"Leveraging a Large Language Model for Streamlined Medical Record Generation: Implications for Health Care Informatics.","authors":"Yi-Ling Chiang, Kuei-Fen Yang, Pin-Chih Su, Shang-Feng Tsai, Kai-Li Liang","doi":"10.1055/a-2707-2959","DOIUrl":"10.1055/a-2707-2959","url":null,"abstract":"<p><p>This study aimed to leverage a large language model (LLM) to improve the efficiency and thoroughness of medical record documentation. This study focused on aiding clinical staff in creating structured summaries with the help of an LLM and assessing the quality of these artificial intelligence (AI)-proposed records in comparison to those produced by doctors.This strategy involved assembling a team of specialists, including data engineers, physicians, and medical information experts, to develop guidelines for medical summaries produced by an LLM (Llama 3.1), all under the direction of policymakers at the study hospital. The LLM proposes admission, weekly summaries, and discharge notes for physicians to review and edit. A validated Physician Documentation Quality Instrument (PDQI-9) was used to compare the quality of physician-authored and LLM-generated medical records.The results showed no significant difference was observed in the total PDQI-9 scores between the physician-drafted and AI-created weekly summaries and discharge notes (<i>p</i> = 0.129 and 0.873, respectively). However, there was a significant difference in the total PDQI-9 scores between the physician and AI admission notes (<i>p</i> = 0.004). Furthermore, there were significant differences in item levels between physicians' and AI notes. After deploying the note-assisted function in our hospital, it gradually gained popularity.LLM shows considerable promise for enhancing the efficiency and quality of medical record summaries. For the successful integration of LLM-assisted documentation, regular quality assessments, continuous support, and training are essential. 
Implementing LLM can allow clinical staff to concentrate on more valuable tasks, potentially enhancing overall health care delivery.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1493-1506"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12571553/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145151547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-01Epub Date: 2025-11-07DOI: 10.1055/a-2720-5448
Andrew J King, Christopher M Horvat, David Schlessinger, Harry Hochheiser, Kevin V Bui, Jason N Kennedy, Emily B Brant, James Shalaby, Derek C Angus, Vincent X Liu, Christopher W Seymour
Sepsis is a heterogeneous syndrome with high morbidity and mortality. Despite extensive clinical trials, therapeutic progress remains limited, in part due to the absence of actionable sepsis subtypes. This study aimed to evaluate the feasibility of using HL7 Fast Healthcare Interoperability Resources (FHIR) for prerandomization sepsis subtyping to support clinical trial enrichment across multiple health systems. Data from 765 encounters at two academic medical centers were analyzed. FHIR-based resources were extracted from both research data warehouses (RDWs) and electronic health records (EHRs). A Python implementation of the Sepsis Endotyping in Emergency Care (SENECA) sepsis subtyping algorithm was developed to query and assemble FHIR resources for subtype classification. Open-source Python code for the SENECA algorithm is provided on GitHub. Experiments demonstrated: (1) successful sepsis subtyping across both health systems; (2) concordance between the original R implementation and the new Python implementation; and (3) discrepancies when comparing RDW-derived versus EHR-integrated FHIR APIs, primarily due to query and filtering limitations. Missing data were common and influenced by both clinical practice and FHIR API constraints. We provide five recommendations to address these challenges. FHIR can support multi-institutional sepsis subtyping and trial enrichment, though technical and governance challenges remain.
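The authors' actual implementation and query logic are in their GitHub repository; as a hedged sketch of the general pattern it describes, assembling subtype features from a FHIR searchset Bundle of Observation resources might look like the following (the LOINC codes here, serum lactate and creatinine, are illustrative stand-ins, not the authoritative SENECA variable set):

```python
import json

# Illustrative feature map: LOINC code -> feature name. The real SENECA
# variable set is larger and defined in the authors' repository.
FEATURE_CODES = {"2524-7": "lactate", "2160-0": "creatinine"}

def extract_features(bundle: dict) -> dict:
    """Collect a numeric value per mapped LOINC code from the Observation
    resources in a FHIR searchset Bundle (later entries overwrite earlier
    ones, so sort order determines which value wins)."""
    features = {}
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Observation":
            continue
        for coding in resource.get("code", {}).get("coding", []):
            name = FEATURE_CODES.get(coding.get("code"))
            quantity = resource.get("valueQuantity", {})
            if name and "value" in quantity:
                features[name] = quantity["value"]
    return features

# A minimal Bundle of the shape GET [base]/Observation?patient=...&code=... returns
bundle = json.loads("""{
  "resourceType": "Bundle", "type": "searchset",
  "entry": [{"resource": {"resourceType": "Observation",
    "code": {"coding": [{"system": "http://loinc.org", "code": "2524-7"}]},
    "valueQuantity": {"value": 3.1, "unit": "mmol/L"}}}]
}""")
print(extract_features(bundle))  # -> {'lactate': 3.1}
```

The abstract's RDW-versus-EHR discrepancies arise one layer above this: two servers can return differently filtered Bundles for the same logical query, so identical parsing code yields different feature sets.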
{"title":"A FHIR-Powered Python Implementation of the SENECA Algorithm for Sepsis Subtyping.","authors":"Andrew J King, Christopher M Horvat, David Schlessinger, Harry Hochheiser, Kevin V Bui, Jason N Kennedy, Emily B Brant, James Shalaby, Derek C Angus, Vincent X Liu, Christopher W Seymour","doi":"10.1055/a-2720-5448","DOIUrl":"10.1055/a-2720-5448","url":null,"abstract":"<p><p>Sepsis is a heterogeneous syndrome with high morbidity and mortality. Despite extensive clinical trials, therapeutic progress remains limited, in part due to the absence of actionable sepsis subtypes.This study aimed to evaluate the feasibility of using HL7 Fast Healthcare Interoperability Resources (FHIR) for prerandomization sepsis subtyping to support clinical trial enrichment across multiple health systems.Data from 765 encounters at two academic medical centers were analyzed. FHIR-based resources were extracted from both research data warehouses (RDWs) and electronic health records (EHRs). A Python implementation of the Sepsis Endotyping in Emergency Care (SENECA) sepsis subtyping algorithm was developed to query and assemble FHIR resources for subtype classification.Open-source Python code for the SENECA algorithm is provided on GitHub. Experiments demonstrated: (1) successful sepsis subtyping across both health systems; (2) concordance between the original R implementation and the new Python implementation; and (3) discrepancies when comparing RDW-derived versus EHR-integrated FHIR APIs, primarily due to query and filtering limitations. Missing data were common and influenced by both clinical practice and FHIR API constraints. 
We provide five recommendations to address these challenges.FHIR can support multi-institutional sepsis subtyping and trial enrichment, though technical and governance challenges remain.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1588-1594"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594559/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145472394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}