Pub Date: 2025-10-01 | Epub Date: 2025-11-24 | DOI: 10.1055/a-2751-1896
Rhiannon Doherty, Abby Swanson Kazley, Eva Karp, Jennifer Ferrand
For every 30 minutes a provider spends seeing a patient, they spend 36 minutes charting in the electronic health record (EHR). Clinical documentation burden in U.S. health care is driven by increasing administrative tasks associated with EHRs, regulatory demands, and workflow inefficiencies. This burden contributes to increased cognitive load, fragmented care, and staff burnout. No comprehensive conceptual framework guides researchers addressing these challenges. This study aimed to develop a conceptual framework clarifying the interplay between psychological factors, technology, and documentation attributes (usability, effort, and perceived burden) among health care providers. Data were collected from a cross-sectional survey of a convenience sample of hospital- and ambulatory-based physicians, advanced practice registered nurses, and physician assistants, using a newly constructed questionnaire that incorporated elements from well-established instruments. Descriptive statistics and exploratory factor analysis were used to identify significant findings and develop the preliminary Clinical Documentation Burden Framework. The analysis revealed three main factors underpinning clinical documentation burden: poor usability, perceived task value, and excessive mental exertion. These factors were significantly correlated with professional dissonance (PD) and burnout, underscoring the complex interplay between time requirements, design challenges, task engagement, and cognitive load. The resulting conceptual framework highlights the importance of aligning documentation tasks with provider values to mitigate burden. The study offers new insights into the complex phenomenon of documentation burden affecting health care providers by incorporating key psychological factors, and the conceptual framework provides a preliminary foundation for understanding this multifaceted problem. As in prior burnout research, conceptual clarity is key to creating shared definitions and a dedicated measurement instrument to support effective interventions. Given that the sample was predominantly advanced practice providers and subgroup comparisons were underpowered, the framework should be interpreted as preliminary. This new appreciation of the dimensionality of documentation burden expands the potential levers available to alleviate operational strain and reduce PD and burnout.
{"title":"A Preliminary Conceptual Framework of Clinical Documentation Burden: Exploratory Factor Analysis Investigating Usability, Effort, and Perceived Burden among Health Care Providers.","authors":"Rhiannon Doherty, Abby Swanson Kazley, Eva Karp, Jennifer Ferrand","doi":"10.1055/a-2751-1896","DOIUrl":"10.1055/a-2751-1896","url":null,"abstract":"<p><p>For every 30 minutes a provider spends seeing a patient, they spend 36 minutes charting in the electronic health record (EHR). Clinical documentation burden in U.S. health care is driven by increasing administrative tasks associated with EHRs, regulatory demands, and workflow inefficiencies. This burden contributes to increased cognitive load, fragmented care, and staff burnout. No comprehensive conceptual framework guides researchers addressing these challenges.This study aimed to develop a conceptual framework clarifying the interplay between psychological factors, technology, and documentation attributes-usability, effort, and perceived burden-among health care providers.Data were collected from a cross-sectional survey using a convenience sample of hospital- and ambulatory-based physicians, advanced practice registered nurses, and physician assistants. A newly constructed questionnaire was used, incorporating elements from well-established instruments. Descriptive and exploratory factor analysis was performed to identify significant findings and develop the preliminary Clinical Documentation Burden Framework.The analysis revealed three main factors underpinning clinical documentation burden: Poor usability, perceived task value, and excessive mental exertion. These factors were significantly correlated with professional dissonance (PD) and burnout, underscoring the complex interplay between time requirements, design challenges, task engagement, and cognitive load. The resulting conceptual framework highlights the importance of aligning documentation tasks with provider values to mitigate burden.The study offers new insights into the complex phenomenon of documentation burden affecting health care providers by incorporating key psychological factors. This conceptual framework provides a preliminary foundation for understanding this multifaceted problem. Like prior burnout research, conceptual clarity is key to creating shared definitions and a dedicated measurement instrument to support effective interventions. Given that the sample was predominantly advanced practice providers with underpowered subgroup comparisons, the framework should be interpreted as preliminary. This new appreciation of the dimensionality of documentation burden expands the potential levers available to alleviate operational strain and reduce PD and burnout.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1815-1827"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12700715/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145596932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-10-30 | DOI: 10.1055/a-2717-3119
Susanne Dugas-Breit, Christian Menzer, Christian U Blank, Matteo S Carlino, Christoph U Lehmann, Jessica C Hassel, Martin Dugas
Rare B-rapidly accelerated fibrosarcoma gene (BRAF) mutations in advanced melanoma and other malignancies represent a significant clinical challenge because evidence on the efficacy of targeted therapy is sparse. Conventional genomic databases do not integrate detailed outcome data on treatments for patients with these mutations, requiring innovative informatics approaches. For the use case of patients with rare BRAF-mutated melanoma, we developed a "Treatment Outcome Tool," a web-based database on rare cancers that aggregates anonymized, expert-validated clinical data. Unstructured interviews with dermato-oncologic experts guided the design, ensuring that the system allows users to query specific or combined rare BRAF mutations and retrieve key outcome measures, such as progression-free survival, overall response rate, and disease control rate with BRAF and/or mitogen-activated protein kinase kinase (MEK) inhibition. Data are collected via a structured input form. After rigorous review and quality assurance by dedicated experts, data are transferred to an externally accessible R/Shiny platform, where they can be accessed. The usability of the database was then evaluated with the System Usability Scale (SUS), completed by contributing dermato-oncologic experts. The first productive database version was implemented in October 2024. As of May 2025, the database contained data from 130 patients with 23 BRAF mutations. Evaluation of the "Treatment Outcome Tool" by 14 international dermato-oncologic experts yielded a median SUS score of 92.5, confirming excellent usability. Our database fills a critical gap in personalized oncology therapy by directly correlating rare BRAF mutation profiles with treatment outcomes. The tool demonstrated excellent usability and was found to be of high clinical value. The generic informatics framework we chose has the potential to be expanded to other rare tumors, ultimately enhancing evidence-based clinical practice and fostering international collaboration in cancer research.
{"title":"Development and Evaluation of a Web-Based Outcome Database for Advanced Melanoma with Rare BRAF Mutations.","authors":"Susanne Dugas-Breit, Christian Menzer, Christian U Blank, Matteo S Carlino, Christoph U Lehmann, Jessica C Hassel, Martin Dugas","doi":"10.1055/a-2717-3119","DOIUrl":"10.1055/a-2717-3119","url":null,"abstract":"<p><p>Rare <i>B-rapidly accelerated fibrosarcoma gene</i> (<i>BRAF</i>) mutations in advanced melanoma, and other malignancies, represent a significant clinical challenge due to sparse evidence on the efficiency of targeted therapy. Conventional genomic databases do not integrate detailed outcome data on treatments for patients with these mutations, requiring innovative informatics approaches.For the use case of patients with rare <i>BRAF</i>-mutated melanoma, we developed a \"Treatment Outcome Tool\" as a web-based database on rare cancers that aggregates anonymized, expert-validated clinical data. Unstructured interviews with dermato-oncologic experts guided the design, ensuring that the system allows users to query specific or combined rare <i>BRAF</i> mutations and retrieve key outcome measures, such as progression-free survival, overall response rate, and disease control rate with BRAF and/or mitogen-activated proteinkinase kinase (MEK) inhibition. Data are collected via a structured input form. After rigorous review and quality assurance by dedicated experts, data are then transferred to an externally accessible R/Shiny platform, where they can be assessed. The usability of the developed database was then evaluated by the System Usability Scale (SUS) of contributing dermato-oncologic experts.The first productive database version was implemented in October 2024. As of May 2025, the database contained data from 130 patients with 23 <i>BRAF</i> mutations. Evaluation of the \"Treatment Outcome Tool\" by 14 international dermato-oncologic experts yielded a median SUS score of 92.5, confirming excellent usability.Our database fills a critical gap in personalized oncology therapy by directly correlating rare <i>BRAF</i> mutation profiles with treatment outcomes. Our tool had usability and was found to be of high clinical value. The generic informatics framework chosen by us has the potential to be expanded to other rare tumors, ultimately enhancing evidence-based clinical practice and fostering international collaboration in cancer research.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1541-1549"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575071/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-06-06 | DOI: 10.1055/a-2628-8408
Elvan Burak Verdi, Oguz Akbilgic
This study aimed to evaluate and compare the diagnostic responses generated by two artificial intelligence (AI) models developed 54 years apart, and to encourage physicians to explore the use of large language models (LLMs) such as GPT-4o in clinical practice. A clinical case of metabolic acidosis was presented to GPT-4o, and the model's diagnostic reasoning, data interpretation, and management recommendations were recorded. These outputs were then compared with the responses from Schwartz's 1970 AI model, built as a decision-tree algorithm in Conversational Algebraic Language (CAL). Both models were given the same patient data to ensure a fair comparison. GPT-4o generated an advanced analysis of the patient's acid-base disturbance, correctly identifying likely causes and suggesting relevant diagnostic tests and treatments, and it provided a detailed, narrative explanation of the metabolic acidosis. The 1970 CAL model, while correctly recognizing the metabolic acidosis and flagging implausible inputs, was constrained by its rule-based design: it offered only basic stepwise guidance and required sequential prompts for each data point, reflecting a limited capacity to handle complex or unanticipated information. GPT-4o, by contrast, integrated the data more holistically, although it occasionally ventured beyond the provided information. This comparison illustrates substantial advances in AI capabilities over five decades. GPT-4o's performance demonstrates the transformative potential of modern LLMs in clinical decision-making, showcasing the ability to synthesize complex data and assist diagnosis without specialized training, yet it still requires further validation, rigorous clinical trials, and adaptation to clinical contexts. Although innovative for its era and offering certain advantages over GPT-4o, the rule-based CAL system had technical limitations. Rather than judging one as simply "better," this study provides perspective on how far AI in medicine has progressed while acknowledging that current AI tools remain supplements to, not replacements for, physician judgment.
{"title":"Comparing the Performances of a 54-Year-Old Computer-Based Consultation to ChatGPT-4o.","authors":"Elvan Burak Verdi, Oguz Akbilgic","doi":"10.1055/a-2628-8408","DOIUrl":"10.1055/a-2628-8408","url":null,"abstract":"<p><p>This study aimed to evaluate and compare the diagnostic responses generated by two artificial intelligence (AI) models developed 54 years apart, and encourage physicians to explore the use of large language models (LLMs) like GPT-4o in clinical practice.A clinical case of metabolic acidosis was presented to GPT-4o, and the model's diagnostic reasoning, data interpretation, and management recommendations were recorded. These outputs were then compared with the responses from Schwartz's 1970 AI model built with a decision-tree algorithm using Conversational Algebraic Language (CAL). Both models were given the same patient data to ensure a fair comparison.GPT-4o generated an advanced analysis of the patient's acid-base disturbance, correctly identifying likely causes and suggesting relevant diagnostic tests and treatments. It provided a detailed, narrative explanation of the metabolic acidosis. The 1970 CAL model, while correctly recognizing the metabolic acidosis and flagging implausible inputs, was constrained by its rule-based design. CAL offered only basic stepwise guidance and required sequential prompts for each data point, reflecting a limited capacity to handle complex or unanticipated information. GPT-4o, by contrast, integrated the data more holistically, although it occasionally ventured beyond the provided information.This comparison illustrates substantial advances in AI capabilities over five decades. GPT-4o's performance demonstrates the transformative potential of modern LLMs in clinical decision-making, showcasing abilities to synthesize complex data and assist diagnosis without specialized training, yet necessitating further validation, rigorous clinical trials, and adaptation to clinical contexts. Although innovative for its era and offering certain advantages over GPT-4o, the rule-based CAL system had technical limitations. Rather than viewing one as simply \"better,\" this study provides perspective on how far AI in medicine has progressed while acknowledging that current AI tools remain supplements to-not replacements for-physician judgment.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1627-1636"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594563/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-10-24 | DOI: 10.1055/a-2702-1770
April Barnado, Ryan P Moore, Henry J Domenico, Emily Grace, Sarah Green, Ashley Suh, Nikol Nikolova, Bryan Han, Allison B McCoy
Our objective was to identify barriers to implementing a custom clinical decision support (CDS) alert used to randomize individuals into a pragmatic study, specifically individuals with a positive antinuclear antibody (ANA) test. We integrated a validated logistic regression model into the electronic health record to predict the risk of developing autoimmune disease for individuals with a positive ANA (titer ≥ 1:80). A custom CDS alert was created to randomize eligible individuals into a pragmatic study evaluating whether the risk model reduces time to autoimmune disease diagnosis. The custom CDS alert runs silently in the background and is not visible to providers. Individuals were randomized to either an intervention or a control arm. In the intervention arm, the study team reviewed risk model results, notified providers of high-risk scores, and offered expedited rheumatology referrals to high-risk individuals in addition to standard of care; the control arm received standard care only. The study team accessed a daily Epic report containing randomization assignments and model variables. Starting in June 2023, the risk model assessed 3,961 individuals and has successfully randomized 2,105 individuals to date. Technical challenges that prevented the custom CDS alert from firing included an unanticipated change in the laboratory testing vendor and in reporting due to a broken laboratory machine, followed by a change in the laboratory test name. This case report showcases the successful implementation of a laboratory-based custom CDS alert to randomize individuals for a pragmatic study. This approach made our study feasible across a large health care system. Key lessons learned included the importance of close collaboration with the laboratory team and a thorough understanding of the laboratory testing, workflow, and reporting to ensure successful execution of the laboratory-based custom CDS alert.
{"title":"A Case Report in Using a Laboratory-Based Decision Support Alert for Research Enrollment and Randomization.","authors":"April Barnado, Ryan P Moore, Henry J Domenico, Emily Grace, Sarah Green, Ashley Suh, Nikol Nikolova, Bryan Han, Allison B McCoy","doi":"10.1055/a-2702-1770","DOIUrl":"10.1055/a-2702-1770","url":null,"abstract":"<p><p>Our objective was to identify barriers to implementing a custom clinical decision support (CDS) alert to randomize individuals in a pragmatic study, specifically those with a positive antinuclear antibody (ANA) test.We integrated a validated logistic regression model into the electronic health record to predict the risk of developing autoimmune disease for individuals with a positive ANA (titer ≥ 1:80). A custom CDS alert was created to randomize eligible individuals into a pragmatic study evaluating whether the risk model reduces time to autoimmune disease diagnosis. The custom CDS alert runs silently in the background and is not visible to providers. Individuals were randomized to either an intervention or control arm. In the intervention arm, the study team reviewed risk model results, notified providers of high-risk scores, and offered expedited rheumatology referrals to high-risk individuals in addition to standard of care. The control arm received standard care only. The study team accessed a daily Epic report containing randomization assignments and model variables.Starting in June 2023, the risk model assessed 3,961 individuals and successfully randomized 2,105 individuals to date. Technical challenges that prevented the custom CDS alert from firing included an unanticipated change in the laboratory testing vendor and reporting due to a broken laboratory machine, followed by a change in the laboratory test name.This case report showcases the successful implementation of a laboratory-based custom CDS alert to randomize individuals for a pragmatic study. This approach enabled our study to be feasible across a large health care system. Key lessons learned included the importance of close collaboration with the laboratory team and thorough understanding of the laboratory testing, workflow, and reporting to ensure successful execution of the laboratory-based custom CDS alert.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1439-1444"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12552065/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145369136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-09-10 | DOI: 10.1055/a-2698-0841
Liem M Nguyen, Amrita Sinha, Adam Dziorny, Daniel Tawfik
Time spent in the electronic health record (EHR) is an important measure of clinical activity. Vendor-derived EHR use metrics may not correspond to actual EHR experience. Raw EHR audit logs enable customized EHR use metrics, but translating discrete timestamps into time intervals is challenging, and there are insufficient data available to quantify inactivity between audit log timestamps. This study aimed to develop and validate a computer vision-based model that (1) classifies EHR tasks and identifies task changes and (2) quantifies active-use time from clinician session screen recordings of EHR use. We generated 111 minutes of simulated workflow in an Epic sandbox environment for development and training and collected 86 minutes of real-world clinician session recordings for validation. The model used YOLOv8, Tesseract OCR, and a predefined dictionary to perform task classification and task change detection. We developed a frame comparison algorithm to delineate activity from inactivity and thus quantify active time. We compared the model's output of task classification, task change identification, and active time quantification against clinician annotations, and then performed a post hoc sensitivity analysis to identify the model's accuracy when using optimal parameters. Our model classified time spent in various high-level tasks with 94% accuracy. It detected task changes with 90.6% sensitivity. Active-use quantification varied by task, with lower mean absolute percentage error (MAPE) for tasks with clear visual changes (e.g., Results Review) and higher MAPE for tasks with subtle interactions (e.g., Note Entry). The post hoc sensitivity analysis revealed improvement in active-use quantification with a lower inactivity threshold than initially used. A computer vision approach to identifying tasks performed and measuring time spent in the EHR is feasible. Future work should refine task-specific thresholds and validate across diverse settings. This approach enables defining optimal context-sensitive thresholds for quantifying clinically relevant active EHR time using raw audit log data.
{"title":"Identifying Electronic Health Record Tasks and Activity Using Computer Vision.","authors":"Liem M Nguyen, Amrita Sinha, Adam Dziorny, Daniel Tawfik","doi":"10.1055/a-2698-0841","DOIUrl":"10.1055/a-2698-0841","url":null,"abstract":"<p><p>Time spent in the electronic health record (EHR) is an important measure of clinical activity. Vendor-derived EHR use metrics may not correspond to actual EHR experience. Raw EHR audit logs enable customized EHR use metrics, but translating discrete timestamps to time intervals is challenging. There are insufficient data available to quantify inactivity between audit log timestamps.This study aimed to develop and validate a computer vision-based model that (1) classifies EHR tasks and identifies task changes and (2) quantifies active-use time from clinician session screen recordings of EHR use. This study also aimed to develop and validate a computer vision-based model that (1) classifies EHR tasks and identifies task changes and (2) quantifies active-use time from clinician session screen recordings of EHR use.We generated 111 minutes of simulated workflow in an Epic sandbox environment for development and training and collected 86 minutes of real-world clinician session recordings for validation. The model used YOLOv8, Tesseract OCR, and a predefined dictionary to perform task classification and task change detection. We developed a frame comparison algorithm to delineate activity from inactivity and thus quantify active time. We compared the model's output of task classification, task change identification, and active time quantification against clinician annotations. We then performed a post hoc sensitivity analysis to identify the model's accuracy when using optimal parameters.Our model classified time spent in various high-level tasks with 94% accuracy. It detected task changes with 90.6% sensitivity. Active-use quantification varied by task, with lower mean absolute percentage error (MAPE) for tasks with clear visual changes (e.g., Results Review) and higher MAPE for tasks with subtle interactions (e.g., Note Entry). A post hoc sensitivity analysis revealed improvement in active-use quantification with a lower threshold of inactivity than initially used.A computer vision approach to identifying tasks performed and measuring time spent in the EHR is feasible. Future work should refine task-specific thresholds and validate across diverse settings. This approach enables defining optimal context-sensitive thresholds for quantifying clinically relevant active EHR time using raw audit log data.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1350-1358"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534128/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145034135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-06-30 | DOI: 10.1055/a-2644-7250
Wendi Zhao, Xuetao Wang, Kevin Afra
Despite the benefits of clinical decision support (CDS), concerns about potential risks have arisen amid increasing reports of CDS malfunctions. Without objective and standardized methods to evaluate CDS in the post-live stage, CDS performance in a dynamic healthcare environment remains a black box from the user's perspective. In this study, we proposed a comprehensive framework to identify and evaluate post-live CDS malfunctions from the perspective of healthcare settings. We developed a two-phase framework to identify and evaluate post-live CDS system malfunctions: (1) real-time feedback from users in healthcare settings and (2) systematic validation using databases, comprising fundamental data flow validation and knowledge and rules validation. Identity, completeness, plausibility, and consistency across locations and time patterns were included as measures for systematic validation. We applied this framework to a commercial CDS system in 14 acute care facilities in Canada over a 2-year period. During this study, seven types of malfunctions were identified. The overall rate of malfunctions was below 2%. In addition, an increase in CDS malfunctions was found during electronic health record upgrade and implementation periods. This framework can be used to comprehensively evaluate CDS performance in healthcare settings. It provides objective insight into the extent of CDS issues, with the ability to capture low-prevalence malfunctions. Applying this framework to CDS evaluation can help improve CDS performance from the perspective of healthcare settings.
{"title":"A Two-Phase Framework Leveraging User Feedback and Systemic Validation to Improve Post-Live Clinical Decision Support.","authors":"Wendi Zhao, Xuetao Wang, Kevin Afra","doi":"10.1055/a-2644-7250","DOIUrl":"10.1055/a-2644-7250","url":null,"abstract":"<p><p>Despite the benefits of clinical decision support (CDS), concerns of potential risks arise amidst increasing reports of CDS malfunctions. Without objective and standardized methods to evaluate CDS in the post-live stage, CDS performance in a dynamic healthcare environment remains a black box from the user's perspective. In this study, we proposed a comprehensive framework to identify and evaluate post-live CDS malfunctions from the perspective of healthcare settings.We developed a two-phase framework to identify and evaluate post-live CDS system malfunctions: (1) real-time feedback from users in healthcare settings; (2) systematic validation through the use of databases that involve fundamental data flow validation and knowledge and rules validation. Identity, completeness, plausibility, and consistency across locations and time patterns were included as measures for systematic validation. We applied this framework to a commercial CDS system in 14 acute care facilities in Canada in a 2-year period.During this study, seven types of malfunctions were identified. The general rate of malfunctions was below 2%. In addition, an increase in CDS malfunctions was found during the electronic health record upgrade and implementation periods.This framework can be used to comprehensively evaluate CDS performance for healthcare settings. It provides objective insights into the extent of CDS issues, with the ability to capture low-prevalence malfunctions. Applying this framework to CDS evaluation can help improve CDS performance from the perspective of healthcare settings.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1720-1727"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618151/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144530619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | DOI: 10.1055/a-2729-9693
Yifan Yang, Silvio Fernandes-Junior, Thipkanok Wongphothiphan, Xu Zhang, Jeffrey Hoffman, Jessica Bowman, Yungui Huang
Enhancing the efficiency of family-centered rounds (FCRs) while ensuring timely patient care has been a focus of study over the past decade. We employed an Operations Research technique (i.e., simulation) to identify opportunities for improving rounding efficiency on our inpatient cardiology unit at Nationwide Children's Hospital (NCH). Through simulation of schedule-based rounds, our aims were to reduce the length of stay (LOS) and subsequent healthcare costs by (1) prioritizing rounds for patients needing time-sensitive care decisions or those likely ready to be discharged, and (2) enhancing participation from both families and bedside nurses during rounds. Data were collected through direct observation of rounding activities. We then conducted simulations to evaluate the effect of various rounding paths on efficiency, measured in terms of time and context-dependent penalties. Our simulations indicated a tradeoff between minimizing the risk of delayed rounding and the amount of time spent on rounds. Optimizing rounds for 20 patients reduced cumulative patient waiting time and associated penalty scores. Based on prior research linking earlier clinical interventions to improved efficiency, this approach is estimated to reduce LOS by 166.08 hours and costs by approximately $3,460 per rotation. By simulating the hospital rounding process on an inpatient pediatric cardiology unit, we demonstrated that prioritized rounding could reduce both LOS and associated costs. Despite a potential increase in total rounding time, which can be managed by clinical decision-makers, we recommend utilizing schedule-based FCRs built on prioritization techniques that enhance rounding efficiency while minimizing risk and cost.
{"title":"Enhancing Efficiency, Reducing Length of Stay and Costs in Pediatric Cardiology Rounds Through Simulation-Based Optimization.","authors":"Yifan Yang, Silvio Fernandes-Junior, Thipkanok Wongphothiphan, Xu Zhang, Jeffrey Hoffman, Jessica Bowman, Yungui Huang","doi":"10.1055/a-2729-9693","DOIUrl":"10.1055/a-2729-9693","url":null,"abstract":"<p><p>Enhancing the efficiency of family-centered rounds (FCRs) while ensuring timely patient care has been a focus of study over the past decade. We employed an Operations Research technique (i.e., simulation) to identify opportunities for improving rounding efficiency on our inpatient cardiology unit at Nationwide Children's Hospital (NCH).Through simulation of schedule-based rounds, our aims were to reduce the length of stay (LOS) and subsequent healthcare costs via (1) prioritizing rounds for patients needing time-sensitive care decisions or those likely ready to be discharged, and (2) enhancing participation from both families and bedside nurses during rounds.Data were collected through direct observation of rounding activities. We then conducted simulations to evaluate the effect of various rounding paths on efficiency, measured in terms of time and penalties depending on context.Our simulations indicated a tradeoff between minimizing the risk of delayed rounding and the amount of time spent on rounds. Optimizing rounds for 20 patients reduced cumulative patient waiting time and associated penalty scores. Based on prior research linking earlier clinical interventions to improved efficiency, this approach is estimated to reduce LOS by 166.08 hours and cost by approximately $3,460 per rotation.By simulating the hospital rounding processes on an inpatient pediatric cardiology unit, we demonstrated that prioritized rounding could reduce both LOS and associated costs. Despite a potential increase in total rounding time, which can be managed by clinical decision-makers, we recommend utilizing scheduling-based FCRs based on prioritization techniques that enhance rounding efficiency while minimizing risk and cost.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1761-1770"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12634208/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145565955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-09-19 | DOI: 10.1055/a-2703-3735
Philipp Haessner, Jessica M Ray, Megan E Gregory
Patient portals are increasingly used to support digital health engagement, but little is known about how caregivers used patient portals before, during, and after the coronavirus disease 2019 (COVID-19) pandemic. This study aimed to examine longitudinal changes in caregiver engagement with pediatric patient portals, focusing on logins, session duration, messaging behaviors, and provider response times across prepandemic, pandemic, and postpandemic periods. We conducted a retrospective cohort study using deidentified MyChart data from caregivers of children aged 0 through 11 who received care at four pediatric primary care clinics in the Southeastern United States between March 2018 and March 2023. Generalized linear models were used to compare portal engagement across prepandemic, pandemic, and postpandemic periods. Outcomes included login frequency, session duration, message volume, message types and recipients, and provider response times, all normalized per user per year. Among 478 caregivers, portal logins and session duration increased significantly during and after the pandemic, with 16-fold increases postpandemic compared with prepandemic (p < 0.001). Message volume declined substantially during the pandemic (p < 0.001) but returned to baseline levels. Provider response times shortened during the pandemic and remained lower than prepandemic levels (p = 0.032). Messaging to primary care declined and did not recover fully, while specialty care messaging increased across all periods. Appointment and medical advice messages declined during the pandemic, with only the latter rebounding. Customer service inquiries rose significantly and remained elevated, and medication renewal messages increased markedly postpandemic. The COVID-19 pandemic initiated lasting changes in caregivers' engagement with pediatric patient portals, including deeper engagement, quicker provider responses, and shifts in messaging patterns. Findings can be used to guide and optimize caregiver-centered digital health strategies in pediatrics. Future work should explore potential provider burnout from increased portal workload, incorporate multicenter studies, and link portal use to clinical characteristics to better inform digital health interventions.
{"title":"Changes in Pediatric Portal Use Among Caregivers Before, During, and After the Coronavirus Disease 2019 Pandemic: A Longitudinal Study.","authors":"Philipp Haessner, Jessica M Ray, Megan E Gregory","doi":"10.1055/a-2703-3735","DOIUrl":"10.1055/a-2703-3735","url":null,"abstract":"<p><p>Patient portals are increasingly used to support digital health engagement, but little is known about how caregivers used patient portals before, during, and after the coronavirus disease 2019 (COVID-19) pandemic.This study aimed to examine longitudinal changes in caregiver engagement with pediatric patient portals, focusing on logins, session duration, messaging behaviors, and provider response times across prepandemic, pandemic, and postpandemic periods.We conducted a retrospective cohort study using deidentified MyChart data from caregivers of children aged 0 through 11 who received care at four pediatric primary care clinics in the Southeastern United States between March 2018 and March 2023. Generalized linear models were used to compare portal engagement across prepandemic, pandemic, and postpandemic periods. Outcomes included login frequency, session duration, message volume, message types and recipients, and provider response times, all normalized per user per year.Among 478 caregivers, portal logins and session duration increased significantly during and postpandemic, with 16-fold increases postpandemic compared with prepandemic (<i>p</i> < 0.001). Message volume declined substantially during the pandemic (<i>p</i> < 0.001) but returned to baseline levels. Provider response times shortened during the pandemic and remained lower than prepandemic levels (<i>p</i> = 0.032). Messaging to primary care declined and did not recover fully, while specialty care messaging increased across all periods. Appointment and medical advice messages declined during the pandemic, with only the latter rebounding. Customer service inquiries rose significantly and remained elevated, and medication renewal messages increased markedly postpandemic.The COVID-19 pandemic initiated lasting changes in caregivers' engagement with pediatric patient portals, including deeper engagement, quicker provider responses, and shifts in messaging patterns. Findings can be used to guide and optimize caregiver-centered digital health strategies in pediatrics. Future work should explore potential provider burnout from increased portal workload, incorporate multicenter studies, and link portal use to clinical characteristics to better inform digital health interventions.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"1465-1474"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12566923/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145092628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-11-07 | DOI: 10.1055/a-2638-8750
Nida Afzal, Amy D Nguyen, Annie Y S Lau
Falls among adults over 60 are a global health concern, including in Australia. This study aimed to investigate temporospatial fall alert patterns (across time and location) detected by ambient fall detection sensors in three Australian aged care settings, to inform fall prevention strategies. A mixed-methods approach was used to analyze fall alert patterns and fall risks. Ambient fall detection sensors collected data from three care settings (residential aged care facilities [RACFs], retirement villages [RVs], and home dwelling communities [HDCs]; n = 31 households). Quantitative analysis involved fall alert counts and temporospatial analysis by time of day and location. Qualitative insights were obtained through semistructured interviews with 14 older adults and 9 caregivers to understand fall risks. Distinct fall alert patterns emerged. In RACFs, alerts were most frequently recorded in bedrooms at night, linked to physical limitations and cognitive decline. RVs showed a more even distribution of alerts throughout the day, influenced by mobility issues, social activities, and pets affecting sensor accuracy. HDCs had the lowest fall alert rates, with nighttime alerts mainly in bedrooms, reflecting residents' physical status and strong family support. Qualitative data underscored the effect of cognitive and physical impairments in RACFs, of mobility challenges, social activities, and pets in RVs, and of shared living arrangements in HDCs. Fall alert patterns varied across RACFs, RVs, and HDCs, requiring tailored strategies. In RACFs, prevention should focus on nighttime safety with improved monitoring and bed alarms. Medication reviews are important, as many residents take medications affecting balance and cognition, increasing nighttime fall risks. In RVs, mobility programs and sensor accuracy improvements are needed to reduce false alerts from pets or daily activities. In HDCs, where alerts were fewer, more adaptable fall detection technology is needed to address the effect of shared bedrooms at night.
{"title":"A Mixed Methods Exploration of Temporospatial Fall Alert Patterns in Australian Aged Care Settings.","authors":"Nida Afzal, Amy D Nguyen, Annie Y S Lau","doi":"10.1055/a-2638-8750","DOIUrl":"10.1055/a-2638-8750","url":null,"abstract":"<p><p>Falls among adults over 60 are a global health concern, including Australia.This study aimed to investigate temporospatial fall alert patterns-across time and location-detected by ambient fall detection sensors in three Australian aged care settings, to inform fall prevention strategies.A mixed-methods approach was used to analyze fall alert patterns and fall risks. Ambient fall detection sensors collected data from three care settings (residential aged care facilities [RACFs], retirement villages [RVs], and home dwelling communities [HDCs]; <i>n</i> = 31 households). Quantitative analysis involved fall alerts, temporospatial analysis by time of day and location. Qualitative insights were obtained through semistructured interviews with 14 older adults and 9 caregivers to understand fall risks.Distinct fall alert patterns emerged. In RACFs, alerts were most frequently recorded in bedrooms at night, linked to physical limitations and cognitive decline. RVs showed a more even distribution of alerts throughout the day, influenced by mobility issues, social activities, and pets affecting sensor accuracy. HDCs had the lowest fall alert rates, with nighttime alerts mainly in bedrooms, reflecting residents' physical status and strong family support. Qualitative data underscored the effect of cognitive and physical impairments in RACFs, mobility challenges, social activities, and pet influences in RVs, and shared living arrangements in HDCs.Fall alert patterns varied across RACFs, RVs, and HDCs, requiring tailored strategies. In RACFs, prevention should focus on nighttime safety with improved monitoring and bed alarms. Medication reviews are important, as many residents take medications affecting balance and cognition, increasing nighttime fall risks. In RVs, mobility programs and sensor accuracy improvements are needed to reduce false alerts from pets or daily activities. In HDCs, where alerts were fewer, more adaptable fall detection technology is needed to address the effect of shared bedrooms at night.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1664-1676"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594566/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145472411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-10-24 | DOI: 10.1055/a-2701-5819
Qichuan Fang, Jun Liang, Peng Xiang, Min Zhao, Yunfan He, Zijiao Zhang, Haofeng Wan, Yue Hu, Tong Wang, Jianbo Lei
The widespread adoption of health information technology (HIT) has deepened hospitals' reliance on electronic health records (EHRs). However, EHR downtime events, which refer to partial or complete system failures, can disrupt hospital operations and threaten patient safety. Systematic research on HIT downtime events in China remains limited. This study aims to identify and classify reported EHR downtime events in a Chinese hospital, assess their frequency and severity, and propose improvement recommendations and response strategies. We identified and coded downtime events from a Chinese hospital's adverse event reports between January 2018 and August 2022, extracting features such as time, type, and affected scope. Both descriptive and inferential statistics were used for analysis. A total of 204 EHR downtime events were identified, 96.1% (n = 196) of them unplanned. The most frequent categories were medication-related events (n = 52, 25.5%), imaging-related events (n = 35, 17.2%), and accounting- and billing-related events (n = 17, 8.3%). In terms of severity, 76.0% (n = 155) of events were reported as patient care disruptions, while 76.5% (n = 156) occurred within specific departments. In terms of timing, the daily downtime incidence was 0.142 (95% CI: 0.122-0.164) on weekdays versus 0.064 (95% CI: 0.044-0.090) on weekends, an incidence rate ratio (IRR) of 2.22 (95% CI: 1.52-3.25). The downtime incidence during the morning period was 0.0130 per hour (95% CI: 0.0107-0.0156), higher than in other time periods, with IRRs ranging from 1.42 (95% CI: 1.06-1.90) to 22.2 (95% CI: 12.66-38.92). Analysis of EHR downtime events in this Chinese hospital identified three key issues: high-risk downtime in medication processes, peak occurrence on weekdays and during morning hours, and significant clinical care disruptions. Recommended measures include implementing tiered contingency protocols, enhancing technical resilience, and establishing standardized reporting mechanisms.
{"title":"Electronic Health Record Downtime Events of a Hospital: A Retrospective Analysis from Adverse Event Reports.","authors":"Qichuan Fang, Jun Liang, Peng Xiang, Min Zhao, Yunfan He, Zijiao Zhang, Haofeng Wan, Yue Hu, Tong Wang, Jianbo Lei","doi":"10.1055/a-2701-5819","DOIUrl":"10.1055/a-2701-5819","url":null,"abstract":"<p><p>The widespread adoption of health information technology (HIT) has deepened hospitals' reliance on electronic health records (EHR). However, EHR downtime events, which refer to partial or complete system failures, can disrupt hospital operations and threaten patient safety. Systematic research on HIT downtime events in China remains limited.This study aims to identify and classify reported EHR downtime events in a Chinese hospital, assess their frequency and severity, and propose improvement recommendations and response strategies.We identified and coded downtime events based on a Chinese hospital's adverse event reports between January 2018 and August 2022, extracting features such as time, type, and affected scope. Both descriptive and inferential statistics were used for analysis.A total of 204 EHR downtime events were identified, with 96.1% (<i>n</i> = 196) unplanned. The most frequent categories were medication-related events (<i>n</i> = 52, 25.5%), imaging-related events (<i>n</i> = 35, 17.2%), and accounting and billing-related events (<i>n</i> = 17, 8.3%). For severity, 76.0% (<i>n</i> = 155) of events were reported as patient care disruptions, while 76.5% (<i>n</i> = 156) occurred within certain departments. In terms of time, the daily downtime incidence was 0.142 (95% CI: 0.122-0.164) on weekdays versus 0.064 (95% CI: 0.044-0.090) on weekends, with an incidence rate ratio (IRR) of 2.22 (95% CI: 1.52-3.25). The downtime incidence during the morning period was 0.0130 per hour (95% CI: 0.0107-0.0156), which was higher than other time periods, with IRRs ranging from 1.42 (95% CI: 1.06-1.90) to 22.2 (95% CI: 12.66-38.92).In this study, analysis of EHR downtime events in a Chinese hospital identified three key issues: high-risk downtime in medication processes, peak occurrence periods on weekdays and during morning hours, and significant clinical care disruptions. Recommended measures include implementing tiered contingency protocols, enhancing technical resilience, and establishing standardized reporting mechanisms.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":"16 5","pages":"1419-1429"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12552068/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145369076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}