Pub Date: 2025-10-01 | Epub Date: 2025-09-10 | DOI: 10.1055/a-2698-0841
Liem M Nguyen, Amrita Sinha, Adam Dziorny, Daniel Tawfik
Time spent in the electronic health record (EHR) is an important measure of clinical activity. Vendor-derived EHR use metrics may not correspond to actual EHR experience. Raw EHR audit logs enable customized EHR use metrics, but translating discrete timestamps into time intervals is challenging, and insufficient data are available to quantify inactivity between audit log timestamps. This study aimed to develop and validate a computer vision-based model that (1) classifies EHR tasks and identifies task changes and (2) quantifies active-use time from clinician session screen recordings of EHR use. We generated 111 minutes of simulated workflow in an Epic sandbox environment for development and training and collected 86 minutes of real-world clinician session recordings for validation. The model used YOLOv8, Tesseract OCR, and a predefined dictionary to perform task classification and task change detection. We developed a frame comparison algorithm to delineate activity from inactivity and thus quantify active time. We compared the model's output of task classification, task change identification, and active time quantification against clinician annotations. We then performed a post hoc sensitivity analysis to identify the model's accuracy when using optimal parameters. Our model classified time spent in various high-level tasks with 94% accuracy and detected task changes with 90.6% sensitivity. Active-use quantification varied by task, with lower mean absolute percentage error (MAPE) for tasks with clear visual changes (e.g., Results Review) and higher MAPE for tasks with subtle interactions (e.g., Note Entry). A post hoc sensitivity analysis revealed improved active-use quantification with a lower inactivity threshold than initially used. A computer vision approach to identifying tasks performed and measuring time spent in the EHR is feasible. Future work should refine task-specific thresholds and validate across diverse settings. This approach enables defining optimal, context-sensitive thresholds for quantifying clinically relevant active EHR time using raw audit log data.
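The paper does not publish its frame comparison algorithm in detail, but the general idea of delineating activity from inactivity between frames can be sketched as follows. The pixel tolerance, change fraction, and inactivity-gap parameters below are hypothetical illustrations, not the study's actual values:

```python
import numpy as np

def active_seconds(frames, fps=1.0, pixel_tol=10, change_frac=0.001,
                   inactivity_gap=5.0):
    """Estimate active screen time from a sequence of grayscale frames.

    A frame counts as "active" when the fraction of pixels whose intensity
    changed by more than pixel_tol since the previous frame exceeds
    change_frac. Idle gaps shorter than inactivity_gap seconds are folded
    back into active time, analogous to an audit-log inactivity threshold.
    """
    active = 0.0   # accumulated active seconds
    idle = 0.0     # length of the current idle run, in seconds
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(int) - prev.astype(int))
        changed = (diff > pixel_tol).mean()
        if changed > change_frac:
            # Activity resumed: count this frame plus any short idle gap.
            active += 1.0 / fps + min(idle, inactivity_gap)
            idle = 0.0
        else:
            idle += 1.0 / fps
    return active
```

Lowering `inactivity_gap` makes the estimate stricter, which mirrors the post hoc finding that a lower inactivity threshold improved active-use quantification.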
Nguyen LM, Sinha A, Dziorny A, Tawfik D. Identifying Electronic Health Record Tasks and Activity Using Computer Vision. Applied Clinical Informatics. 2025:1350-1358. DOI: 10.1055/a-2698-0841. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534128/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-06-30 | DOI: 10.1055/a-2644-7250
Wendi Zhao, Xuetao Wang, Kevin Afra
Despite the benefits of clinical decision support (CDS), concerns about potential risks have arisen amid increasing reports of CDS malfunctions. Without objective and standardized methods to evaluate CDS in the post-live stage, CDS performance in a dynamic healthcare environment remains a black box from the user's perspective. In this study, we propose a comprehensive framework to identify and evaluate post-live CDS malfunctions from the perspective of healthcare settings. We developed a two-phase framework: (1) real-time feedback from users in healthcare settings, and (2) systematic validation through databases, covering fundamental data flow validation as well as knowledge and rules validation. Identity, completeness, plausibility, and consistency across locations and time patterns were included as measures for systematic validation. We applied this framework to a commercial CDS system in 14 acute care facilities in Canada over a 2-year period. Seven types of malfunctions were identified during the study, with an overall malfunction rate below 2%. In addition, CDS malfunctions increased during electronic health record upgrade and implementation periods. This framework can be used to comprehensively evaluate CDS performance in healthcare settings. It provides objective insight into the extent of CDS issues and can capture low-prevalence malfunctions. Applying this framework to CDS evaluation can help improve CDS performance from the perspective of healthcare settings.
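As one illustration of what the systematic-validation phase might involve, a plausibility check on daily alert volumes could flag days whose counts deviate sharply from a facility's norm. The `flag_anomalies` function and its z-score rule are assumptions for illustration, not the authors' actual validation logic:

```python
from statistics import mean, pstdev

def flag_anomalies(daily_counts, z=3.0):
    """Flag days whose CDS alert volume deviates more than z population
    standard deviations from the facility mean -- a simple plausibility
    check for silent malfunctions. daily_counts maps day -> alert count."""
    counts = list(daily_counts.values())
    mu, sigma = mean(counts), pstdev(counts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [day for day, c in daily_counts.items() if abs(c - mu) > z * sigma]
```

A day with zero alerts amid consistently nonzero volumes would surface here, the kind of low-prevalence, otherwise invisible malfunction the framework aims to capture.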
Zhao W, Wang X, Afra K. A Two-Phase Framework Leveraging User Feedback and Systemic Validation to Improve Post-Live Clinical Decision Support. Applied Clinical Informatics. 2025:1720-1727. DOI: 10.1055/a-2644-7250. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618151/pdf/
Enhancing the efficiency of family-centered rounds (FCRs) while ensuring timely patient care has been a focus of study over the past decade. We employed an Operations Research technique (simulation) to identify opportunities for improving rounding efficiency on our inpatient cardiology unit at Nationwide Children's Hospital (NCH). Through simulation of schedule-based rounds, our aims were to reduce length of stay (LOS) and associated healthcare costs by (1) prioritizing rounds for patients needing time-sensitive care decisions or those likely ready to be discharged, and (2) enhancing participation from both families and bedside nurses during rounds. Data were collected through direct observation of rounding activities. We then conducted simulations to evaluate the effect of various rounding paths on efficiency, measured in terms of time and context-dependent penalties. Our simulations indicated a tradeoff between minimizing the risk of delayed rounding and the amount of time spent on rounds. Optimizing rounds for 20 patients reduced cumulative patient waiting time and associated penalty scores. Based on prior research linking earlier clinical interventions to improved efficiency, this approach is estimated to reduce LOS by 166.08 hours and costs by approximately $3,460 per rotation. By simulating the rounding process on an inpatient pediatric cardiology unit, we demonstrated that prioritized rounding could reduce both LOS and associated costs. Despite a potential increase in total rounding time, which clinical decision-makers can manage, we recommend schedule-based FCRs that use prioritization techniques to enhance rounding efficiency while minimizing risk and cost.
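The paper's simulation model is not reproduced here, but the tradeoff it optimizes can be illustrated with a classic scheduling heuristic: ordering rounds by urgency weight per unit of rounding time (the weighted-shortest-processing-time rule) minimizes total penalty-weighted waiting. The patients, durations, and urgency weights below are hypothetical:

```python
def weighted_wait(order, durations, weights):
    """Total penalty-weighted waiting time for a rounding order:
    each patient's wait is the time spent rounding on those before them."""
    t, total = 0.0, 0.0
    for p in order:
        total += weights[p] * t
        t += durations[p]
    return total

# Hypothetical rounding durations (minutes) and urgency weights.
durations = {"A": 15, "B": 5, "C": 10}
weights = {"A": 1, "B": 3, "C": 1}   # B awaits a time-sensitive decision

fifo = ["A", "B", "C"]               # fixed bed-order rounding
# WSPT rule: round in decreasing weight/duration ratio.
wspt = sorted(durations, key=lambda p: -weights[p] / durations[p])
```

Here the prioritized order rounds on the urgent patient first, cutting the weighted wait substantially versus the fixed order, at the cost of a reshuffled (and possibly longer-feeling) schedule for the rest, the same tradeoff the simulations surface.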
Yang Y, Fernandes-Junior S, Wongphothiphan T, Zhang X, Hoffman J, Bowman J, Huang Y. Enhancing Efficiency, Reducing Length of Stay and Costs in Pediatric Cardiology Rounds Through Simulation-Based Optimization. Applied Clinical Informatics. 2025;16(5):1761-1770. DOI: 10.1055/a-2729-9693. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12634208/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-11-07 | DOI: 10.1055/a-2638-8750
Nida Afzal, Amy D Nguyen, Annie Y S Lau
Falls among adults over 60 are a global health concern, including in Australia. This study aimed to investigate temporospatial fall alert patterns (across time and location) detected by ambient fall detection sensors in three Australian aged care settings, to inform fall prevention strategies. A mixed-methods approach was used to analyze fall alert patterns and fall risks. Ambient fall detection sensors collected data from three care settings (residential aged care facilities [RACFs], retirement villages [RVs], and home dwelling communities [HDCs]; n = 31 households). Quantitative analysis comprised fall alert counts and temporospatial analysis by time of day and location. Qualitative insights were obtained through semistructured interviews with 14 older adults and 9 caregivers to understand fall risks. Distinct fall alert patterns emerged. In RACFs, alerts were most frequently recorded in bedrooms at night, linked to physical limitations and cognitive decline. RVs showed a more even distribution of alerts throughout the day, influenced by mobility issues, social activities, and pets affecting sensor accuracy. HDCs had the lowest fall alert rates, with nighttime alerts mainly in bedrooms, reflecting residents' physical status and strong family support. Qualitative data underscored the effect of cognitive and physical impairments in RACFs; mobility challenges, social activities, and pet influences in RVs; and shared living arrangements in HDCs. Fall alert patterns varied across RACFs, RVs, and HDCs, requiring tailored strategies. In RACFs, prevention should focus on nighttime safety with improved monitoring and bed alarms. Medication reviews are important, as many residents take medications affecting balance and cognition, increasing nighttime fall risks. In RVs, mobility programs and sensor accuracy improvements are needed to reduce false alerts from pets or daily activities. In HDCs, where alerts were fewer, more adaptable fall detection technology is needed to address the effect of shared bedrooms at night.
Afzal N, Nguyen AD, Lau AYS. A Mixed Methods Exploration of Temporospatial Fall Alert Patterns in Australian Aged Care Settings. Applied Clinical Informatics. 2025;16(5):1664-1676. DOI: 10.1055/a-2638-8750. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594566/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-10-24 | DOI: 10.1055/a-2701-5819
Qichuan Fang, Jun Liang, Peng Xiang, Min Zhao, Yunfan He, Zijiao Zhang, Haofeng Wan, Yue Hu, Tong Wang, Jianbo Lei
The widespread adoption of health information technology (HIT) has deepened hospitals' reliance on electronic health records (EHRs). However, EHR downtime events, i.e., partial or complete system failures, can disrupt hospital operations and threaten patient safety. Systematic research on HIT downtime events in China remains limited. This study aims to identify and classify reported EHR downtime events in a Chinese hospital, assess their frequency and severity, and propose improvement recommendations and response strategies. We identified and coded downtime events from a Chinese hospital's adverse event reports between January 2018 and August 2022, extracting features such as time, type, and affected scope. Both descriptive and inferential statistics were used for analysis. A total of 204 EHR downtime events were identified, of which 96.1% (n = 196) were unplanned. The most frequent categories were medication-related events (n = 52, 25.5%), imaging-related events (n = 35, 17.2%), and accounting and billing-related events (n = 17, 8.3%). Regarding severity, 76.0% (n = 155) of events were reported as patient care disruptions, while 76.5% (n = 156) were confined to certain departments. Regarding time, the daily downtime incidence was 0.142 (95% CI: 0.122-0.164) on weekdays versus 0.064 (95% CI: 0.044-0.090) on weekends, an incidence rate ratio (IRR) of 2.22 (95% CI: 1.52-3.25). The downtime incidence during the morning period was 0.0130 per hour (95% CI: 0.0107-0.0156), higher than in other time periods, with IRRs ranging from 1.42 (95% CI: 1.06-1.90) to 22.2 (95% CI: 12.66-38.92). Analysis of EHR downtime events in this Chinese hospital identified three key issues: high-risk downtime in medication processes, peak occurrence on weekdays and during morning hours, and significant clinical care disruptions. Recommended measures include implementing tiered contingency protocols, enhancing technical resilience, and establishing standardized reporting mechanisms.
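The weekday-versus-weekend comparison can be reproduced approximately from the reported rates using a standard Poisson rate ratio with a Wald confidence interval on the log scale. The event and exposure counts below are a hypothetical reconstruction consistent with the reported rates and the total of 204 events, not figures taken from the paper:

```python
from math import exp, sqrt

def poisson_irr(events_a, days_a, events_b, days_b, z=1.96):
    """Incidence rate ratio of two Poisson rates with a Wald CI on the
    log scale; exposure is measured in days."""
    irr = (events_a / days_a) / (events_b / days_b)
    se = sqrt(1 / events_a + 1 / events_b)  # SE of log(IRR)
    return irr, irr * exp(-z * se), irr * exp(z * se)

# Hypothetical reconstruction: ~173 events over ~1218 weekdays versus
# ~31 events over ~487 weekend days (Jan 2018 - Aug 2022).
irr, lo, hi = poisson_irr(173, 1218, 31, 487)
```

With these assumed counts the function returns an IRR of about 2.23 (95% CI roughly 1.52-3.27), closely matching the reported 2.22 (1.52-3.25).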
Fang Q, Liang J, Xiang P, Zhao M, He Y, Zhang Z, Wan H, Hu Y, Wang T, Lei J. Electronic Health Record Downtime Events of a Hospital: A Retrospective Analysis from Adverse Event Reports. Applied Clinical Informatics. 2025;16(5):1419-1429. DOI: 10.1055/a-2701-5819. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12552068/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-12-18 | DOI: 10.1055/a-2765-6792
Alejandro García-Rudolph, Alicia Romero Marquez, Mónica López Andurell, Laura Jimenez Pérez, Susana Guillén Gazapo, Marc Navarro Berenguel, Eloy Opisso, Elena Hernandez-Pena
Sleep quality critically influences recovery in neurological patients, yet longitudinal monitoring of sleep during hospitalization remains limited. Nursing narrative notes offer an underutilized resource for tracking sleep trajectories objectively over time. We propose and apply a formal pipeline that integrates structured clinical data and unstructured nursing annotations to monitor sleep trajectories during post-acute inpatient neurorehabilitation, relying exclusively on free-to-use software tools and without increasing nursing workload. A total of 17,039 nighttime nursing annotations were extracted and categorized into four sleep quality states. Two expert raters manually labeled a training set of 2,000 annotations (κ = 0.84). A random forest classifier achieved 0.93 sensitivity and 0.94 specificity and was used to classify the remaining notes. Sleep sequences were constructed and clustered using sequence analysis (TraMineR) and hierarchical clustering (AGNES, Ward's method). The resulting clusters (silhouette = 0.40) were compared using non-parametric statistics across clinical, functional, and social variables in a cohort of 303 consecutive post-acute neurorehabilitation inpatients. Four distinct sleep trajectory clusters were identified, each characterized by a unique functional and socio-environmental profile. The first group (n = 102; 33.7%) combined high functional independence, strong social support, stable economic conditions, short hospitalization, and favorable sleep quality. The second group (n = 76; 25.1%) presented moderate functional independence, precarious economic conditions, and the highest proportion of poor sleep quality. The third group (n = 76; 25.1%) exhibited severe functional impairment, long hospitalization, and poor housing conditions, but paradoxically the highest proportion of good sleep quality. The fourth group (n = 49; 16.2%) showed profound disability, relatively favorable socio-economic conditions, and a predominance of intermediate sleep quality, likely influenced by medication. Distinctive sets of social and functional keywords emerged for each cluster. This pipeline identified clinically meaningful sleep profiles from nursing notes, highlighting the role of functional and social determinants in shaping neurorehabilitation sleep trajectories.
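The study's sequence-analysis step is R-based (TraMineR with optimal-matching distances). A simplified Python sketch of the distance computation that would feed a hierarchical clusterer, using plain Hamming distance and hypothetical nightly sleep-state sequences:

```python
from itertools import combinations

def hamming(a, b):
    """Positionwise mismatch count between two nightly sleep-state
    sequences of equal length (one character per night)."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical sequences over 5 nights: G = good, I = intermediate, P = poor.
seqs = {"p1": "GGGGG", "p2": "GGGIG", "p3": "PPPIP", "p4": "PPPPP"}

# Pairwise distance matrix, the input to agglomerative clustering
# (the paper used optimal-matching distances, which also handle
# insertions/deletions; Hamming is a same-length simplification).
dist = {(a, b): hamming(seqs[a], seqs[b]) for a, b in combinations(seqs, 2)}
```

In this toy matrix p1/p2 and p3/p4 are each one night apart while the two pairs are far from each other, so a hierarchical clusterer would recover a "good sleepers" and a "poor sleepers" cluster, the same shape of result the study reports at scale.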
García-Rudolph A, Romero Marquez A, López Andurell M, Jimenez Pérez L, Guillén Gazapo S, Navarro Berenguel M, Opisso E, Hernandez-Pena E. A Sequence Clustering Approach to Mining Sleep Trajectories from Nursing Narratives and Structured Clinical Data. Applied Clinical Informatics. 2025;16(5):1837-1849. DOI: 10.1055/a-2765-6792. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12714454/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-05-26 | DOI: 10.1055/a-2620-3244
Brian G Bell, Adam Khimji, Basharat Hussain, Anthony J Avery
In recent years, the literature on the effects of computerized alerts on prescribing and patient outcomes has expanded. The aim of our study was to examine the impact of these systems on clinician prescribing and patient outcomes. We searched three databases (Medline, Embase, and PsycINFO) for studies conducted since 2009 that examined the effects of alerts at the point of prescribing, and extracted data from 69 studies. Most studies reported a beneficial effect of computerized alerts on prescribing (n = 58, 84.1%), including all studies (n = 4) that used passive alerts. Seven of the 10 studies that reported on patient outcomes showed a beneficial effect. Both randomized controlled trials (RCTs) and non-RCTs showed beneficial effects on prescribing across a range of alert types. In 43 studies, it was possible to ascertain the effects of different types of alerts; the interventions most frequently associated with improvements in prescribing were drug-laboratory alerts (9/11; 81.8%), dose range checking (6/7; 85.7%), formulary alerts (8/9; 88.9%), and drug-allergy alerts (4/4; 100%). However, most of the studies did not satisfy the quality criteria. Most of the studies found a beneficial effect of computerized alerts on prescribing, and these benefits were apparent across a range of alert types. These findings support the continued development, implementation, and evaluation of computerized alerts for prescribing.
Title: "The Effect of Computerized Alerts on Prescribing and Patient Outcomes: A Systematic Review" (Applied Clinical Informatics, pages 1381-1392)
Pub Date: 2025-10-01 | Epub Date: 2025-05-30 | DOI: 10.1055/a-2621-7717
Swaminathan Kandaswamy, Sarah Thompson, Edwin Ray, Tracy Ruska, Evan Orenstein
The timely administration of postoperative antibiotics is crucial for preventing surgical site infections. Despite surgical ordering workflows designed to facilitate care across settings, delays in antibiotic administration after transfer to the pediatric intensive care unit (PICU) were identified. We aimed to develop a clinical decision support (CDS) system to enhance timely order activation in a large pediatric health system. We hypothesized that the time for PICU nurses to release signed and held orders would decrease after implementation of an electronic health record alert, ultimately reducing time to antibiotic administration. This study aimed to describe the CDS design for the timely release of postoperative orders, evaluate its effectiveness, and share lessons learned from its implementation. Stakeholder interviews and a staged implementation approach were employed to develop and implement the CDS in one of the two PICUs. An interruptive alert was designed to prompt nurses to release specific signed and held orders. The study period spanned from January 2019 to August 2024, with pre- and postintervention comparisons of the mean time to release medication orders. The alert was used from May to December 2021 but was associated with increased time to release orders. Postintervention usability testing revealed confusion among nurses, leading to the alert's discontinuation. A post hoc analysis suggested that the observed delays might align with seasonal trends rather than the CDS intervention. The CDS implementation had unintended adverse effects on order release times, emphasizing the importance of monitoring and evaluating such systems postimplementation. Usability testing highlighted the complexity of the alert messaging and the importance of including end-users in the design phase. Extended evaluation periods are recommended to discern CDS impact accurately. The study also underscores the necessity of assessing whether a technological or workflow/process change is needed in response to safety reports.
Title: "Unintended Delays in Pediatric Postoperative Antibiotic Administration from Overly Complex CDS Instructions" (Applied Clinical Informatics, pages 1413-1418)
Pub Date: 2025-10-01 | Epub Date: 2025-10-31 | DOI: 10.1055/a-2624-1875
Sue S Feldman, Ben Martin, Josette Jones, Kim M Unertl, Madison Fritts, Paul Nagy, RaeLynn Gochnauer
Health informatics is a continuously evolving discipline. As a result, faculty in health informatics training programs cover a broad range of topics and work in highly diverse academic contexts. This is a strength of the field, but it also introduces challenges in understanding faculty salary ranges and assessing potential salary disparities across contexts. Although limited studies have examined salary ranges in specific academic contexts, no comprehensive salary survey had previously been performed on faculty in health informatics. The goal of this study was to obtain a preliminary understanding of the salary ranges for academic health informatics faculty and the contextual factors that affect salaries in this field. A team of researchers affiliated with the American Medical Informatics Association (AMIA) Academic Forum collaboratively developed a survey focused on salary and the factors that affect salary for health informatics faculty. The survey was distributed through official AMIA communication channels, including communications at the 2023 AMIA Symposium. Descriptive statistics were calculated, and an ordinal regression analysis was performed. Of 314 responses, 153 individuals employed by academic organizations reported their base salary information. A majority (61%) of these respondents reported working in a school of medicine, with PhD (59%) and MD (37%) degrees reported as the highest educational level for the majority of the sample. When adjusted for cost of living, there were statistically significant associations between salary and type of school/department, position/title, and highest degree. We also found that while salaries at the assistant professor level were between $120,000 and $159,999, those of associate and full professors were at or above $200,000. The survey provides preliminary baseline data on salary ranges in academic health informatics programs and on factors leading to salary differences. More data are needed on focused topics to extend the impact of this type of survey.
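The abstract does not specify how the cost-of-living adjustment was performed; a minimal sketch, assuming salaries are simply deflated by a regional cost-of-living index and then mapped onto $40,000-wide bands like those reported, might look like the following (the index value and band width are illustrative assumptions, not the study's actual methodology):

```python
# Hypothetical sketch of a cost-of-living adjustment followed by banding.
# col_index: regional cost-of-living index, with 1.0 = national average.
def col_adjust(salary: float, col_index: float) -> float:
    """Return the cost-of-living-adjusted salary."""
    return salary / col_index

def salary_band(salary: float, width: int = 40_000) -> str:
    """Map a salary onto a $40,000-wide band, e.g. $120,000-159,999."""
    lo = int(salary // width) * width
    return f"${lo:,}-{lo + width - 1:,}"

adjusted = col_adjust(150_000, 1.25)  # nominal $150k in a high-cost region
print(salary_band(adjusted))          # $120,000-159,999
```

Under this assumption, a nominal $150,000 salary in a region 25% above the average cost of living falls into the $120,000-159,999 band reported for assistant professors.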
Title: "Salary Structures in Health Informatics Academia: A Preliminary Survey Analysis" (Applied Clinical Informatics, vol. 16(5), pages 1560-1567)
Pub Date: 2025-10-01 | Epub Date: 2025-10-30 | DOI: 10.1055/a-2713-5725
George A Gellert, Daniel Borgasano, Robert Palermo, Gabriel L Gellert, Sean P Kelly
This study gathered insights regarding the state of third-party access cybersecurity in healthcare delivery organizations (HDOs). An online multinational survey was deployed to eligible respondents to assess HDO third-party access, cybersecurity, and challenges. Of 209 respondents, only 51.1% reported having a comprehensive inventory of all third parties accessing their network. Sixty percent stated that third-party access to sensitive/confidential information was not routinely monitored, despite 19% having more than 40, and 31% having 21 to 40, third parties with network access. Reasons included lack of resources (48%), lack of centralized control over third-party relationships (36%), complexity (28%), and frequent third-party turnover (22%); confidence in third parties' ability to secure information and in their reputations was also cited. More than half (56%) reported a breach involving a third party in the last 12 months, and two-thirds anticipate breaches increasing in the next 12 to 24 months. Most agreed that third-party breaches are a cybersecurity priority, a resource drain, and their weakest attack surface. Slight majorities indicated high perceived effectiveness in mitigating, detecting, preventing, and controlling third-party access risks and in security/privacy regulatory compliance. Regarding existing solutions, roughly half ranked the effectiveness of vendor privileged access management (VPAM; 55%) and privileged access management (PAM; 49%) at ≤ 6 on a 10-point scale. Barriers to reducing access risks include lack of oversight/governance (53%) and insufficient resources (45%). Of those monitoring third-party access, 53% do so manually. Breach consequences include loss/theft of sensitive information (60%), regulatory fines (49%), severed relationships with third parties (47%), and loss of revenue (42%) and business partners (38%). HDOs recognize the increasing threat of third-party cyber breaches but are struggling to address them effectively. Lack of budget, lack of expert resources, complexity, and third-party turnover are among the reasons why. There is a need for automated, cost-effective solutions that address the significant risks of third-party access with a consistent strategy, minimizing breach risk by securing remote access to privileged assets, accounts, and data.
Title: "Third-Party Access Cybersecurity Threats and Precautions: A Survey of Healthcare Delivery Organizations" (Applied Clinical Informatics, vol. 16(5), pages 1518-1530)