Pub Date: 2024-12-01 | Epub Date: 2024-08-13 | DOI: 10.1007/s10729-024-09684-5
Abdulaziz Ahmed, Khalid Y Aram, Salih Tutun, Dursun Delen
The issue of patients leaving against medical advice (LAMA) is common in today's emergency departments (EDs). It poses a medico-legal risk and may result in readmission, mortality, or revenue loss. Thus, understanding the factors that cause patients to "leave against medical advice" is vital to mitigating and potentially eliminating these adverse outcomes. This paper proposes a framework for studying the factors that affect LAMA in EDs. The framework integrates machine learning, metaheuristic optimization, and model interpretation techniques. Metaheuristic optimization is used for hyperparameter optimization, one of the main challenges of machine learning model development: an adaptive tabu simulated annealing (ATSA) metaheuristic algorithm tunes the parameters of extreme gradient boosting (XGB). The optimized XGB models are used to predict LAMA outcomes for patients under treatment in the ED. The designed algorithms are trained and tested on four data groups created through feature selection. The model with the best predictive performance, which achieved an area under the curve (AUC) of 76% and a sensitivity of 82%, is then interpreted using the SHapley Additive exPlanations (SHAP) method.
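The hyperparameter-tuning step summarized above can be illustrated with a plain simulated-annealing loop. This is a toy sketch of the general idea, not the paper's ATSA algorithm: the objective below is a made-up stand-in for a cross-validated AUC, and the parameter names, bounds, and cooling schedule are illustrative assumptions.

```python
import math
import random

# Illustrative hyperparameter space (names/bounds are assumptions, not the paper's).
BOUNDS = {
    "learning_rate": (0.01, 0.3),
    "max_depth": (2, 10),        # treated as continuous here for simplicity
    "subsample": (0.5, 1.0),
}

def surrogate_auc(params):
    """Stand-in for an expensive cross-validated evaluation of an XGB model."""
    lr, depth, sub = params["learning_rate"], params["max_depth"], params["subsample"]
    # Arbitrary smooth toy landscape peaking at lr=0.1, depth=6, subsample=0.8.
    return 0.76 - (lr - 0.1) ** 2 - 0.001 * (depth - 6) ** 2 - (sub - 0.8) ** 2

def neighbor(params, scale=0.1):
    """Perturb each hyperparameter by up to ±scale of its range, clipped to bounds."""
    cand = {}
    for k, (lo, hi) in BOUNDS.items():
        step = (hi - lo) * scale * random.uniform(-1, 1)
        cand[k] = min(hi, max(lo, params[k] + step))
    return cand

def anneal(n_iter=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing: accept improvements always, worse moves with prob e^(dS/T)."""
    random.seed(seed)
    current = {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
    cur_score = surrogate_auc(current)
    best, best_score, t = dict(current), cur_score, t0
    for _ in range(n_iter):
        cand = neighbor(current)
        s = surrogate_auc(cand)
        if s > cur_score or random.random() < math.exp((s - cur_score) / t):
            current, cur_score = cand, s
            if s > best_score:
                best, best_score = dict(cand), s
        t *= cooling  # geometric cooling
    return best, best_score
```

The paper's ATSA additionally maintains a tabu list of recently visited configurations to avoid cycling; that refinement is omitted here.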
"A study of 'left against medical advice' emergency department patients: an optimized explainable artificial intelligence framework." Health Care Management Science, pp. 485-502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11645325/pdf/
Pub Date: 2024-12-01 | Epub Date: 2024-09-24 | DOI: 10.1007/s10729-024-09686-3
Lauren L Czerniak, Mariel S Lavieri, Mark S Daskin, Eunshin Byon, Karl Renius, Burgunda V Sweet, Jennifer Leja, Matthew A Tupps
Supply chain disruptions and demand disruptions make it challenging for hospital pharmacy managers to determine how much inventory to have on hand. Having insufficient inventory leads to drug shortages, while having excess inventory leads to drug waste. To mitigate drug shortages and waste, hospital pharmacy managers can implement inventory policies that account for supply chain disruptions and adapt these inventory policies over time to respond to demand disruptions. Demand disruptions were prevalent during the COVID-19 pandemic. However, it remains unclear how a drug's shortage-waste weighting (i.e., concern for shortages versus concern for waste) as well as the duration of and time between supply chain disruptions influence the benefits (or detriments) of adapting to demand disruptions. We develop an adaptive inventory system (i.e., inventory policies change over time) and conduct an extensive numerical analysis using real-world demand data from the University of Michigan's Central Pharmacy to address this research question. For a fixed mean duration of and mean time between supply chain disruptions, we find a drug's shortage-waste weighting dictates the magnitude of the benefits (or detriments) of adaptive inventory policies. We create a ranking procedure that provides a way of discerning which drugs are of most concern and illustrates which policies to update given that a limited number of inventory policies can be updated. When applying our framework to over 300 drugs, we find a decision-maker needs to update a very small proportion of drugs (e.g., less than 5%) at any point in time to get the greatest benefits of adaptive inventory policies.
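One simple way to see how a shortage-waste weighting shapes an inventory policy is a newsvendor-style quantile rule re-estimated on a rolling window. This is a minimal sketch of the general idea, not the paper's adaptive system: the weighting-as-quantile policy, function names, and window length are illustrative assumptions.

```python
def par_level(demand_history, w):
    """Order-up-to level as the w-quantile (nearest rank) of observed demand.

    w close to 1 -> shortage-averse (stock high); w close to 0 -> waste-averse.
    """
    xs = sorted(demand_history)
    if not xs:
        raise ValueError("no demand observations")
    k = min(len(xs) - 1, max(0, int(w * len(xs))))
    return xs[k]

def adapt(demand_history, w, window=30):
    """Re-estimate the par level on a rolling window to track demand disruptions."""
    return par_level(demand_history[-window:], w)
```

With a shortage-averse weighting (w = 0.9), ten observed demands of 1 through 10 give a par level of 10; a waste-averse weighting (w = 0.1) gives 2.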
"The benefits (or detriments) of adapting to demand disruptions in a hospital pharmacy with supply chain disruptions." Health Care Management Science, pp. 525-554.
Pub Date: 2024-12-01 | Epub Date: 2024-11-04 | DOI: 10.1007/s10729-024-09691-6
Cameron Trentz, Jacklyn Engelbart, Jason Semprini, Amanda Kahl, Eric Anyimadu, John Buatti, Thomas Casavant, Mary Charlton, Guadalupe Canahuate
Background: Despite decades of pursuing health equity, racial and ethnic disparities persist in healthcare in America. For cancer specifically, one of the leading observed disparities is worse mortality among non-Hispanic Black patients compared to non-Hispanic White patients across the cancer care continuum. These real-world disparities are reflected in the data used to inform the decisions made to alleviate such inequities. Failing to account for inherently biased data underlying these observations could intensify racial cancer disparities and lead to misguided efforts that fail to appropriately address the real causes of health inequity.
Objective: Estimate the racial/ethnic bias of machine learning models in predicting two-year survival and surgery treatment recommendation for non-small cell lung cancer (NSCLC) patients.
Methods: A Cox survival model, a LOGIT model, and three other machine learning models for predicting surgery recommendation were trained using SEER data from NSCLC patients diagnosed between 2000 and 2018. Models were trained with a 70/30 train/test split (both including and excluding race/ethnicity) and evaluated using performance and fairness metrics. The effects of oversampling the training data were also evaluated.
Results: The survival models show disparate impact towards non-Hispanic Black patients regardless of whether race/ethnicity is used as a predictor. The models including race/ethnicity amplified the disparities observed in the data. The exclusion of race/ethnicity as a predictor in the survival and surgery recommendation models improved fairness metrics without degrading model performance. Stratified oversampling strategies reduced disparate impact while reducing the accuracy of the model.
Conclusion: NSCLC disparities are complex and multifaceted. Yet, even when accounting for age and stage at diagnosis, non-Hispanic Black patients with NSCLC are less often recommended to have surgery than non-Hispanic White patients. Machine learning models amplified the racial/ethnic disparities across the cancer care continuum (which are reflected in the data used to make model decisions). Excluding race/ethnicity lowered the bias of the models but did not affect disparate impact. Developing analytical strategies to improve fairness would in turn improve the utility of machine learning approaches analyzing population-based cancer data.
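A standard fairness metric behind findings like the "disparate impact" results above is the disparate impact ratio: the rate of favorable predictions for the protected group divided by the rate for the reference group (the "80% rule" flags ratios below 0.8). This is a generic sketch of that metric, not the study's code.

```python
def disparate_impact(y_pred, group, protected, reference):
    """Ratio of favorable-outcome rates (y_pred == 1) between two groups."""
    def rate(g):
        idx = [i for i, grp in enumerate(group) if grp == g]
        if not idx:
            raise ValueError(f"no samples for group {g!r}")
        return sum(y_pred[i] for i in idx) / len(idx)
    return rate(protected) / rate(reference)
```

For example, if the protected group receives a favorable prediction half as often as the reference group, the ratio is 0.5, well below the 0.8 threshold.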
"Evaluating machine learning model bias and racial disparities in non-small cell lung cancer using SEER registry data." Health Care Management Science, pp. 631-649.
Pub Date: 2024-12-01 | Epub Date: 2024-09-10 | DOI: 10.1007/s10729-024-09685-4
Robin Buter, Arthur Nazarian, Hendrik Koffijberg, Erwin W Hans, Remy Stieglis, Rudolph W Koster, Derya Demirtas
Volunteer responder systems (VRS) alert and guide nearby lay rescuers to the location of an emergency. One application of such systems is out-of-hospital cardiac arrest, where early cardiopulmonary resuscitation (CPR) and defibrillation with an automated external defibrillator (AED) are crucial for improving survival rates. However, many AEDs remain underutilized due to poor location choices, while other areas lack adequate AED coverage. In this paper, we present a comprehensive data-driven algorithmic approach to optimizing the deployment of (additional) public-access AEDs for use in a VRS. Alongside a binary integer programming (BIP) formulation, we consider two heuristic methods, Greedy and the Greedy Randomized Adaptive Search Procedure (GRASP), to solve the gradual maximal covering location problem (MCLP) with partial coverage for AED deployment. We develop realistic gradually decreasing coverage functions for volunteers going on foot, by bike, or by car. A spatial probability distribution of cardiac arrest is estimated using kernel density estimation to be used as input for the models and to evaluate the solutions. We apply our approach to 29 real-world instances (municipalities) in the Netherlands. We show that GRASP can obtain near-optimal solutions for large problem instances in significantly less time than the exact method. The results indicate that relocating existing AEDs improves the weighted average coverage from 36% to 49% across all municipalities, with relative improvements ranging from 1% to 175%. For most municipalities, strategically placing 5 to 10 additional AEDs can already provide substantial improvements.
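The greedy heuristic for the gradual MCLP can be sketched in a few lines: repeatedly open the site that adds the most covered demand, where each demand point counts at its best coverage level among the chosen sites. The coverage weights and data here are illustrative; the paper's actual coverage functions decrease gradually with travel time by foot, bike, or car.

```python
def greedy_mclp(cover, demand, k):
    """cover[site] = {demand_point: coverage weight in [0, 1]}.

    Returns (chosen sites, total weighted demand covered).
    """
    chosen, best_cov = [], {}
    for _ in range(k):
        best_site, best_gain = None, 0.0
        for site, cov in cover.items():
            if site in chosen:
                continue
            # Marginal gain: only improvements over current best coverage count.
            gain = sum(
                demand[p] * max(0.0, w - best_cov.get(p, 0.0))
                for p, w in cov.items()
            )
            if gain > best_gain:
                best_site, best_gain = site, gain
        if best_site is None:  # no remaining site adds coverage
            break
        chosen.append(best_site)
        for p, w in cover[best_site].items():
            best_cov[p] = max(best_cov.get(p, 0.0), w)
    covered = sum(demand[p] * w for p, w in best_cov.items())
    return chosen, covered
```

GRASP, as used in the paper, randomizes this construction (choosing among the top candidates rather than the single best) and adds a local search phase.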
"Strategic placement of volunteer responder system defibrillators." Health Care Management Science, pp. 503-524. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11645431/pdf/
Pub Date: 2024-12-01 | Epub Date: 2024-10-01 | DOI: 10.1007/s10729-024-09683-6
Farhad Hamidzadeh, Mir Saman Pishvaee, Naeme Zarrinpoor
Organ transplantation is one of the most complicated and challenging treatments in healthcare systems. Despite significant medical advancements, many patients die while waiting for organ transplants because of the marked gap between organ supply and demand. In the organ transplantation supply chain, organ allocation is the most significant decision during the transplantation procedure, and the kidney is the most widely transplanted organ. This research presents a novel method for assessing the efficiency and ranking of qualified organ-patient pairs as decision-making units (DMUs) for the kidney allocation problem in the presence of the COVID-19 pandemic and uncertain medical and logistical data. To achieve this goal, two-stage network data envelopment analysis (DEA) and credibility-based chance-constrained programming (CCP) are utilized to develop a novel two-stage fuzzy network data envelopment analysis (TSFNDEA) method. The main benefits of the developed method can be summarized as follows: it considers internal structures in the kidney allocation system, investigates both medical and logistical aspects of the problem, can be extended to other network structures, and offers a unique efficiency decomposition under uncertainty. Moreover, to evaluate the validity and applicability of the proposed approach, a validation algorithm using a real case study and different confidence levels is applied. Finally, the numerical results indicate that the developed approach outperforms the existing kidney allocation system.
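The core DEA idea underlying the two-stage model can be shown in its most reduced form: with a single input and a single output, a DMU's efficiency is its output/input ratio relative to the best ratio observed. This heavily simplified, crisp, single-stage sketch omits the network stages, multiple inputs/outputs, and fuzzy chance constraints of the TSFNDEA method.

```python
def dea_efficiency(dmus):
    """dmus: {name: (input, output)} -> {name: efficiency in (0, 1]}.

    Single-input/single-output special case, where DEA reduces to
    normalizing each DMU's productivity ratio by the best observed ratio.
    """
    ratios = {name: out / inp for name, (inp, out) in dmus.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}
```

In the paper, each DMU is a qualified organ-patient pair, its inputs and outputs span medical and logistical factors, and efficiency is decomposed across two network stages under uncertainty.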
"A novel two-stage network data envelopment analysis model for kidney allocation problem under medical and logistical uncertainty: a real case study." Health Care Management Science, pp. 555-579.
Over recent years, home health care has gained significant attention as an efficient solution to the increasing demand for healthcare services. Home health care scheduling is a challenging problem involving multiple complicated assignments and routing decisions subject to various constraints. The problem becomes even more challenging when considered on a rolling horizon with stochastic patient requests. This paper discusses the Online Dynamic Home Health Care Scheduling Problem (ODHHCSP), in which a home health care agency has to decide whether to accept or reject a patient request and determine the visit schedule and routes in case of acceptance. The objective of the problem is to maximize the number of patients served, given the limited resources. When the agency receives a patient's request, a decision must be made on the spot, which poses many challenges, such as stochastic future requests or a limited time budget for decision-making. In this paper, we model the problem as a Markov decision process and propose a reinforcement learning (RL) approach. The experimental results show that the proposed approach outperforms other algorithms in the literature in terms of solution quality. In addition, a constant runtime of less than 0.001 seconds for each decision makes the approach especially suitable for an online setting like our problem.
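The accept/reject structure of the ODHHCSP can be sketched as a finite-horizon MDP over remaining visit capacity, solved here by dynamic programming for illustration; the paper trains an RL policy instead of enumerating states. The request types, arrival probabilities, and slot requirements below are illustrative assumptions.

```python
# (name, arrival probability, visit slots needed) -- illustrative values.
REQUEST_TYPES = [
    ("short", 0.5, 1),
    ("long", 0.5, 2),
]

def solve(horizon, capacity):
    """Return V, where V[t][c] = max expected patients served from period t
    onward with c visit slots remaining (reward 1 per accepted patient)."""
    V = [[0.0] * (capacity + 1) for _ in range(horizon + 1)]
    for t in range(horizon - 1, -1, -1):
        for c in range(capacity + 1):
            v = 0.0
            for _name, p, need in REQUEST_TYPES:
                reject = V[t + 1][c]
                accept = 1.0 + V[t + 1][c - need] if c >= need else float("-inf")
                v += p * max(accept, reject)  # choose the better action per request
            V[t][c] = v
    return V
```

With ample capacity the optimal policy accepts everything, so the value equals the horizon length; with scarce capacity, rejecting a long request to save slots for future short ones can be optimal, which is exactly the trade-off the RL agent must learn at scale.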
"A reinforcement learning approach for the online dynamic home health care scheduling problem." Health Care Management Science, pp. 650-664. Pub Date: 2024-12-01 | DOI: 10.1007/s10729-024-09692-5
Pub Date: 2024-09-01 | Epub Date: 2024-05-02 | DOI: 10.1007/s10729-024-09672-9
Grigory Korzhenevich, Anne Zander
We present a freely available data set of surgical case mixes and surgery process duration distributions based on processed data from the German Operating Room Benchmarking initiative. This initiative collects surgical process data from over 320 German, Austrian, and Swiss hospitals. The data exhibits high levels of quantity, quality, standardization, and multi-dimensionality, making it especially valuable for operating room planning in Operations Research. We consider detailed steps of the perioperative process and group the data with respect to the hospital's level of care, the surgery specialty, and the type of surgery patient. We compare case mixes for different subgroups and conclude that they differ significantly, demonstrating that it is necessary to test operating room planning methods in different settings, e.g., using data sets like ours. Further, we discuss limitations and future research directions. Finally, we encourage the extension and foundation of new operating room benchmarking initiatives and their usage for operating room planning.
"Leveraging the potential of the German operating room benchmarking initiative for planning: A ready-to-use surgical process data set." Health Care Management Science, pp. 328-351. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11461674/pdf/
Long waiting time in outpatient departments is a crucial factor in patient dissatisfaction. We aim to analytically interpret the waiting times predicted by machine learning models and provide patients with an explanation of the expected waiting time. Here, underestimating waiting times can cause patient dissatisfaction, so preventing this in predictive models is necessary. To address this issue, we propose a framework considering dissatisfaction for estimating the waiting time in an outpatient department. In our framework, we leverage asymmetric loss functions to ensure robustness against underestimation. We also propose a dissatisfaction-aware asymmetric error score (DAES) to determine an appropriate model by considering the trade-off between underestimation and accuracy. Finally, Shapley additive explanation (SHAP) is applied to interpret the relationship trained by the model, enabling decision makers to use this information for improving outpatient service operations. We apply our framework in the endocrinology metabolism department and neurosurgery department in one of the largest hospitals in South Korea. The use of asymmetric functions prevents underestimation in the model, and with the proposed DAES, we can strike a balance in selecting the best model. By using SHAP, we can analytically interpret the waiting time in outpatient service (e.g., the length of the queue affects the waiting time the most) and provide explanations about the expected waiting time to patients. The proposed framework aids in improving operations, considering practical application in hospitals for real-time patient notification and minimizing patient dissatisfaction. Given the significance of managing hospital operations from the perspective of patients, this work is expected to contribute to operations improvement in health service practices.
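The asymmetric loss idea above can be sketched as a pinball-style loss that penalizes underestimated waiting times more heavily than overestimated ones. This is a generic illustration, not the paper's exact loss or DAES score; the weight tau is an illustrative assumption.

```python
def asymmetric_loss(y_true, y_pred, tau=0.8):
    """Mean pinball loss: underestimates (pred < true) are weighted tau,
    overestimates are weighted (1 - tau). tau > 0.5 discourages telling
    patients a shorter wait than they will actually experience."""
    if len(y_true) != len(y_pred):
        raise ValueError("length mismatch")
    total = 0.0
    for t, p in zip(y_true, y_pred):
        err = t - p
        total += tau * err if err > 0 else (tau - 1) * err
    return total / len(y_true)
```

Underestimating a 10-minute wait by 2 minutes costs 1.6 here, while overestimating it by the same 2 minutes costs only 0.4, which is the asymmetry a dissatisfaction-aware model selection score can then trade off against overall accuracy.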
{"title":"Dissatisfaction-considered waiting time prediction for outpatients with interpretable machine learning.","authors":"Jongkyung Shin, Donggi Augustine Lee, Juram Kim, Chiehyeon Lim, Byung-Kwan Choi","doi":"10.1007/s10729-024-09676-5","DOIUrl":"10.1007/s10729-024-09676-5","url":null,"abstract":"<p><p>Long waiting time in outpatient departments is a crucial factor in patient dissatisfaction. We aim to analytically interpret the waiting times predicted by machine learning models and provide patients with an explanation of the expected waiting time. Here, underestimating waiting times can cause patient dissatisfaction, so preventing this in predictive models is necessary. To address this issue, we propose a framework considering dissatisfaction for estimating the waiting time in an outpatient department. In our framework, we leverage asymmetric loss functions to ensure robustness against underestimation. We also propose a dissatisfaction-aware asymmetric error score (DAES) to determine an appropriate model by considering the trade-off between underestimation and accuracy. Finally, Shapley additive explanation (SHAP) is applied to interpret the relationship trained by the model, enabling decision makers to use this information for improving outpatient service operations. We apply our framework in the endocrinology metabolism department and neurosurgery department in one of the largest hospitals in South Korea. The use of asymmetric functions prevents underestimation in the model, and with the proposed DAES, we can strike a balance in selecting the best model. By using SHAP, we can analytically interpret the waiting time in outpatient service (e.g., the length of the queue affects the waiting time the most) and provide explanations about the expected waiting time to patients. The proposed framework aids in improving operations, considering practical application in hospitals for real-time patient notification and minimizing patient dissatisfaction. 
Given the significance of managing hospital operations from the perspective of patients, this work is expected to contribute to operations improvement in health service practices.</p>","PeriodicalId":12903,"journal":{"name":"Health Care Management Science","volume":" ","pages":"370-390"},"PeriodicalIF":2.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11461612/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141186080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
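The asymmetric-loss idea in the abstract above can be sketched as a weighted squared error that charges underestimates (predicted wait shorter than the actual wait) more than overestimates. This is a minimal illustrative sketch, assuming a simple weighting scheme; `under_weight` and the toy values are assumptions for demonstration, not the paper's actual loss or DAES definition:

```python
import numpy as np

def asymmetric_squared_error(y_true, y_pred, under_weight=3.0):
    """Squared error that penalizes underestimation (y_pred < y_true)
    more heavily than overestimation.

    `under_weight` is a hypothetical tuning parameter controlling how
    much worse an underestimate is than an equally sized overestimate.
    """
    resid = y_true - y_pred
    # resid > 0 means the model predicted a shorter wait than reality,
    # i.e., an underestimate -> apply the heavier weight.
    weights = np.where(resid > 0, under_weight, 1.0)
    return float(np.mean(weights * resid**2))

# Toy check: underestimating by 5 minutes costs more than
# overestimating by 5 minutes.
under = asymmetric_squared_error(np.array([20.0]), np.array([15.0]))
over = asymmetric_squared_error(np.array([20.0]), np.array([25.0]))
```

A model trained to minimize such a loss is biased toward slightly longer predicted waits; the paper's DAES then trades this protection against underestimation off against overall accuracy when choosing among candidate models.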
Pub Date : 2024-09-01Epub Date: 2024-06-05DOI: 10.1007/s10729-024-09677-4
Chou-Chun Wu, Yiwen Cao, Sze-Chuan Suen, Eugene Lin
Forty percent of diabetics will develop chronic kidney disease (CKD) in their lifetimes. However, as many as 50% of these CKD cases may go undiagnosed. We developed screening recommendations stratified by age and previous test history for individuals with diagnosed diabetes and unknown proteinuria status by race and gender groups. To do this, we used a Partially Observed Markov Decision Process (POMDP) to identify whether a patient should be screened at every three-month interval from ages 30-85. Model inputs were drawn from nationally representative datasets, the medical literature, and a microsimulation that integrates this information into group-specific disease progression rates. We implement the POMDP solution policy in the microsimulation to understand how this policy may impact health outcomes and generate an easily implementable, non-belief-based approximate policy for easier clinical interpretability. We found that the status quo policy, which is to screen annually for all ages and races, is suboptimal for maximizing expected discounted future net monetary benefits (NMB). The POMDP policy suggests more frequent screening after age 40 in all race and gender groups, with screenings 2-4 times a year for ages 61-70. Black individuals are recommended for screening more frequently than their White counterparts. This policy would increase NMB over the status quo policy by $1,000 to $8,000 per diabetic patient at a willingness-to-pay of $150,000 per quality-adjusted life year (QALY).
{"title":"Examining chronic kidney disease screening frequency among diabetics: a POMDP approach.","authors":"Chou-Chun Wu, Yiwen Cao, Sze-Chuan Suen, Eugene Lin","doi":"10.1007/s10729-024-09677-4","DOIUrl":"10.1007/s10729-024-09677-4","url":null,"abstract":"<p><p>Forty percent of diabetics will develop chronic kidney disease (CKD) in their lifetimes. However, as many as 50% of these CKD cases may go undiagnosed. We developed screening recommendations stratified by age and previous test history for individuals with diagnosed diabetes and unknown proteinuria status by race and gender groups. To do this, we used a Partially Observed Markov Decision Process (POMDP) to identify whether a patient should be screened at every three-month interval from ages 30-85. Model inputs were drawn from nationally-representative datasets, the medical literature, and a microsimulation that integrates this information into group-specific disease progression rates. We implement the POMDP solution policy in the microsimulation to understand how this policy may impact health outcomes and generate an easily-implementable, non-belief-based approximate policy for easier clinical interpretability. We found that the status quo policy, which is to screen annually for all ages and races, is suboptimal for maximizing expected discounted future net monetary benefits (NMB). The POMDP policy suggests more frequent screening after age 40 in all race and gender groups, with screenings 2-4 times a year for ages 61-70. Black individuals are recommended for screening more frequently than their White counterparts. 
This policy would increase NMB over the status quo policy by $1,000 to $8,000 per diabetic patient at a willingness-to-pay of $150,000 per quality-adjusted life year (QALY).</p>","PeriodicalId":12903,"journal":{"name":"Health Care Management Science","volume":" ","pages":"391-414"},"PeriodicalIF":2.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11461555/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141246850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
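The screening decision in a POMDP like the one in the abstract above hinges on a belief — the probability that the unobserved proteinuria is present — updated by Bayes' rule each quarter and after each test result. This is a minimal sketch of such a belief update; the quarterly progression probability and the test sensitivity/specificity used here are illustrative assumptions, not values from the paper:

```python
def belief_update(b, p_progress, screened=False, test_negative=True,
                  sens=0.9, spec=0.95):
    """One-step update of b = P(proteinuria present).

    b           : prior belief at the start of the quarter
    p_progress  : illustrative quarterly probability of developing CKD
    sens, spec  : illustrative test sensitivity and specificity
    """
    # Transition: a healthy patient may progress this quarter.
    b = b + (1.0 - b) * p_progress
    if not screened:
        return b  # no observation: belief only drifts upward
    # Bayes' rule on the test result.
    if test_negative:
        num = (1.0 - sens) * b          # diseased but missed
        den = num + spec * (1.0 - b)    # plus healthy true negatives
    else:
        num = sens * b                  # diseased and detected
        den = num + (1.0 - spec) * (1.0 - b)  # plus false positives
    return num / den

drift = belief_update(0.1, 0.01)                               # no screen
after_neg = belief_update(0.1, 0.01, screened=True)            # negative test
after_pos = belief_update(0.1, 0.01, screened=True,
                          test_negative=False)                 # positive test
```

The POMDP solution policy maps this belief (together with time since the last test) to a screen/wait action each quarter; the authors additionally distill the belief-based policy into the non-belief-based, age- and history-stratified approximate rule described in the abstract for clinical use.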