Pub Date: 2024-11-27 | DOI: 10.1016/j.cor.2024.106929
Xiehui Zhang, Guang-Yu Zhu
The job-shop scheduling problem (JSP) is one of the most famous production scheduling problems, and it is NP-hard. Reinforcement learning (RL), a machine learning method capable of feedback-based learning, holds great potential for solving shop scheduling problems. In this paper, the literature on applying RL to solve JSPs is taken as the review object and analyzed in terms of RL methods, the number of agents, and the agent upgrade strategy. We discuss three major issues faced by RL methods for solving JSPs: the curse of dimensionality, generalizability, and training time. The interconnectedness of these three issues is revealed and the main factors affecting them are identified. By discussing current solutions to these issues as well as other open challenges, we offer suggestions for addressing them and propose future research trends.
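The curse of dimensionality the review highlights arises because tabular RL must enumerate scheduling states. As a minimal sketch of the Q-learning loop such papers build on, consider an agent choosing between two standard dispatching rules (SPT/LPT) on a toy single-machine instance; the instance, state encoding, and reward scheme are illustrative assumptions, not taken from any reviewed paper:

```python
import random

random.seed(0)
JOBS = [3, 1, 2]           # processing times of a toy instance (assumed)
ACTIONS = ["SPT", "LPT"]   # dispatching rules the agent chooses between
Q = {}                     # Q[(frozenset of remaining jobs, action)]

def step(remaining, action):
    """Apply one dispatching decision; reward = negative flow-time cost."""
    job = min(remaining) if action == "SPT" else max(remaining)
    # scheduling `job` now delays every remaining job, so its flow-time
    # contribution is job * len(remaining)
    reward = -job * len(remaining)
    return remaining - {job}, reward

def qlearn(episodes=500, alpha=0.5, gamma=1.0, eps=0.2):
    for _ in range(episodes):
        s = frozenset(JOBS)
        while s:
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
            s2, r = step(set(s), a)
            s2 = frozenset(s2)
            best_next = max((Q.get((s2, x), 0.0) for x in ACTIONS), default=0.0)
            Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (r + gamma * best_next)
            s = s2

qlearn()
start = frozenset(JOBS)
best = max(ACTIONS, key=lambda a: Q.get((start, a), 0.0))
print(best)  # SPT minimises total flow time on a single machine
```

The state here is the set of remaining jobs, which is exactly why the table explodes on realistic JSPs: the reviewed deep-RL methods replace this table with learned features.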
Title: A literature review of reinforcement learning methods applied to job-shop scheduling problems. Computers & Operations Research, Volume 175, Article 106929.
Pub Date: 2024-11-26 | DOI: 10.1016/j.cor.2024.106895
Zihan Quan , Yankui Liu , Aixia Chen
This study addresses the sustainable medical waste location and transportation (SMWLT) problem from the viewpoints of social risk, environmental impact, and economic performance, where model uncertainty includes risk and transportation costs. In practice, it is usually hard to obtain the exact probability distribution of uncertain parameters. To address this challenge, this study first constructs an ambiguity set to model the partial distribution information of the uncertain parameters. Based on this ambiguity set, the study develops a new multi-objective distributionally robust chance-constrained (DRCC) model for the SMWLT problem. Subsequently, the robust counterpart (RC) approximation method is adopted to reformulate the proposed DRCC model as a computationally tractable mixed-integer linear programming (MILP) model. Furthermore, an accelerated Benders decomposition (BD) method enhanced by valid inequalities is designed to solve the resulting MILP model, significantly improving solution efficiency compared with the classical BD algorithm and the CPLEX solver. Finally, a practical case in Chongqing, China, is presented to illustrate the effectiveness of the DRCC model and the accelerated BD solution method.
Title: An accelerated Benders decomposition method for distributionally robust sustainable medical waste location and transportation problem. Computers & Operations Research, Volume 175, Article 106895.
Pub Date: 2024-11-26 | DOI: 10.1016/j.cor.2024.106916
Jelena Tasić, Zorica Dražić, Zorica Stanimirović
This paper considers the conditional p-next center problem (CPNCP) and proposes a metaheuristic method as a solution approach. The p-next center problem (PNCP) is an extension of the classical p-center problem that captures real-life situations in which centers suddenly fail due to an accident or some other problem. When a center failure happens, the customers allocated to the closed center are redirected to the center closest to the closed one, called the backup center. On the other hand, when a service network expands, some of the existing centers are usually retained and a number of new centers are opened. The conditional p-next center problem involves both of these practical aspects and, to the best of our knowledge, has not been considered in the literature so far. Since the CPNCP is NP-hard, a metaheuristic algorithm based on Variable Neighborhood Search (VNS) is developed. The proposed VNS includes an efficient implementation of the Fast Interchange heuristic, which enables it to tackle real-life problem dimensions. Exhaustive computational experiments were performed on modified PNCP test instances from the literature with up to 900 nodes. The obtained results are compared with those of the exact solver CPLEX. It is shown that the proposed VNS reaches optimal solutions or improves the feasible ones provided by CPLEX in significantly shorter CPU time. The VNS also quickly returns its best solutions in cases where CPLEX fails to provide a feasible one. To investigate the effects of the two different approaches to service network planning, the VNS solutions of the CPNCP are compared with the optimal or best-known solutions of the p-next center problem. In addition, the computational study includes direct comparisons of the results obtained when the proposed SVNS is applied to the PNCP (by setting the number of existing centers to 0) with the results of recent solution methods proposed for the PNCP.
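The interplay of shaking and local search in a VNS can be sketched generically. Below is a minimal skeleton on a toy p-center-style instance (three clusters on a line); the instance, the neighborhood limit, and the plain best-improvement 1-interchange standing in for the paper's Fast Interchange are all illustrative assumptions:

```python
import random

random.seed(1)
# Toy p-center-style instance: pick P centers among candidate sites to
# minimise the maximum client-to-nearest-center distance. (Assumed data.)
CLIENTS = [0, 1, 2, 8, 9, 10, 20, 21, 22]
SITES = CLIENTS[:]          # centers may open at client locations
P, K_MAX = 3, 3

def cost(centers):
    return max(min(abs(c - f) for f in centers) for c in CLIENTS)

def shake(centers, k):
    """Neighborhood N_k: swap k open centers for random closed sites."""
    centers = list(centers)
    for _ in range(k):
        i = random.randrange(len(centers))
        closed = [s for s in SITES if s not in centers]
        centers[i] = random.choice(closed)
    return centers

def local_search(centers):
    """Best-improvement 1-interchange (simplified stand-in for Fast Interchange)."""
    improved = True
    while improved:
        improved = False
        for i in range(len(centers)):
            for s in SITES:
                if s in centers:
                    continue
                cand = centers[:i] + [s] + centers[i + 1:]
                if cost(cand) < cost(centers):
                    centers, improved = cand, True
    return centers

def vns(iters=50):
    best = random.sample(SITES, P)
    for _ in range(iters):
        k = 1
        while k <= K_MAX:
            cand = local_search(shake(best, k))
            if cost(cand) < cost(best):
                best, k = cand, 1   # move and restart from the first neighborhood
            else:
                k += 1              # escalate to a larger neighborhood
    return best

best = vns()
print(cost(best))  # instance optimum is 1 (one center per cluster)
```

The CPNCP version would additionally fix the retained centers and evaluate backup assignments inside `cost`; those details are omitted here.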
Title: A VNS method for the conditional p-next center problem. Computers & Operations Research, Volume 175, Article 106916.
Pub Date: 2024-11-26 | DOI: 10.1016/j.cor.2024.106917
Zhen Zhang , Zhuolin Li , Wenyu Yu
Deriving a representative model using value function-based methods from the perspective of preference disaggregation has emerged as a prominent and growing topic in multi-criteria sorting (MCS) problems. A noteworthy observation is that many existing approaches to learning a representative model for MCS problems assume the monotonicity of criteria, which may not always align with the complexities found in real-world MCS scenarios. Consequently, this paper proposes approaches to learning a representative model for MCS problems with non-monotonic criteria through the integration of the threshold-based value-driven sorting procedure. To do so, we first define transformation functions that map the marginal values and category thresholds into a UTA-like functional space. Subsequently, we construct constraint sets to model non-monotonic criteria in MCS problems and develop optimization models to check and rectify inconsistency in the decision maker's assignment example preference information. By simultaneously considering the complexity and discriminative power of the models, two distinct lexicographic optimization-based approaches are developed to derive a representative model for MCS problems with non-monotonic criteria. Finally, we offer an illustrative example and conduct comprehensive simulation experiments to demonstrate the feasibility and validity of the proposed approaches.
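The threshold-based value-driven sorting procedure the paper integrates assigns each alternative to a category by comparing its global value with category thresholds. A minimal sketch, with illustrative marginal value functions (the second deliberately non-monotonic) and made-up thresholds:

```python
# Sketch of threshold-based value-driven sorting: an alternative's global
# value is the sum of per-criterion marginal values, and it is assigned to
# the highest category whose lower threshold it meets. The marginal
# functions and thresholds below are illustrative assumptions.

def global_value(alternative, marginals):
    return sum(m(x) for m, x in zip(marginals, alternative))

def assign_category(value, thresholds):
    """thresholds: ascending lower bounds of categories C0..Ck."""
    category = 0
    for k, t in enumerate(thresholds):
        if value >= t:
            category = k
    return category

marginals = [
    lambda x: 0.5 * x,             # monotonically increasing criterion
    lambda x: 1.0 - abs(x - 0.5),  # non-monotonic: peaks at x = 0.5
]
v = global_value((0.8, 0.6), marginals)      # 0.4 + 0.9 = 1.3
print(assign_category(v, [0.0, 0.6, 1.2]))   # assigned to category 2
```

The paper's contribution sits upstream of this rule: learning the (possibly non-monotonic) marginal functions and thresholds from assignment examples via lexicographic optimization.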
Title: Lexicographic optimization-based approaches to learning a representative model for multi-criteria sorting with non-monotonic criteria. Computers & Operations Research, Volume 175, Article 106917.
Pub Date: 2024-11-26 | DOI: 10.1016/j.cor.2024.106918
Cristiano Arbex Valle
Portfolio optimisation is essential in quantitative investing, but its implementation faces several practical difficulties. One particular challenge is converting optimal portfolio weights into real-life trades in the presence of realistic features, such as transaction costs and integral lots. This is especially important in automated trading, where the entire process happens without human intervention.
Several works in the literature have extended portfolio optimisation models to account for these features. In this paper, we highlight and illustrate the difficulties faced when applying the existing literature in a practical setting, such as computational intractability, numerical imprecision, and modelling trade-offs. We then propose a two-stage framework as an alternative approach: optimise portfolio weights in the first stage and generate realistic trades in the second. Through extensive computational experiments, we show that our approach not only mitigates the difficulties discussed above but can also be successfully employed in a realistic scenario.
By splitting the problem in two, we are able to incorporate new features without adding too much complexity to any single model. With this in mind, we model two novel features that are critical to many investment strategies: first, we integrate two classes of assets, futures contracts and equities, into a single framework, with an example illustrating how this can help portfolio managers enhance investment strategies. Second, we account for borrowing costs in short positions, which have so far been neglected in the literature but which significantly impact profits in long/short strategies. Even with these new features, our two-stage approach still effectively converts optimal portfolios into actionable trades.
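The second-stage idea, turning target weights into integral-lot trades, can be sketched in a few lines. The prices, lot sizes, and the naive per-asset rounding rule below are illustrative assumptions, not the paper's model (which optimises the conversion jointly):

```python
# Sketch of a second stage: round capital * weight to whole lots per asset.
# All instance data are made up for illustration.

def weights_to_lots(weights, prices, lot_sizes, capital):
    """Convert target portfolio weights into an integral number of lots."""
    trades = {}
    for asset, w in weights.items():
        lot_value = prices[asset] * lot_sizes[asset]  # cash value of one lot
        trades[asset] = round(capital * w / lot_value)
    return trades

weights = {"AAA": 0.6, "BBB": 0.4}
prices = {"AAA": 50.0, "BBB": 20.0}
lots = {"AAA": 100, "BBB": 100}
trades = weights_to_lots(weights, prices, lots, capital=100_000)
print(trades)  # {'AAA': 12, 'BBB': 20}
```

Naive rounding like this is exactly what the paper argues against doing in isolation: it ignores transaction costs and cross-asset cash constraints, which is why the second stage is posed as its own optimisation problem.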
Title: Portfolio optimisation: Bridging the gap between theory and practice. Computers & Operations Research, Volume 175, Article 106918.
Pub Date: 2024-11-26 | DOI: 10.1016/j.cor.2024.106919
Haonan Song , Junqing Li , Zhaosheng Du , Xin Yu , Ying Xu , Zhixin Zheng , Jiake Li
In practical industrial production, workers are often critical resources in manufacturing systems. However, few studies have considered the level of worker fatigue when assigning resources and arranging tasks, which has a negative impact on productivity. To fill this gap, the distributed hybrid flow shop scheduling problem with dual-resource constraints considering worker fatigue (DHFSPW) is introduced in this study. Given the complexity and diversity of distributed manufacturing and the multi-objective nature of the problem, a Q-learning driven multi-objective evolutionary algorithm (QMOEA) is proposed to simultaneously optimize the makespan and total energy consumption of the DHFSPW. In QMOEA, solutions are represented by a four-dimensional vector, and a decoding heuristic that accounts for real-time worker productivity is proposed. Additionally, three problem-specific initialization heuristics are developed to enhance convergence and diversity. Moreover, encoding-based crossover, mirror crossover, and balanced mutation methods are presented to improve the algorithm's exploitation capabilities. Furthermore, a Q-learning based local search is employed to explore promising nondominated solutions across different dimensions. Finally, the QMOEA is assessed on a set of randomly generated instances, and a detailed comparison with state-of-the-art algorithms demonstrates its efficiency and robustness.
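The Q-learning ingredient of such hybrids can be illustrated with a stateless variant that learns which local-search operator to favour from the improvement it yields. The operator names, their toy gain distributions, and the reward definition are illustrative assumptions, not QMOEA's actual design:

```python
import random

random.seed(2)
# Stateless Q-learning for operator selection inside an evolutionary loop:
# reward = fitness improvement produced by the chosen operator.
OPERATORS = ["swap", "insert", "reverse"]   # hypothetical operators
Q = {op: 0.0 for op in OPERATORS}
ALPHA, EPS = 0.3, 0.1

def apply_operator(op, fitness):
    """Stand-in for a real local search: each operator has a different
    average improvement on this toy landscape (minimisation)."""
    gain = {"swap": 1.0, "insert": 2.0, "reverse": 0.5}[op]
    return fitness - gain * random.random()

fitness = 100.0
for _ in range(300):
    op = random.choice(OPERATORS) if random.random() < EPS else max(Q, key=Q.get)
    new_fitness = apply_operator(op, fitness)
    reward = fitness - new_fitness        # improvement as reward
    Q[op] += ALPHA * (reward - Q[op])     # exponential moving-average update
    fitness = new_fitness

print(max(Q, key=Q.get))
```

In a full MOEA the state would additionally encode search progress (e.g. stagnation, front spread) so the learned policy can switch operators over time.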
Title: A Q-learning driven multi-objective evolutionary algorithm for worker fatigue dual-resource-constrained distributed hybrid flow shop. Computers & Operations Research, Volume 175, Article 106919.
Pub Date: 2024-11-22 | DOI: 10.1016/j.cor.2024.106905
Moritz Rettinger , Stefan Minner , Jenny Birzl
Hedging against price increases is particularly important in times of significant market uncertainty and price volatility. For commodity-procuring firms, futures contracts are a widespread means of financially hedging price risks. Recently, digital data-driven decision-support approaches have been developed, with deep learning-based methods achieving outstanding results in capturing non-linear relationships between external features and price trends. Digital procurement systems leverage the targeted purchasing decisions of these optimization models, yet the decision mechanisms are opaque. We employ a prescriptive deep-learning approach that models hedging decisions as a multi-label time series classification problem. We backtest performance in two evaluation periods, i.e., 2018–2020 and 2021–2023, for natural gas, crude oil, nickel, and copper. The approach performs well in the first evaluation period of the testing framework yet fails to capture market disruptions (pandemic, geopolitical developments) in the second, yielding significant hedging losses or degenerating into a simple hand-to-mouth procurement policy. We employ explainable artificial intelligence to analyze performance in both periods. The results show that the models cannot handle market regime switches and disruptive events within the considered feature set.
Title: Understand your decision rather than your model prescription: Towards explainable deep learning approaches for commodity procurement. Computers & Operations Research, Volume 175, Article 106905.
Pub Date: 2024-11-21 | DOI: 10.1016/j.cor.2024.106915
Shuai Wu , Enze Liu , Rui Cao , Qiang Bai
Flights are vulnerable to unforeseen factors, such as adverse weather, airport flow control, crew absence, unexpected aircraft maintenance, and pandemics, all of which can cause disruptions in flight schedules. Consequently, managers need to reallocate relevant resources to ensure that the airport can return to normal operations at minimum cost, a challenge known as the airline recovery problem. Airline recovery is an active research area, with many publications in recent years. To provide a comprehensive overview of airline recovery, keywords are first selected to search for relevant studies, and the existing studies are then analyzed in terms of the number of papers, keywords, and sources. The study then analyzes passenger-oriented airline recovery problems, covering both traditional and novel recovery strategies. A detailed exploration of novel recovery strategies is conducted to uncover new insights and potential solutions for airline recovery problems. Furthermore, this study investigates recovery strategies for cargo-oriented airline operations, comparing them with those designed for passenger-oriented airline recovery to offer insights for future studies. Finally, conclusions are drawn and future research directions are provided. For future studies, it is recommended to conduct more in-depth work on dynamic and real-time recovery, incorporate human factors into the modeling, couple multi-modal transportation, optimize other airport processes, combine robust scheduling with airline recovery, address the stochasticity of parameters, and improve optimization algorithms.
Title: Airline recovery problem under disruptions: A review. Computers & Operations Research, Volume 175, Article 106915.
Pub Date: 2024-11-19 | DOI: 10.1016/j.cor.2024.106913
Weiqiao Wang , Kai Yang , Lixing Yang , Ziyou Gao , Jianjun Dong , Haifeng Zhang
Social donations have played a crucial role in providing effective emergency relief and need to be particularly valued and used wisely. In this study, we address a Wasserstein distributionally robust emergency relief network design problem with demand uncertainty, taking social donations into account. Specifically, we first formulate the problem as a two-stage stochastic programming model that requires the probability distribution of the uncertain demand to be completely known in advance, where the first stage decides on the location and pre-positioning of resources, and the second stage optimizes the delivery volumes of the reserved and donated supplies offered by social organizations and individuals. As the probability distribution of the demand cannot be known precisely (i.e., it is ambiguous) in reality, we further extend the stochastic model to a Wasserstein distributionally robust optimization model, in which the ambiguous demand is captured by a Wasserstein ambiguity set. Theoretically, we derive tractable deterministic reformulations of the proposed distributionally robust optimization model under the Type-∞ and Type-1 Wasserstein metrics. To solve the extensive reformulations, we design a decomposition scheme based on the Benders decomposition framework, adopting aggregated multiple cuts, cut-loop stabilization at the root node, and stabilized k-opt local branching acceleration strategies. Finally, we carry out numerical experiments to illustrate the computational advantage of the proposed solution method over single-acceleration implementations on hypothetical instances, and demonstrate the superiority of the proposed modeling approach compared with traditional stochastic programming and robust optimization models in a real case study. The results show that the distributionally robust optimization approach achieves better trade-offs between cost and risk.
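For intuition on why Type-1 Wasserstein models admit tractable reformulations: for an L-Lipschitz loss on the real line, the worst-case expectation over a radius-ρ Type-1 ball has the closed form "empirical expectation + ρL", attained by shifting every sample by ρ. A tiny numerical illustration with loss(x) = x (so L = 1); the samples and radius are made up:

```python
# Worst case over a Type-1 Wasserstein ball of radius rho, for the
# 1-Lipschitz loss f(x) = x: sup E_Q[f] = E_emp[f] + rho * L, and the
# supremum is attained by the empirical distribution shifted by rho.

samples = [2.0, 3.0, 7.0]
rho = 0.5
empirical = sum(samples) / len(samples)            # 4.0
worst_case = empirical + rho * 1.0                 # closed form with L = 1
attained = sum(x + rho for x in samples) / len(samples)  # shifted samples
print(worst_case, attained)
```

The paper's reformulations handle the far harder multivariate, constrained case, but the same duality-based trade-off between radius and conservatism drives the cost-risk results reported above.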
{"title":"A decomposition scheme for Wasserstein distributionally robust emergency relief network design under demand uncertainty and social donations","authors":"Weiqiao Wang , Kai Yang , Lixing Yang , Ziyou Gao , Jianjun Dong , Haifeng Zhang","doi":"10.1016/j.cor.2024.106913","DOIUrl":"10.1016/j.cor.2024.106913","url":null,"abstract":"<div><div>Social donations have played a crucial role in providing effective emergency relief and need to be particularly valued and used wisely. In this study, we address a Wasserstein distributionally robust emergency relief network design problem with demand uncertainty by taking into account the social donations. Specifically, we first formulate the problem into a two-stage stochastic programming model that requires the probability distribution information of the uncertain demand is completely known in advance, where the first stage decides on the location and pre-positioning of resources, and the second stage optimizes the delivery volume of the reserved and donated supplies offered by social organizations and individual. As the probability distribution of the demand cannot be known precisely (i.e., ambiguous) in reality, we further extend the stochastic model to a Wasserstein distributionally robust optimization model, in which the ambiguous demand is captured by the Wasserstein ambiguity set. Theoretically, we derive the tractable deterministic reformulations of the proposed distributionally robust optimization model under Type-<span><math><mi>∞</mi></math></span> and Type-1 Wasserstein metrics. To solve the extensive reformulations, we design a decomposition scheme on the basis of the Benders decomposition framework by adopting aggregated multiple cuts, cut-loop stabilization at root node and stabilized k-opt local branching acceleration strategies. Finally, we carry out numerical experiments to illustrate the computational advantage of the proposed solution method over the single acceleration implementation on hypothetical instances, and demonstrate the superiority of the proposed modeling approach compared with the traditional stochastic programming and robust optimization models on a real case study. The results show that the distributionally robust optimization approach used better trade-offs between cost and risk.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"175 ","pages":"Article 106913"},"PeriodicalIF":4.1,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142701939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-18DOI: 10.1016/j.cor.2024.106912
FengLian Yuan , Bo Huang , JianYong Lv , MeiJi Cui
The design of the heuristic function in a Petri-net (PN)-based A* search significantly impacts search efficiency and schedule quality for automated manufacturing systems (AMSs). In Luo et al. (2015), two admissible heuristic functions were formulated for an A* search based on place-timed PNs to schedule AMSs. To broaden the application scenarios and enhance search efficiency, this paper proposes a new heuristic function whose calculation accounts for multiple resource acquisitions, weighted arcs, redundant resource units, and outdated resources, which are commonly encountered in practical AMSs but usually not considered. The proposed heuristic can handle generalized PNs, offering broader application scenarios than ordinary PNs. In addition, it is proven to be admissible and more informed than its counterparts, ensuring that the obtained schedules are optimal and making the timed PN-based A* search more efficient. To validate the efficacy and efficiency of the proposed method, several benchmark systems are tested.
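The admissibility property at the heart of this abstract can be illustrated with a minimal A* sketch on a toy weighted graph (hypothetical data, not a Petri-net reachability graph): as long as the heuristic never overestimates the true remaining cost, the first time the goal is expanded its cost is optimal.

```python
import heapq

# Minimal A* sketch (illustrative only): frontier entries are
# (f, g, node) with f = g + h(node). With an admissible h (a lower
# bound on the true cost-to-go), the goal's first expansion is optimal.

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbor, edge_cost); h: admissible heuristic."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}                       # cheapest known cost per node
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                          # optimal by admissibility of h
        if g > best_g.get(node, float("inf")):
            continue                          # stale frontier entry, skip
        for nxt, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt))
    return None                               # goal unreachable

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("g", 5)], "b": [("g", 1)]}
h = lambda n: {"s": 2, "a": 2, "b": 1, "g": 0}[n]  # lower bounds on cost-to-go
print(a_star(graph, h, "s", "g"))  # optimal path s->a->b->g costs 3
```

A more informed (i.e., larger but still admissible) heuristic, like the one the paper proposes, prunes more of the frontier while preserving this optimality guarantee.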
{"title":"Scheduling AMSs with generalized Petri nets and highly informed heuristic search","authors":"FengLian Yuan , Bo Huang , JianYong Lv , MeiJi Cui","doi":"10.1016/j.cor.2024.106912","DOIUrl":"10.1016/j.cor.2024.106912","url":null,"abstract":"<div><div>The design of the heuristic function in a Petri-net(PN)-based A<span><math><msup><mrow></mrow><mrow><mo>∗</mo></mrow></msup></math></span> search significantly impacts search efficiency and schedule quality for automated manufacturing systems (AMSs). In Luo et al. (2015), two admissible heuristic functions were formulated for an A<span><math><msup><mrow></mrow><mrow><mo>∗</mo></mrow></msup></math></span> search based on place-timed PNs to schedule AMSs. To broaden its application scenarios and enhance search efficiency, this paper proposes a new heuristic function whose calculations take account of multiple resource acquisitions, weighted arcs, redundant resource units, and outdated resources, which are commonly encountered in practical AMSs but usually not considered. The proposed one can deal with generalized PNs, offering broader application scenarios than ordinary PNs. In addition, it is proven to be admissible and more informed than its counterparts, ensuring that the obtained schedules are optimal and making the timed PN-based A<span><math><msup><mrow></mrow><mrow><mo>∗</mo></mrow></msup></math></span> search more efficient. To validate the efficacy and efficiency of the proposed method, several benchmark systems are tested.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"175 ","pages":"Article 106912"},"PeriodicalIF":4.1,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142701937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}