Pub Date : 2025-11-10 | DOI: 10.1016/j.cor.2025.107319
Zheng Wang , Huiran Liu , Xiaojun Fan
This paper addresses a multi-scenario multi-mode resource-constrained project scheduling problem with the goal of minimizing both the makespan and the cost of the project. To visualize how modes and precedence relationships change over the course of a project, a dynamic activity-mode network graph is introduced. Based on this network, a deep reinforcement learning model built on a dynamic heterogeneous graph neural network is designed, and 12 solution models are obtained by training it with the proximal policy optimization algorithm. The convergence of the model is verified on benchmark instances from the Project Scheduling Problem Library. In addition, 360 instances reflecting the characteristics of the studied problem are generated by reproducing the algorithm used to generate the benchmark instances. These instances are solved with the 12 solution models and 9 additional comparison algorithms, and a sensitivity analysis is conducted on the configuration parameters of the problem. The results validate the effectiveness, stability, and generalization ability of the proposed learning model, and demonstrate that it can provide a robustly better solution model and scheduling scheme according to actual demands.
Title: A novel learning model with dynamic heterogeneous graph network for uncertain multimode resource-constrained project scheduling problem
Computers & Operations Research, Volume 186, Article 107319.
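For reference, the proximal policy optimization algorithm used to train the model maximizes the standard clipped surrogate objective (this is the textbook PPO form, not a detail reported in the abstract):

\[
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},
\]

where \(\hat{A}_t\) is the advantage estimate and \(\epsilon\) the clipping parameter; the clip term keeps each policy update close to the previous policy.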
Accurate travel time forecasting for China–Europe Express (CRE) trains in the international section has become a significant challenge for railway practitioners and academics, with even mainstream deep forest (DF) models and their variants encountering unresolved technical issues. This paper introduces a novel dual-mechanism DF regression model (DMDFR) that enhances both the predictive performance and the interpretability of the DF model for predicting travel times of CRE trains in the international section. The proposed DMDFR model incorporates a dual mechanism consisting of an internal and an external mechanism. The internal mechanism addresses the problem of uneven dataset partitioning by adjusting the importance of each sub-forest during cross-validation, while a more interpretable and transparent external mechanism is embedded within the DF framework to tackle technical issues related to error transfer. In addition, the information transfer process adopts an incremental approach to minimize the loss of internally represented information and improve the interpretability of the model. The DMDFR model deconstructs the gray-box computational principle of DF and develops its algorithms through a straightforward, explanatory computational process. Through example analysis, we demonstrate the superiority of the DMDFR model across various statistical metrics. Given the rapid advancement of deep learning, the significant improvements achieved by the DMDFR underscore the importance of research on interpretable deep learning algorithms.
Title: International travel time prediction for China–Europe Express trains via interpretable deep learning models
Authors: Jingwei Guo, Xiang Guo, Zhen-Song Chen, Witold Pedrycz
Pub Date : 2025-11-10 | DOI: 10.1016/j.cor.2025.107330
Computers & Operations Research, Volume 187, Article 107330.
We present a simple dual-driven methodology for generating infeasible path elimination constraints within branch-and-cut algorithms for vehicle routing problems that incorporate idle times and arrival-time consistency requirements. By leveraging dual information from a feasibility-checking subproblem, the approach systematically identifies the combinatorial sources of infeasibility and uses them to generate and strengthen valid inequalities. We apply the method to the Consistent Traveling Salesperson Problem with idling, which enforces temporal consistency across multiple service days while allowing idle time between tasks. This problem, defined by basic routing and synchronization constraints, serves as an ideal case study to demonstrate the method’s effectiveness. Computational experiments on a benchmark set of 756 instances, based on multi-period extensions of classical TSPLIB datasets, show that the approach solves 536 instances to proven optimality, including cases with up to 100 customers and a five-day planning horizon, all within a two-hour time limit.
Title: Dual-driven path elimination for vehicle routing with idle times and arrival-time consistency
Authors: Jorge Riera-Ledesma, Inmaculada Rodríguez-Martín, Hipólito Hernández-Pérez
Pub Date : 2025-11-10 | DOI: 10.1016/j.cor.2025.107326
Computers & Operations Research, Volume 186, Article 107326.
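As background, infeasible path elimination constraints in branch-and-cut routing models classically take the following textbook form (the paper's dual-driven identification and strengthening of such cuts is not shown here): given a path \(P = (v_1, \dots, v_k)\) proved infeasible, e.g. by the feasibility-checking subproblem,

\[
\sum_{i=1}^{k-1} x_{v_i v_{i+1}} \;\le\; k - 2,
\]

where \(x_{uv} \in \{0,1\}\) indicates that arc \((u,v)\) is used. The inequality forbids using all \(k-1\) arcs of \(P\) simultaneously; dual information then guides which paths to cut off and how to strengthen the resulting inequalities.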
This study is motivated by industrial applications in manufacturing, where companies face complex production planning problems encompassing location, production mix, and lot sizing decisions under uncertain demand. Historically, manufacturing companies have often relied on deterministic optimization models, addressing uncertain demand with average values derived from forecasting techniques. Our investigation reveals that the deterministic model handles normally distributed stochastic demand effectively. However, we observe that deterministic models struggle with sustained demand shifts, such as those triggered by mergers, acquisitions, or unforeseen events like the global pandemic. Hence, it becomes imperative for manufacturing companies to adopt stochastic models for effective decision-making. While these models provide invaluable insights, their complexity presents hurdles for traditional methods such as branch-and-cut. In response, our study introduces a machine learning (ML)-empowered Benders decomposition method, augmented with novel inequalities and an ML-empowered Benders reformulation. Our computational experiments demonstrate the significant cost savings attainable through the proposed methodology.
Title: Benders decomposition for stochastic facility location and production planning
Authors: Tao Wu, Jingwen Zhang, Canrong Zhang, Zhe Liang, Xiaoning Zhang
Pub Date : 2025-11-10 | DOI: 10.1016/j.cor.2025.107316
Computers & Operations Research, Volume 186, Article 107316.
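For orientation, the textbook Benders feasibility cut that such decompositions generate (the paper's ML-empowered variant augments this basic scheme): with master variables \(y\) and a subproblem \(\min\{\, c^{\top} x : A x \ge b - B y,\; x \ge 0 \,\}\), Farkas' lemma states that the subproblem is infeasible exactly when some ray \(u\) of the cone \(\{\, u \ge 0 : u^{\top} A \le 0 \,\}\) satisfies \(u^{\top}(b - B y) > 0\). Each such ray yields the cut added to the master problem:

\[
u^{\top} (b - B y) \;\le\; 0,
\]

which removes the master solutions \(y\) that this ray certifies as infeasible, while leaving every feasible \(y\) intact.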
Pub Date : 2025-11-10 | DOI: 10.1016/j.cor.2025.107327
Franklin A. Krukoski , Arinei C.L. Silva , Carise E. Schmidt
In this paper, we extend the classical Windy Rural Postman Problem on undirected graphs by incorporating time-dependent traffic conditions and direction-sensitive traversal costs to reflect realistic scenarios. We present a mixed-integer programming formulation that models the problem with discretized travel-time intervals and prohibits vehicle stops during the tour. We develop two variants of an adaptive meta-heuristic, each enhanced with specialized local-search operators, and embed a time-dependent shortest-path algorithm to handle time discretization. To generate high-quality initial solutions, we adopt a constructive heuristic and use it as a warm start for both the mathematical formulation and the adaptive meta-heuristic. We evaluate the proposed approaches on instances generated from real traffic data. Our computational results show that the mathematical formulation, when solved by a commercial solver, can prove optimality for small instances, while both meta-heuristic variants consistently produce high-quality solutions to this challenging problem. Our findings also reveal that assuming constant travel times systematically underestimates routing costs and produces suboptimal tour plans under realistic congestion patterns.
Title: Time-dependent Windy Rural Postman Problem: Mathematical formulation and adaptive metaheuristic
Computers & Operations Research, Volume 186, Article 107327.
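The time-dependent shortest-path component mentioned above can be sketched as a label-setting search over departure times. This is a minimal illustration assuming FIFO travel times and piecewise-constant travel-time functions; the paper's algorithm also handles the no-stop requirement, which this toy omits, and all names here are hypothetical:

```python
import heapq

def td_shortest_path(arcs, source, target, t0):
    """Earliest arrival at `target` when departing `source` at time t0.
    `arcs` maps (u, v) -> travel-time function tau(departure_time)."""
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for (a, b), tau in arcs.items():
            if a != u:
                continue
            arr = t + tau(t)  # FIFO assumption: departing later never helps
            if arr < best.get(b, float("inf")):
                best[b] = arr
                heapq.heappush(pq, (arr, b))
    return float("inf")

# Toy network: congestion on arc (0, 1) clears at time 5.
arcs = {
    (0, 1): lambda t: 2 if t < 5 else 1,
    (1, 2): lambda t: 3,
    (0, 2): lambda t: 10,
}
```

Under the FIFO assumption the usual Dijkstra correctness argument carries over, which is why a single non-decreasing priority queue suffices despite the time-varying costs.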
Pub Date : 2025-11-05 | DOI: 10.1016/j.cor.2025.107328
Bruno Mombello , Fernando Olsina , Rolando Pringles
Market dynamics are shaped by uncertainty and competitive rivalry. In such environments, firms commit capital-intensive, irreversible investments, while preserving flexibility in terms of timing, scale and sequencing. The interaction of these factors makes investment decision-making exceedingly complex. The Real Option Games (ROG) framework addresses this complexity by combining real options analysis with game theory, offering a robust, structured approach for evaluating strategic investment decisions under oligopolistic competition. This study provides a comprehensive and systematic review of ROG literature, with a focus on non-cooperative preemption and war-of-attrition models. The review classifies the most recurrent modeling approaches, solution methods, and thematic emphases, and identifies theoretical and practical gaps that constrain further development, highlighting areas requiring deeper exploration. A comprehensive citation and critical content analysis was conducted on more than 230 scholarly works published between 1991 and 2025. Each work was classified according to its game and temporal structure, agent symmetry, strategic options, and informational setting. A detailed taxonomy is proposed to organize ROG models and address the lack of methodological classification in existing literature. The dominant modeling frameworks and topics are thoroughly mapped alongside the underexplored areas. The synthesis of methodological advances and persistent theoretical challenges suggests the agenda for future research. This review serves as a consolidated reference for researchers and practitioners, as ROG insights improve strategic capital budgeting decisions in capital-intensive sectors with investment irreversibility and strategic rivalry, such as energy, infrastructure, and high-tech industries. These insights refine investment timing triggers, optimal scaling decisions, and competitive response strategies.
Title: Investments under strategic competition and uncertainty: A literature review on real option games
Computers & Operations Research, Volume 186, Article 107328.
Pub Date : 2025-11-01 | DOI: 10.1016/j.cor.2025.107324
Qin Guo , Ya Qiu , Yong Shi , Qiaoxian Zheng , Haixiang Guo
Emergency supplies dispatch is a key component of emergency management. Most existing studies on emergency supplies dispatch focus on truck transportation. Inspired by the successful application of drones in military operations and commercial logistics, this study proposes an emergency supplies dispatch model based on truck-drone-dispatch vehicle collaborative delivery, a further extension of the vehicle routing problem with drones (VRPD). Compared to the classical VRPD, the model in this paper incorporates dispatch vehicles to transport drones, making it more suitable for disaster scenarios. To solve the proposed model, an enhanced dual-population non-dominated sorting genetic algorithm-II is developed from the traditional NSGA-II. The algorithm incorporates a series of local search operators tailored to the model’s characteristics to enhance its local search capability, and the selection operator is improved to increase population diversity. Finally, an elite population is generated from the Pareto front of the original population and searches in its vicinity, improving the convergence accuracy of the algorithm. To verify the proposed model and algorithm, we randomly generate a set of instances. On small-size instances, we employ the ϵ-constraint method to validate the model’s effectiveness and demonstrate its advantages over the standard VRPD in emergency logistics scenarios. On medium- and large-size instances, we analyze the effects of three key enhancement strategies on the algorithm’s performance. Finally, the proposed model and algorithm are applied to a real-world case of the Wenchuan earthquake, showcasing their practicality and applicability.
Title: An enhanced dual-population NSGA-II for solving multi-objective emergency supplies dispatch problem based on truck-drone-dispatch vehicle collaboration
Computers & Operations Research, Volume 187, Article 107324.
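Non-dominated sorting, which underlies NSGA-II, rests on Pareto dominance. A minimal sketch of extracting the first (Pareto) front for minimization objectives such as the two considered here, illustrative only and not the paper's enhanced dual-population algorithm:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front: points no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

NSGA-II repeats this peeling to rank the whole population into fronts; generating an elite population from the first front, as the abstract describes, concentrates search near the current trade-off surface.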
Pub Date : 2025-10-31 | DOI: 10.1016/j.cor.2025.107303
Serkan Turhan, Fatma Gzara, Samir Elhedhli
Flight dispatchers are responsible for flight planning prior to departure and flight monitoring en route. Their work involves multi-tasking, and their workload is dynamic. We study such a problem under two nonlinear workload balancing measures: minimum peak workload and minimum absolute deviation. To solve practical instances efficiently, we use decomposition through Lagrangian relaxation to reduce the problem into easier-to-solve subproblems and prove that the Lagrangian lower bound has a closed-form expression for the peak workload objective. To find feasible solutions, we develop a Focus-Search-and-Improve heuristic with a genetic algorithm core, in which parts of the feasible solution set are explored and searched by a genetic algorithm and solutions are further fine-tuned by an improvement heuristic. To test the efficiency of the proposed approach, we generated 231 instances based on 2019 U.S. Bureau of Transportation flight data involving 17 different carriers and up to 3968 flights per instance. Numerical testing demonstrates the efficiency of the proposed approach: the Lagrangian lower bound is very tight, and the heuristic finds optimal solutions in 33.4% of the instances, with solutions on average 3.5% away from the Lagrangian lower bound. It also reveals that the difficulty of the problem increases for smaller workstation-to-flight ratios, and that the peak workload objective balances the workload at the times when peaks occur but does not necessarily balance it throughout the workday. The absolute deviation objective, on the other hand, achieves better balance between workstations at the expense of a slight increase in peak workload.
Title: Workload balancing for flight dispatchers
Computers & Operations Research, Volume 186, Article 107303.
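For intuition about the minimum peak workload objective, a classic LPT-style greedy (not the paper's Lagrangian/genetic approach, and stripped of the dynamic, multi-tasking aspects of dispatcher work) assigns each workload, largest first, to the currently least-loaded workstation:

```python
import heapq

def greedy_min_peak(workloads, m):
    """Assign workloads to m workstations, largest first, each going to
    the least-loaded station; return the resulting peak workload."""
    stations = [0.0] * m        # current load of each workstation
    heapq.heapify(stations)
    for w in sorted(workloads, reverse=True):
        least = heapq.heappop(stations)
        heapq.heappush(stations, least + w)
    return max(stations)
```

This simple rule is the standard baseline for makespan-type balancing (Graham's LPT bound guarantees it stays within 4/3 of the optimal peak on identical machines), which is part of why tight lower bounds such as the paper's Lagrangian bound are needed to certify solution quality.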
Pub Date : 2025-10-31 | DOI: 10.1016/j.cor.2025.107325
İbrahim Oğuz Çetinkaya , İ. Esra Büyüktahtakın , Parshin Shojaee , Chandan K. Reddy
Our study contributes to the scheduling and combinatorial optimization literature with new heuristics discovered by leveraging the power of Large Language Models (LLMs). We focus on the single-machine total tardiness (SMTT) problem, which aims to minimize total tardiness by sequencing n jobs on a single processor without preemption, given processing times and due dates. We develop and benchmark two novel LLM-discovered heuristics, the EDD Challenger (EDDC) and MDD Challenger (MDDC), inspired by the well-known Earliest Due Date (EDD) and Modified Due Date (MDD) rules. In contrast to prior studies that employed simpler rule-based heuristics, we evaluate our LLM-discovered algorithms using rigorous criteria, including optimality gaps and solution times derived from a mixed-integer programming (MIP) formulation of SMTT. We compare their performance against state-of-the-art heuristics and exact methods across various job sizes (20, 100, 200, and 500 jobs). For instances with more than 100 jobs, exact methods such as MIP and dynamic programming become computationally intractable. Up to 500 jobs, EDDC improves upon the classic EDD rule and another widely used algorithm in the literature. MDDC consistently outperforms traditional heuristics and remains competitive with exact approaches, particularly on larger and more complex instances. This study shows that human-LLM collaboration can produce scalable, high-performing heuristics for NP-hard constrained combinatorial optimization, even under limited resources when effectively configured.
{"title":"Discovering heuristics with Large Language Models (LLMs) for mixed-integer programs: Single-machine scheduling","authors":"İbrahim Oğuz Çetinkaya , İ. Esra Büyüktahtakın , Parshin Shojaee , Chandan K. Reddy","doi":"10.1016/j.cor.2025.107325","DOIUrl":"10.1016/j.cor.2025.107325","url":null,"abstract":"<div><div>Our study contributes to the scheduling and combinatorial optimization literature with new heuristics discovered by leveraging the power of Large Language Models (LLMs). We focus on the single-machine total tardiness (SMTT) problem, which aims to minimize total tardiness by sequencing <span><math><mi>n</mi></math></span> jobs on a single processor without preemption, given processing times and due dates. We develop and benchmark two novel LLM-discovered heuristics, the EDD Challenger (EDDC) and MDD Challenger (MDDC), inspired by the well-known Earliest Due Date (EDD) and Modified Due Date (MDD) rules. In contrast to prior studies that employed simpler rule-based heuristics, we evaluate our LLM-discovered algorithms using rigorous criteria, including optimality gaps and solution time derived from a mixed-integer programming (MIP) formulation of SMTT. We compare their performance against state-of-the-art heuristics and exact methods across various job sizes (20, 100, 200, and 500 jobs). For instances with more than 100 jobs, exact methods such as MIP and dynamic programming become computationally intractable. Up to 500 jobs, EDDC improves upon the classic EDD rule and another widely used algorithm in the literature. MDDC consistently outperforms traditional heuristics and remains competitive with exact approaches, particularly on larger and more complex instances. 
This study shows that human-LLM collaboration can produce scalable, high-performing heuristics for NP-hard constrained combinatorial optimization, even under limited resources when effectively configured.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"186 ","pages":"Article 107325"},"PeriodicalIF":4.3,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145463570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
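The classic EDD and MDD rules that the abstract's LLM-discovered heuristics build on can be sketched as follows. This is a minimal illustration of the baseline rules only (not the EDDC/MDDC heuristics, whose details are not given here); the job data and function names are illustrative.

```python
def total_tardiness(sequence, p, d):
    """Total tardiness of a job sequence, given processing times p and due dates d."""
    t, tard = 0, 0
    for j in sequence:
        t += p[j]                 # completion time of job j
        tard += max(0, t - d[j])  # tardiness = lateness clipped at zero
    return tard

def edd(p, d):
    """Earliest Due Date: sequence jobs by non-decreasing due date (static rule)."""
    return sorted(range(len(p)), key=lambda j: d[j])

def mdd(p, d):
    """Modified Due Date: at each step, schedule the unscheduled job with the
    smallest modified due date max(t + p_j, d_j), where t is the current time."""
    remaining, seq, t = set(range(len(p))), [], 0
    while remaining:
        j = min(remaining, key=lambda k: max(t + p[k], d[k]))
        seq.append(j)
        remaining.remove(j)
        t += p[j]
    return seq

p = [4, 2, 6, 3]  # processing times (illustrative)
d = [5, 3, 8, 4]  # due dates (illustrative)
print(total_tardiness(edd(p, d), p, d))
print(total_tardiness(mdd(p, d), p, d))
```

MDD is dynamic: a job's priority changes as the clock advances, which is why it tends to beat static EDD on congested instances, and why the paper uses both as starting points.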
This paper addresses the Dynamic Sustainable Flexible Job Shop Scheduling Problem (DSFJSSP) by going beyond the traditionally emphasized economic dimension — such as makespan, flow time, or resource utilization — to include human and environmental factors, along with their related disruptions. Specifically, it considers human-related constraints such as workers’ skills and ergonomic risks, as well as environmental aspects like carbon emissions from operations. Additionally, the study investigates the impact of worker absences and variability in renewable energy availability. To solve this problem, a multi-objective non-linear integer programming model is developed and an improved Non-dominated Sorting Genetic Algorithm III (INSGA-III) is employed to generate the initial scheduling solutions. Three Machine Learning (ML)-based approaches — Q-Learning, Deep Learning, and Deep Q-Learning — are used to determine the most effective rescheduling strategy in response to disruptions. Results show that partial rescheduling maintains a good balance across all objectives and a close adherence to the initial schedule. The right shift strategy is efficient for minor disruptions, while total rescheduling, though potentially effective, is time-consuming and can significantly deviate from the original schedule. The comparison of the considered ML methods confirms that Deep Q-Learning (DQL) offers the best adaptability and solution quality for selecting optimal rescheduling strategies. These results underscore the importance of adaptive scheduling in enhancing the resilience and sustainability of dynamic flexible job shop systems.
{"title":"Machine learning-driven solutions for sustainable and dynamic flexible job shop scheduling under worker absences and renewable energy variability","authors":"Candice Destouet , Houda Tlahig , Belgacem Bettayeb , Bélahcène Mazari","doi":"10.1016/j.cor.2025.107323","DOIUrl":"10.1016/j.cor.2025.107323","url":null,"abstract":"<div><div>This paper addresses the Dynamic Sustainable Flexible Job Shop Scheduling Problem (DSFJSSP) by going beyond the traditionally emphasized economic dimension — such as makespan, flow time, or resource utilization — to include human and environmental factors, along with their related disruptions. Specifically, it considers human-related constraints such as workers’ skills and ergonomic risks, as well as environmental aspects like carbon emissions from operations. Additionally, the study investigates the impact of worker absences and variability in renewable energy availability. To solve this problem, a multi-objective non-linear integer programming model is developed and an improved Non-dominated Sorting Genetic Algorithm III (INSGA-III) is employed to generate the initial scheduling solutions. Three Machine Learning (ML)-based approaches — <em>Q-Learning, Deep Learning, and Deep Q-Learning</em> — are used to determine the most effective rescheduling strategy in response to disruptions. Results show that partial rescheduling maintains a good balance across all objectives and a close adherence to the initial schedule. The right shift strategy is efficient for minor disruptions, while total rescheduling, though potentially effective, is time-consuming and can significantly deviate from the original schedule. The comparison of the considered ML methods confirms that the <em>DQL</em> offers the best adaptability and solution quality for selecting optimal rescheduling strategies. 
These results underscore the importance of adaptive scheduling in enhancing the resilience and sustainability of dynamic flexible job shop systems.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"186 ","pages":"Article 107323"},"PeriodicalIF":4.3,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145413711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
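The strategy-selection idea in this abstract — learning which rescheduling response fits which disruption — can be illustrated with tabular Q-learning. This is a toy sketch, not the paper's model: the two disruption states, the reward table, and all names are invented placeholders that merely echo the abstract's reported finding (right shift for minor disruptions, partial rescheduling otherwise).

```python
import random

# Hypothetical action and state spaces (illustrative only).
ACTIONS = ["right_shift", "partial_reschedule", "total_reschedule"]
STATES = ["minor_disruption", "major_disruption"]

def simulate(state, action):
    """Toy deterministic reward trading off schedule quality against
    rescheduling effort; values are invented for illustration."""
    table = {
        ("minor_disruption", "right_shift"): 1.0,
        ("minor_disruption", "partial_reschedule"): 0.6,
        ("minor_disruption", "total_reschedule"): 0.1,
        ("major_disruption", "right_shift"): 0.1,
        ("major_disruption", "partial_reschedule"): 1.0,
        ("major_disruption", "total_reschedule"): 0.5,
    }
    return table[(state, action)]

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning over one-step episodes."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)  # a disruption occurs
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)                        # explore
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])      # exploit
        r = simulate(s, a)
        Q[(s, a)] += alpha * (r - Q[(s, a)])  # one-step update (no successor state)
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

A Deep Q-Learning variant, as compared in the paper, would replace the Q table with a neural network so the state can encode richer shop-floor features than a discrete label.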