Pub Date: 2025-12-01 | Epub Date: 2025-06-04 | DOI: 10.1016/j.orp.2025.100343
Dung-Ying Lin, Che-Hao Chen
This research investigates the integrated process planning and scheduling (IPPS) problem that considers process planning and production scheduling simultaneously with the aim of minimizing makespan. To solve the IPPS problem, we propose a branch-and-price (B&P) solution strategy that decomposes the problem according to the Dantzig-Wolfe principle and searches for integer solutions with a branch-and-bound framework. The decomposed master problem solves the scheduling problem and determines the corresponding timing information. The subproblem finds the optimal processing route and machine assignment based on the pricing information passed from the master problem. One of the critical features of the decomposition strategy is that the resulting subproblem can be reduced to a shortest path problem and can be solved with a proposed linear time algorithm. Numerical results show that the proposed B&P solution strategy can effectively and efficiently solve benchmark problem instances. Managerial insights are drawn based on the numerical results and sensitivity analysis to demonstrate the practical use of the proposed framework.
Title: A branch-and-price solution strategy for integrated process planning and scheduling problems (Operations Research Perspectives, vol. 15, Article 100343)
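The linear-time solvability of the pricing subproblem comes from its shortest-path structure. On an acyclic route graph, a shortest path can be found in O(V+E) by relaxing arcs in topological order; the sketch below illustrates that generic idea (the graph, weights, and node ordering are assumptions for illustration, not the paper's actual route network or algorithm):

```python
from collections import defaultdict

def dag_shortest_path(n, edges, source):
    """Shortest-path distances on a DAG whose nodes 0..n-1 are already in
    topological order; edges = [(u, v, w)]. Runs in O(V + E)."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [float("inf")] * n
    dist[source] = 0.0
    for u in range(n):            # relax outgoing arcs in topological order
        if dist[u] < float("inf"):
            for v, w in adj[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    return dist

# Example with hypothetical arc costs (e.g. derived from pricing duals):
dist = dag_shortest_path(4, [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0), (2, 3, 2.0)], 0)
# dist[3] == 5.0, via the route 0 -> 1 -> 2 -> 3
```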
Pub Date: 2025-12-01 | Epub Date: 2025-07-15 | DOI: 10.1016/j.orp.2025.100350
Kuo-Ching Ying , Pourya Pourhejazy , Wei-Jie Zhou
Learning takes time, and hence its effects should be considered in short-term production planning (i.e., scheduling). This is especially true when human involvement is high and the shop floor experiences changes in workflow, workforce, or technology. The Single-Machine Scheduling Problem (SMSP) with the learning effect is considered to explore this interplay. The study first proves that the shortest-processing-time scheduling rule solves the underlying mathematical problems. Pseudo-polynomial solution algorithms based on Dynamic Programming (DP) are developed to solve the SMSPs with learning effects and job rejection to minimize the maximum completion time (makespan), total completion time, and total tardiness, separately. We found that the algorithms tend to reject a small number of orders with longer production times and retain more of those with shorter production times when the objective is to minimize the average response time for new orders. This is contrary to situations in which the system’s resource utilization or the delays in fulfilling demand are to be minimized. The study also found that orders requiring longer processing times should be scheduled later to improve all three performance metrics under higher learning rates. Finally, we establish that all three extended problems are solvable in pseudo-polynomial time, with complexities of O(n²E) for makespan and total completion time minimization, and O(n²PE) for total tardiness minimization. The DP algorithms efficiently solve practical-sized instances, as validated by numerical experiments.
Title: The interplay between learning effect and order acceptance in production planning (Operations Research Perspectives, vol. 15, Article 100350)
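The shortest-processing-time (SPT) rule invoked above is the classical result that sequencing jobs in nondecreasing processing time minimizes total completion time on a single machine. A minimal sketch of the plain rule follows; the learning effects and job rejection studied in the paper are deliberately omitted, and the job data are illustrative:

```python
def spt_total_completion_time(processing_times):
    """Sequence jobs by nondecreasing processing time (SPT) and return the
    resulting total completion time on a single machine."""
    total, clock = 0, 0
    for p in sorted(processing_times):
        clock += p          # this job completes at the running clock
        total += clock      # accumulate its completion time
    return total

# Example: times [3, 1, 2] -> SPT order [1, 2, 3], completions 1, 3, 6
total = spt_total_completion_time([3, 1, 2])
# total == 10
```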
Pub Date: 2025-12-01 | Epub Date: 2025-09-02 | DOI: 10.1016/j.orp.2025.100352
Jorge A. Huertas, Pascal Van Hentenryck
In serial batch (s-batch) scheduling, jobs are grouped in batches and processed sequentially within their batch. This paper considers multiple parallel machines, nonidentical job weights and release times, and sequence-dependent setup times between batches of different families. Although s-batch scheduling has been widely studied in the literature, very few papers have taken into account a minimum batch size, which is typical in practical settings such as semiconductor manufacturing and the metal industry. The problem with this minimum batch size requirement has mostly been tackled with dynamic programming and meta-heuristics, and no article has used constraint programming (CP) to do so. This paper fills this gap by proposing three CP models for s-batching with minimum batch size: (i) an Interval Assignment model that computes and bounds the size of the batches using the presence literals of the jobs’ interval variables; (ii) a Global model that exclusively uses global constraints that track the size of the batches over time; and (iii) a Hybrid model that combines the benefits of the extra global constraints with the efficiency of the sum-of-presences constraints to ensure the minimum batch sizes. Computational experiments on standard cases compare the three CP models with two existing mixed-integer programming (MIP) models from the literature. The results demonstrate the versatility of the proposed CP models in handling multiple variations of s-batching, and their ability to produce better solutions than the MIP models, faster, on large instances.
Title: Constraint programming models for serial batch scheduling with minimum batch size (Operations Research Perspectives, vol. 15, Article 100352)
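The minimum-batch-size requirement itself is easy to state outside any CP model: every formed batch must hold at least b_min jobs of one family. The greedy sketch below, which is only an illustration of the constraint and not one of the paper's three CP models, packs a family's jobs into feasible batches and merges a short remainder into the previous batch:

```python
def pack_batches(jobs, b_min):
    """Split `jobs` (all of one family) into batches each holding >= b_min
    jobs; an undersized final remainder is merged into the previous batch."""
    if len(jobs) < b_min:
        raise ValueError("not enough jobs to form one feasible batch")
    batches = [jobs[i:i + b_min] for i in range(0, len(jobs), b_min)]
    if len(batches[-1]) < b_min:          # tail too small: merge it backwards
        batches[-2].extend(batches.pop())
    return batches

# Example: 7 jobs with b_min = 3 -> batch sizes [3, 4]
sizes = [len(b) for b in pack_batches(list(range(7)), 3)]
# sizes == [3, 4]
```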
Pub Date: 2025-12-01 | Epub Date: 2025-09-19 | DOI: 10.1016/j.orp.2025.100355
Vo Thanh Nha , Kyungjin Park , Hyeonae Jang , Gyu M. Lee , Tuan-Ho Le , Seong Hoon Jeong , Sangmun Shin
Experimental design and robust design (RD) methodologies have received attention from researchers seeking to improve the performance of many different quality characteristics and to solve problems at low cost. However, there is room for improvement in simultaneously solving interdisciplinary optimization problems associated with time-oriented, multiple, and hierarchical responses. This paper proposes a new RD modeling and optimization algorithm for drug development based on three research motivations. First, customized experiments and estimation frameworks are proposed for representing pharmaceutical quality characteristics (i.e., time-oriented, multiple, and hierarchical responses) and the functional relationships between input factors and hierarchical time-oriented output responses. Second, new hierarchical time-oriented robust design (HTRD) optimization models (i.e., priority-based, weight-based, and integrated models) are developed for these interdisciplinary pharmaceutical formulation problems. Finally, a pharmaceutical case study on drug formulation development is conducted for demonstration purposes. Based on the case study results, the proposed approach can provide optimal solutions with significantly small biases and variances.
Title: Development of a robust design optimization algorithm for hierarchical time series pharmaceutical problems (Operations Research Perspectives, vol. 15, Article 100355)
Pub Date: 2025-12-01 | Epub Date: 2025-10-10 | DOI: 10.1016/j.orp.2025.100356
Mohammad Sadeghi, Saeed Yaghoubi
The occurrence of sequential droughts and various forms of water shortage globally underscores the urgent need for sustainable water management solutions. In this context, cloud seeding has gained attention for its potential to enhance precipitation, yet its effectiveness is often uncertain due to complex cloud microphysics and atmospheric conditions. Acknowledging this inherent uncertainty, we employ a two-stage stochastic framework integrating strategic decisions (facility location and network design) and operational realizations (seeding planning according to storm trajectories). Our model also considers the fuzzy nature of the seeding parameters. Above all, we develop a Markov chain procedure to mathematically model the prediction of the expected increase in precipitation across cloud seeding decision-making processes. The integration of these stochastic methods into existing deterministic models from the literature results in a multi-objective Mixed-Integer Linear Programming (MILP) model designed to maximize rain probability and coverage while minimizing system-wide costs. To enhance the scalability and efficiency of the model, valid inequalities are developed to reduce the domain of the binary variables. Additionally, a Lagrangian relaxation technique is proposed, yielding exact optimal solutions within reasonable timeframes and facilitating the handling of continuous-space instances. Finally, a real-world case study in Iran demonstrates significant enhancements in precipitation predictions, with the Markov chain procedure showing an average 55 % increase in expected rain probability based on optimized seeding decisions. Scenario-based stochastic programming yields an 11.7 % value of the stochastic solution and a 16.5 % expected value of perfect information for cloud seeding initiatives.
Title: Cloud seeding optimization under uncertainty: A Markov chain approach in a two-stage fuzzy-stochastic framework (Operations Research Perspectives, vol. 15, Article 100356)
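The core computation behind a Markov chain prediction procedure is propagating a state distribution through a transition matrix. The two-state sketch below (0 = no rain, 1 = rain) is an illustrative assumption, not the paper's calibrated weather model:

```python
import numpy as np

def state_distribution(P, pi0, k):
    """Propagate the initial distribution pi0 through k steps of the chain
    with row-stochastic transition matrix P; returns the distribution pi_k."""
    pi = np.asarray(pi0, dtype=float)
    for _ in range(k):
        pi = pi @ P
    return pi

P = np.array([[0.8, 0.2],     # P[i, j] = Pr(next state j | current state i)
              [0.4, 0.6]])
pi = state_distribution(P, [1.0, 0.0], 3)
# pi[1] is the probability of rain after three transitions (≈ 0.312 here)
```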
Pub Date: 2025-12-01 | Epub Date: 2025-11-01 | DOI: 10.1016/j.orp.2025.100361
Li Zhang, Jianqin Zhou, Xufeng Yang
To ensure the timely supply of relief materials at low cost, many countries have adopted the Joint Government and Enterprises Storage (JGES) mode to preposition relief materials, in which some enterprises replace the government in stockpiling emergency supplies for disasters. A critical problem faced by the enterprise is how to manage its inventory considering both its daily business demand and possible emergency demand. The government also wants to know the performance of the mode and how to subsidize the enterprise. To address these questions, we first consider the single-period problem and formulate it as a newsvendor-type model. We obtain the optimality conditions and analyze the impacts of some parameters on the optimal policy. Furthermore, we consider the multi-period case and the government’s optimal subsidy for the enterprise. For the former, we show that the optimal inventory policy is still the base-stock policy if the fixed ordering cost is zero, and is the (s, S) policy if the cost is positive. The government’s subsidy to the firm first increases and then decreases as the occurrence probability of the emergency increases. Finally, we conduct numerical experiments to compare the performance of the mode with that of the Separate Government-Enterprise Storage (SGES) mode, to demonstrate its advantages and the impacts of some parameters on its performance.
Title: Inventory prepositioning of relief material under the Joint Government-Enterprise Storage mode (Operations Research Perspectives, vol. 15, Article 100361)
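An (s, S) policy, as named in the result above, orders up to level S whenever inventory is reviewed at or below the reorder point s. The simulation sketch below shows the mechanics only; the parameters and demand stream are illustrative, not the paper's calibrated values:

```python
def simulate_sS(s, S, start, demands):
    """Simulate an (s, S) policy over a demand sequence; return end-of-period
    inventory levels. Review happens at the start of each period."""
    level, history = start, []
    for d in demands:
        if level <= s:            # at or below reorder point: order up to S
            level = S
        level -= d                # demand realized (may go negative: backorders)
        history.append(level)
    return history

# Example: s = 2, S = 10, start at 6, demands [3, 2, 4]
# period 1: 6 > 2, no order, 6 - 3 = 3; period 2: 3 > 2, 3 - 2 = 1;
# period 3: 1 <= 2, order up to 10, 10 - 4 = 6 -> history == [3, 1, 6]
```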
Pub Date: 2025-12-01 | Epub Date: 2025-11-03 | DOI: 10.1016/j.orp.2025.100362
Julius Hoffmann , Janis S. Neufeld , Udo Buscher
The recent scheduling literature has studied the so-called customer order scheduling problem. In this class of scheduling problems, there are multiple customer orders, each consisting of several jobs. An order finishes and is ready to be shipped when its last job finishes. In this paper, we consider the customer order scheduling problem in a permutation flow shop environment with m machines. There are n orders, and each order has o jobs. The objective is to minimize the total completion time of the orders. We present multiple problem properties and a MINLP formulation of the problem. Furthermore, four heuristics that follow the Iterated Greedy Algorithm are developed. In a computational experiment, we evaluate the practicability of the four heuristics. They produce good results in short computation times when compared with the MINLP solution from a solver. Afterwards, we compare the four heuristics with each other for different problem sizes.
Title: Customer order scheduling in a permutation flow shop environment (Operations Research Perspectives, vol. 15, Article 100362)
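Completion times in a permutation flow shop follow the classical recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][j]; an order then completes when its last job leaves the final machine. The sketch below evaluates that recurrence for a fixed job sequence (the data are illustrative and the order-grouping layer of the paper is omitted):

```python
def flow_shop_completions(p):
    """p[i][j] = processing time of the i-th job in the sequence on machine j.
    Returns each job's completion time on the last machine."""
    n, m = len(p), len(p[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            prev_job = C[i - 1][j] if i > 0 else 0     # machine j becomes free
            prev_mach = C[i][j - 1] if j > 0 else 0    # job i leaves machine j-1
            C[i][j] = max(prev_job, prev_mach) + p[i][j]
    return [row[-1] for row in C]

# Example: two jobs on two machines
comps = flow_shop_completions([[3, 2], [2, 4]])
# job 0 finishes at 3 + 2 = 5; job 1 waits for machine 2 until 5, finishes at 9
# comps == [5, 9]
```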
Pub Date: 2025-12-01 | Epub Date: 2025-07-13 | DOI: 10.1016/j.orp.2025.100348
Spyros Giannelos, Danny Pudjianto, Goran Strbac
This paper investigates the optimal day-ahead operation of a building-scale energy hub equipped with photovoltaics and a battery. Electricity demand and PV availability are uncertain and are represented in two ways: (i) thin-tailed normal distributions and (ii) kernel density estimation (KDE) fitted to empirical CityLearn data. For each representation we evaluate (a) a deterministic Monte Carlo analysis, where the hub is optimised separately for 1,000 daily scenarios, and (b) a two-stage stochastic optimisation that fixes one set of decisions for hours 0–11 and adapts for hours 12–23 after conditions are observed. Gaussian inputs yield tightly clustered costs (mean = $51.6, σ = $0.2) and a 99 % CVaR below $52, suggesting negligible risk. KDE inputs raise the Monte Carlo mean to $80.6 and lift the 99 % CVaR to $114, exposing heavy-tailed risk. Within the stochastic model, the identical first-stage policy costs $79.0 with Gaussian data but only $71.3 with KDE, as recourse exploits sunny scenarios and trims the 95 % CVaR from $106.4 to $93.5. Consequently, Gaussian assumptions obscure true operating costs and financial exposure, whereas incorporating empirically derived KDE uncertainty within stochastic optimisation both lowers the average cost and provides stronger protection against extreme cost outcomes.
Title: Smart home economic operation under uncertainty: Comparing Monte Carlo and stochastic optimization using Gaussian and KDE-based data (Operations Research Perspectives, vol. 15, Article 100348)
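The CVaR figures above are tail averages: the α-level CVaR of a cost sample is the mean of its worst (1 − α) fraction of scenarios. A minimal empirical sketch (with an illustrative sample, not the paper's scenario costs):

```python
import numpy as np

def cvar(costs, alpha):
    """Empirical CVaR: mean of the worst (1 - alpha) fraction of costs,
    e.g. alpha = 0.95 averages the worst 5 % of scenarios."""
    costs = np.sort(np.asarray(costs, dtype=float))
    k = int(np.ceil((1 - alpha) * len(costs)))   # number of tail scenarios
    return costs[-k:].mean()

# Example: ten scenario costs with one heavy-tail outcome, alpha = 0.9
c = cvar([50, 51, 52, 50, 49, 53, 80, 51, 50, 52], 0.9)
# c == 80.0  (mean of the single worst scenario)
```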
Pub Date: 2025-12-01 | Epub Date: 2025-11-21 | DOI: 10.1016/j.orp.2025.100368
Joonrak Kim , Seunghoon Lee
This study develops a robust optimization framework for closed-loop supply chain (CLSC) planning that explicitly accounts for uncertainty in the quality of recycled and remanufactured inputs. While such materials are critical for sustainability, their variable quality poses risks to production feasibility and supply reliability. To address this challenge, we propose an ordering-proportion-based robust model that distributes uncertainty across sourcing proportions and leverages the Bertsimas–Sim budget of uncertainty to balance conservatism and flexibility. A reformulation ensures tractability and preserves robust feasibility. Computational experiments demonstrate that the proposed model reduces shortages and stabilizes performance under independently realized uncertainties, while quantity-based robust models are more effective when uncertainties are correlated. Additional scalability tests confirm that the model remains computationally tractable for medium-sized networks. The findings highlight practical implications for managers, showing how proportion-based sourcing improves resilience, supports reliable demand fulfillment, and strengthens sustainability in CLSCs facing quality risks.
{"title":"Robust optimization model for closed-loop supply chain planning with collected material quality uncertainty","authors":"Joonrak Kim , Seunghoon Lee","doi":"10.1016/j.orp.2025.100368","DOIUrl":"10.1016/j.orp.2025.100368","url":null,"abstract":"<div><div>This study develops a robust optimization framework for closed-loop supply chain (CLSC) planning that explicitly accounts for uncertainty in the quality of recycled and remanufactured inputs. While such materials are critical for sustainability, their variable quality poses risks to production feasibility and supply reliability. To address this challenge, we propose an ordering-proportion-based robust model that distributes uncertainty across sourcing proportions and leverages the Bertsimas–Sim budget of uncertainty to balance conservatism and flexibility. A reformulation ensures tractability and preserves robust feasibility. Computational experiments demonstrate that the proposed model reduces shortages and stabilizes performance under independently realized uncertainties, while quantity-based robust models are more effective when uncertainties are correlated. Additional scalability tests confirm that the model remains computationally tractable for medium-sized networks. The findings highlight practical implications for managers, showing how proportion-based sourcing improves resilience, supports reliable demand fulfillment, and strengthens sustainability in CLSCs facing quality risks.</div></div>","PeriodicalId":38055,"journal":{"name":"Operations Research Perspectives","volume":"15 ","pages":"Article 100368"},"PeriodicalIF":3.7,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145623418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
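The Bertsimas–Sim budget of uncertainty named in the abstract caps how many uncertain coefficients can deviate from their nominal values at once. The paper's proportion-based model is far richer, but for a fixed decision the worst-case "protection" term of a single robust constraint has a simple closed form, sketched below with hypothetical numbers (the function names and data are illustrative, not the authors' formulation):

```python
# Bertsimas-Sim protection term for one robust constraint a.x + beta(x, Gamma) <= b.
# For a fixed decision x, the adversary realizes the Gamma largest deviation
# contributions d_i * |x_i|; a fractional Gamma adds a proportional share of
# the next-largest contribution.

def protection(x, dev, gamma):
    """Worst-case extra constraint load under a budget-of-uncertainty gamma."""
    contrib = sorted((d * abs(xi) for d, xi in zip(dev, x)), reverse=True)
    whole = int(gamma)
    beta = sum(contrib[:whole])
    frac = gamma - whole
    if frac > 0 and whole < len(contrib):
        beta += frac * contrib[whole]
    return beta

def robust_feasible(a, x, dev, gamma, rhs):
    """Check the nominal load plus the protection term against the right-hand side."""
    nominal = sum(ai * xi for ai, xi in zip(a, x))
    return nominal + protection(x, dev, gamma) <= rhs

# Toy sourcing plan: three suppliers with nominal yields a and quality deviations dev.
a, dev, x = [1.0, 1.0, 1.0], [2.0, 1.0, 3.0], [1.0, 1.0, 1.0]
print(protection(x, dev, 2))                   # two worst deviations: 3 + 2 = 5.0
print(robust_feasible(a, x, dev, 2, 10.0))     # 3 + 5 <= 10 -> True
```

Raising `gamma` toward the number of uncertain coefficients recovers the fully conservative Soyster-style bound, while `gamma = 0` ignores uncertainty entirely; the budget interpolates between the two, which is the conservatism/flexibility trade-off the abstract refers to.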
Pub Date : 2025-12-01Epub Date: 2025-09-16DOI: 10.1016/j.orp.2025.100354
Sina Sayardoost Tabrizi, Saeed Yousefi, Keikhosro Yakideh
The efficiency of supply chains is essential for improving managerial decision-making and enhancing strategic planning capabilities. This research presents a novel integration of deep learning with a two-stage supply chain framework to assess the efficiency of 28 petrochemical units over a period of 90 months. Based on sustainability principles, a dynamic network data envelopment analysis (DEA) model is employed to measure and compare the relative efficiency of supply chains operating across different time horizons. To forecast future input–output relationships in the supply chain, an advanced two-layer Long Short-Term Memory (LSTM) model is proposed. This LSTM-based prediction system demonstrated exceptional accuracy, achieving a low Mean Squared Error (MSE) of 0.0004 and a Root Mean Square Error (RMSE) of 0.0208. Additionally, the trend of the loss function during training confirmed the reliability and stability of the proposed deep learning approach. The precise forecasting capability of the LSTM model enables managers to proactively identify and address inefficiencies in production facilities before they occur, rather than relying on reactive strategies. This proactive approach allows for better resource allocation and improved operational performance across petrochemical supply chains. By integrating deep learning with dynamic network DEA models, this study offers a robust framework for predictive efficiency analysis and performance evaluation in industrial applications. The suggested framework provides decision-makers with a pragmatic assessment instrument to identify efficient and underperforming supply chains and set realistic benchmarks for improvement. This methodology is designed to be scalable and adaptable, making it suitable for real-world evaluations of multi-stage supply chains and production systems. The research culminates in a two-phase case study, illustrating the practical applicability of the proposed framework.
{"title":"Forecasting efficiency of two-stage Petrochemical sustainable supply chains using Deep Learning and DNDEA Model","authors":"Sina Sayardoost Tabrizi , Saeed Yousefi , Keikhosro Yakideh","doi":"10.1016/j.orp.2025.100354","DOIUrl":"10.1016/j.orp.2025.100354","url":null,"abstract":"<div><div>The efficiency of supply chains is essential for improving managerial decision-making and enhancing strategic planning capabilities. This research presents a novel integration of deep learning with a two-stage supply chain framework to assess the efficiency of 28 petrochemical units over a period of 90 months. Based on sustainability principles, a dynamic network data envelopment analysis (DEA) model is employed to measure and compare the relative efficiency of supply chains operating across different time horizons. To forecast future input–output relationships in the supply chain, an advanced two-layer Long Short-Term Memory (LSTM) model is proposed. This LSTM-based prediction system demonstrated exceptional accuracy, achieving a low Mean Squared Error (MSE) of 0.0004 and a Root Mean Square Error (RMSE) of 0.0208. Additionally, the trend of the loss function during training confirmed the reliability and stability of the proposed deep learning approach. The precise forecasting capability of the LSTM model enables managers to proactively identify and address inefficiencies in production facilities before they occur, rather than relying on reactive strategies. This proactive approach allows for better resource allocation and improved operational performance across petrochemical supply chains. By integrating deep learning with dynamic network DEA models, this study offers a robust framework for predictive efficiency analysis and performance evaluation in industrial applications. The suggested framework provides decision-makers with a pragmatic assessment instrument to identify efficient and underperforming supply chains and set realistic benchmarks for improvement. This methodology is designed to be scalable and adaptable, making it suitable for real-world evaluations of multi-stage supply chains and production systems. The research culminates in a two-phase case study, illustrating the practical applicability of the proposed framework.</div></div>","PeriodicalId":38055,"journal":{"name":"Operations Research Perspectives","volume":"15 ","pages":"Article 100354"},"PeriodicalIF":3.7,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145117677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
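DEA scores each decision-making unit against the best observed input-to-output performance. The dynamic network DEA used in the study above is considerably richer, but the single-input, single-output CCR special case reduces to a ratio comparison and conveys the core idea; the sketch below uses hypothetical numbers and is not the paper's model:

```python
# CCR-style DEA efficiency in the single-input, single-output special case:
# each unit's output/input ratio is compared to the best ratio observed,
# so frontier units score 1.0 and the rest score a fraction of it.

def dea_efficiency(inputs, outputs):
    """Relative efficiency of each decision-making unit; 1.0 marks the frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical petrochemical units (input: feedstock used, output: product made).
inputs, outputs = [2.0, 4.0, 3.0], [4.0, 6.0, 6.0]
print(dea_efficiency(inputs, outputs))  # [1.0, 0.75, 1.0]
```

With multiple inputs and outputs the score instead comes from solving a small linear program per unit (optimal weights on inputs and outputs), and the dynamic network variant further links stages across periods; the ratio view above is only the degenerate one-dimensional case.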