Optimizing personnel allocation: An integer linear programming problem for enhanced workplace efficiency
Giuseppe Olivieri, Agostino Marcello Mangini, Maria Pia Fanti
Pub Date: 2025-12-03 | DOI: 10.1016/j.cie.2025.111724 | Computers & Industrial Engineering, vol. 212, Article 111724
In large organizations, the allocation of personnel within office spaces presents significant challenges, particularly with the adoption of modern working methodologies such as smart working, co-working, and agile working. This paper addresses the optimization of workspace assignments to balance occupancy levels while ensuring cohesion within organizational units and compliance with individual work schedules. The proposed approach incorporates constraints to prevent overcrowding, maintain consistent desk assignments, and enforce separation between specific personnel groups. A multi-objective Integer Linear Programming formulation is developed and validated through a real case study. Results demonstrate that the methodology effectively reduces peak occupancy imbalances and strengthens team cohesion, providing human resources departments with a practical decision-support tool that requires minimal technical expertise. The solution features an intuitive web interface that facilitates efficient space management in dynamic working environments.
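To make the occupancy-balancing objective concrete, here is a toy brute-force sketch (not the authors' ILP formulation; the employees, schedules, and room capacities below are invented for illustration). It enumerates every employee-to-room desk assignment, keeps only those that respect room capacities on every day, and minimizes the peak per-day head count in any room:

```python
from itertools import product

# Invented toy instance: each employee lists the weekdays they attend.
schedules = {
    "ana":   {"mon", "tue", "wed"},
    "bruno": {"mon", "tue"},
    "carla": {"wed", "thu"},
    "dario": {"mon", "thu"},
}
rooms = {"R1": 2, "R2": 2}          # room -> desk capacity
days = ["mon", "tue", "wed", "thu", "fri"]

def peak_occupancy(assignment):
    """Largest per-day head count in any room under a fixed desk assignment."""
    peak = 0
    for day in days:
        for room in rooms:
            occ = sum(1 for e, r in assignment.items()
                      if r == room and day in schedules[e])
            peak = max(peak, occ)
    return peak

def best_assignment():
    """Enumerate all employee->room maps; keep feasible ones with lowest peak."""
    best, best_peak = None, float("inf")
    for choice in product(rooms, repeat=len(schedules)):
        assignment = dict(zip(schedules, choice))
        # Capacity constraint: no room may exceed its desks on any day.
        feasible = all(
            sum(1 for e, r in assignment.items()
                if r == room and day in schedules[e]) <= cap
            for day in days for room, cap in rooms.items())
        if feasible and peak_occupancy(assignment) < best_peak:
            best, best_peak = assignment, peak_occupancy(assignment)
    return best, best_peak
```

An ILP solver replaces this enumeration in the paper's setting; the brute force only scales to a handful of employees but shows the constraint and objective structure.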
Deep learning approaches for weld defect detection: A comprehensive review of models, applications, and future directions
Berkay Eren
Pub Date: 2025-12-03 | DOI: 10.1016/j.cie.2025.111725 | Computers & Industrial Engineering, vol. 212, Article 111725
This review provides a structured analysis of AI-based techniques for weld defect detection, covering model architectures, industrial deployment, and future directions. Conventional machine learning approaches are first outlined, followed by a detailed comparison of modern deep learning models. Convolutional neural networks (CNNs) remain widely applied, achieving over 95% accuracy in classification and mean IoU around 85–91% in segmentation, though their performance is often dataset-dependent. Detection-oriented architectures, especially YOLO derivatives, stand out by combining high accuracy (mAP above 95%) with real-time inference, making them the most practical for industrial use. Attention-augmented and transformer hybrids further improve small-defect recognition and multimodal learning, reaching up to 99% precision, but their computational demand limits deployment. Meta-analytical synthesis highlights that CNN classifiers are robust but sensitive to data bias, segmentation networks are effective yet variable, YOLO-based detectors consistently provide the best accuracy–speed balance, and Transformer hybrids achieve the highest precision at greater cost. Lightweight models such as MobileNet, YOLOv7-tiny, and EfficientNet-lite, often enhanced by quantization or pruning, enable deployment on resource-constrained hardware like Jetson Nano and Raspberry Pi. The adoption of Explainable AI (XAI) tools is also growing, particularly in safety-critical contexts requiring interpretability. Remaining challenges include the lack of standardized evaluation protocols, limited use of multimodal fusion and self-supervised learning, and uncertain benefits of GAN-based synthetic data. Overall, this review emphasizes that the most suitable approach depends on balancing accuracy, robustness, efficiency, and deployment feasibility, offering guidance toward reliable industrial solutions.
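For readers unfamiliar with the segmentation metric cited above, Intersection-over-Union is straightforward to compute; a minimal sketch on invented binary masks (represented as sets of pixel coordinates):

```python
def iou(mask_a, mask_b):
    """Intersection-over-Union of two binary masks given as sets of pixel coords."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 1.0  # two empty masks: define IoU = 1

pred = {(0, 0), (0, 1), (1, 0), (1, 1)}    # predicted defect pixels (toy data)
truth = {(0, 1), (1, 1), (2, 1)}           # ground-truth defect pixels
# intersection has 2 pixels, union has 5 -> IoU = 0.4
```

Mean IoU, as reported in the reviewed segmentation studies, averages this score over classes or images.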
Fast initial response-based r-EWMA single control chart for joint monitoring of location and scale parameters with nonlinear multiple quality characteristics
Cang Wu, Min Luo, Dong Wang, Wenpo Huang, Lijun Shang, Shubin Si
Pub Date: 2025-12-02 | DOI: 10.1016/j.cie.2025.111719 | Computers & Industrial Engineering, vol. 212, Article 111719
Monitoring shifts in location and scale (L&S) parameters during production processes is crucial, and control charts serve as indispensable tools for this purpose. Control charts can be categorized into single-chart and two-chart schemes, with the former being more advantageous due to its simplicity and effectiveness in identifying changes. However, existing methods for monitoring unknown process distributions inadequately address simultaneous shifts in both L&S parameters. This paper presents the rank-based Exponentially Weighted Moving Average (r-EWMA) control chart, designed to monitor multiple processes without relying on the conventional premise of a multivariate normal distribution. This method combines rank-based statistics with local statistics derived from the k-nearest neighbors method and employs an EWMA control scheme. To assess the effectiveness of the proposed scheme, a Monte Carlo simulation has been executed and real-world case studies have been examined. The simulation results demonstrate that r-EWMA outperforms comparative control charts in terms of Median Run Length (MRL) when detecting out-of-control (OC) signals across various changes in non-normally distributed and nonlinear mixed distributions. Two case studies further validate the superiority of r-EWMA in handling shifts in L&S parameters under unknown process distributions, particularly when considering nonlinear correlations in multiple quality characteristics.
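The core idea of a rank-based EWMA chart can be sketched in a few lines (this is a simplified univariate illustration, not the authors' multivariate r-EWMA statistic; the smoothing constant and control limit below are invented for the toy data): each new observation is converted to its rank within an in-control reference sample, the centered rank is smoothed with an EWMA recursion, and a signal is raised when the smoothed statistic leaves the control band.

```python
def sequential_rank(x, reference):
    """Rank of x within the reference sample, scaled to (0, 1)."""
    below = sum(1 for v in reference if v <= x)
    return (below + 0.5) / (len(reference) + 1)

def rank_ewma(stream, reference, lam=0.2, limit=0.35):
    """EWMA of centered sequential ranks; index of first OC signal, or None."""
    z = 0.0
    for t, x in enumerate(stream):
        r = sequential_rank(x, reference) - 0.5   # center: in-control mean ~ 0
        z = lam * r + (1 - lam) * z               # EWMA recursion
        if abs(z) > limit:
            return t                              # out-of-control signal
    return None
```

Because only ranks enter the statistic, no distributional form is assumed for the process, which is what makes such charts attractive when the process distribution is unknown.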
Dynamic Location–Allocation–Routing scheduling for long-range maritime rescue operations using shipboard helicopters: A Q-learning-based hyper-heuristic approach
Xu Luo, Yong Liu, Jiawei Wu
Pub Date: 2025-12-01 | DOI: 10.1016/j.cie.2025.111722 | Computers & Industrial Engineering, vol. 212, Article 111722
The increasing frequency and complexity of maritime activities demand more efficient and responsive rescue operations. To address this need in long-range scenarios, this study develops a dynamic “Location-Allocation-Routing” model that coordinates multiple accident points and rescue centers using shipboard helicopters. A distinguishing feature of this model is its incorporation of several dynamic factors: the ongoing drift of accident locations, the psychological panic of personnel, and the emergence of unexpected tasks during operations. The model is structured around two primary objectives: to minimize the maximum time required for any single rescue and to reduce the overall psychological panic costs involved. This bi-objective approach serves to highlight and prioritize the most critical rescue tasks. To solve the model, a Q-learning-based hyper-heuristic algorithm framework is proposed. This algorithm is designed to effectively integrate global exploration with adaptive learning, enabling a robust response to evolving rescue scenarios. Furthermore, a dynamic path segmentation mechanism is embedded within the algorithm to enhance its flexibility. The model’s adaptability and the algorithm’s effectiveness are confirmed through extensive numerical experiments and case studies, providing a solid foundation of both theoretical and practical support for advanced oceanic emergency rescue systems.
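In a Q-learning hyper-heuristic, the agent learns which low-level heuristic to apply next, with the reward typically tied to the improvement it produces. A minimal single-state sketch of that selection loop (the learning rates, epsilon-greedy policy, and class name below are illustrative assumptions, not the authors' framework):

```python
import random

class QHeuristicSelector:
    """Minimal Q-learning selector over low-level heuristics (single-state sketch)."""
    def __init__(self, n_heuristics, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
        self.q = [0.0] * n_heuristics
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.eps:                 # explore
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, action, reward):
        # One-state Q-update: Q(a) += alpha * (reward + gamma * max Q - Q(a))
        target = reward + self.gamma * max(self.q)
        self.q[action] += self.alpha * (target - self.q[action])
```

In use, `reward` would be the objective improvement (e.g., reduction in maximum rescue time) obtained after applying the chosen heuristic, so heuristics that repeatedly help accumulate higher Q-values and are selected more often.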
Optimizing design refresh decisions for long life systems production
Chad Uhles, Hugh Medal, Michael Sherwin, Jessica Bean, Dallas Rosson, Kristi-Anna Stageberg, Seth Shuchat
Pub Date: 2025-12-01 | DOI: 10.1016/j.cie.2025.111723 | Computers & Industrial Engineering, vol. 212, Article 111723
Diminishing manufacturing sources and material shortages (DMSMS) is a growing problem for industries that rely on systems with a long life expectancy. Technological developments and economic factors often cause parts within the system to become obsolete, the effects of which must be mitigated. An under-utilized resolution approach is the proactive design refresh, which replaces system designs that use soon-to-be-obsolete parts with designs that do not. In this work, we propose a mixed-integer programming optimization model that minimizes the present value of costs of an obsolescence management plan for a system subject to part obsolescence, leveraging proactive design refreshes and balancing part inventories. We experimentally test the computational scalability of our model and compare its policies to existing models in the literature and a reactive approach used in practice. The computational scalability experiment shows that our model scales well with the size of the bill of materials. Likewise, the policy comparison experiment shows that our model produces solutions that are more cost-effective than other methodologies while being more flexible to different system production schedules. The main benefit of this work is our model’s ability to identify opportunities for significant obsolescence management cost avoidance, due to the detailed resolution schedules that are generated and the use of part inventories to inform those schedules. Furthermore, this work validates the notion that a properly leveraged proactive design refresh is indeed an effective way to resolve events of part obsolescence.
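The objective minimized here, the present value of an obsolescence management plan, is a standard discounting computation; a small sketch comparing a hypothetical reactive policy against a proactive refresh (the cash flows and discount rate are invented, not taken from the paper's case data):

```python
def present_value(cash_flows, rate=0.07):
    """Discount a list of (year, cost) pairs back to year 0."""
    return sum(cost / (1 + rate) ** year for year, cost in cash_flows)

# Invented example: a part goes obsolete in year 3.
reactive = [(3, 120_000), (6, 140_000)]   # two emergency lifetime buys
proactive = [(2, 90_000)]                  # one planned design refresh
```

In the paper's model, the MIP chooses refresh timings and inventory levels so that the discounted sum of all such costs is minimized; the sketch only shows why an earlier, cheaper planned refresh can dominate later reactive buys.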
Dynamic cloud manufacturing service composition based on runtime QoS prediction: A predictive process monitoring-based method
Reza Aalikhani, Mohammad Fathian, Mohammad Reza Rasouli, Rik Eshuis
Pub Date: 2025-12-01 | DOI: 10.1016/j.cie.2025.111701 | Computers & Industrial Engineering, vol. 212, Article 111701
In the context of Cloud Manufacturing (CMfg), Service Composition (SC) approaches are used to improve the delivery of flexible and personalized services. Dynamic SC necessitates adapting to changes in Quality of Service (QoS) in real-time. Current SC methods lack the ability to dynamically predict QoS through monitoring of providers’ capabilities and resources. This paper proposes a novel method for dynamic SC that addresses real-time QoS prediction for CMfg services by monitoring the related resources. For this purpose, a predictive process monitoring model is first proposed to predict the next sub-task of a CMfg process. Then, an adaptive service selection model is developed to predict the run-time QoS of relevant services that can fulfill the next sub-task. At runtime, the method dynamically selects the resource with the shortest predicted completion time for the SC. The proposed method was evaluated through a case study involving networked medical laboratories in Iran. Results demonstrate the method’s ability to accurately predict both subsequent process sub-tasks and overall process completion time during SC. Specifically, it achieved a next sub-task prediction precision exceeding 0.82 and produced feasible CMfg processes with over 72% feasibility. Furthermore, by accurately predicting the completion time of candidate resources, with an MAE below 7.5 minutes, the method proposed SCs that were over 94% similar to the best historical compositions. The core contribution of this research is the proposal of a dynamic SC method, incorporating adaptive service monitoring models to predict process completion times by considering real-time resource workloads within CMfg processes.
Impurity-based borderline SMOTE with affinity propagation for imbalanced data classification
R.J. Kuo, Kai-Wen Zheng, Ferani E. Zulvia, Timothy Kuo
Pub Date: 2025-11-30 | DOI: 10.1016/j.cie.2025.111720 | Computers & Industrial Engineering, vol. 212, Article 111720
The problem of imbalanced data classification is prevalent in many real-world applications, where certain classes contain significantly more instances than others. This imbalance negatively impacts classification performance, often leading to the misclassification of minority class instances. Data resampling techniques, such as oversampling and undersampling, offer a promising solution. However, conventional resampling approaches rely on local neighbor information to generate new instances in a linear manner, which can result in inaccurate and redundant samples.

Therefore, this study proposes a novel automatic clustering oversampling method to address the imbalanced data classification problem. First, an advanced clustering technique is used to improve the Affinity Propagation (AP) algorithm’s clustering quality and to identify clusters for oversampling using the Gini impurity index. This technique can automatically construct clusters and define the number of clusters. Second, an improved Borderline Synthetic Minority Over-sampling Technique (iB-SMOTE) differentiates between safe and risky minority samples, generating new data in both areas to reinforce class boundaries. By using different formulas to generate synthetic minority data, this algorithm strengthens the border between minority and majority classes while expanding the area of minority data without encroaching on the majority area.

Experimental results on 27 imbalanced datasets using six classifiers show that the proposed AP-iB-SMOTE algorithm significantly outperforms conventional methods in terms of F1-score and AUC metrics. Furthermore, statistical tests confirm the superiority of the AP-iB-SMOTE algorithm in effectively handling imbalanced data.
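The safe/risky distinction in borderline-style SMOTE can be sketched compactly (this is a generic illustration of the idea, not the authors' iB-SMOTE formulas; the step-size rule and all data are invented): a minority sample is flagged risky when many of its nearest neighbors belong to the majority class, and risky samples interpolate with a shorter step so synthetic points stay on the minority side of the boundary.

```python
import random

def neighbors(pt, data, k):
    """Indices of the k nearest points to pt (squared Euclidean, brute force)."""
    order = sorted(range(len(data)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(pt, data[i])))
    return order[:k]

def synthesize(minority, majority, k=3, seed=0):
    """One synthetic point per minority sample; risky (borderline) samples
    interpolate with a smaller step so they stay in the minority region."""
    rng = random.Random(seed)
    pool = minority + majority
    labels = [0] * len(minority) + [1] * len(majority)
    out = []
    for p in minority:
        nn = neighbors(p, pool, k)
        risky = sum(labels[i] for i in nn) >= k / 2    # many majority neighbors
        q = minority[rng.randrange(len(minority))]     # random minority partner
        step = rng.uniform(0, 0.5 if risky else 1.0)   # "different formulas" idea
        out.append(tuple(a + step * (b - a) for a, b in zip(p, q)))
    return out
```

Because every synthetic point is a convex combination of two minority samples, the generated data never leaves the minority samples' bounding region, which is the "without encroaching on the majority area" property the abstract describes.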
Optimizing sustainable production scheduling and routing in supply chains
Tanzila Azad, Humyun Fuad Rahman, Daryl Essam, Ripon K. Chakrabortty
Pub Date: 2025-11-29 | DOI: 10.1016/j.cie.2025.111712 | Computers & Industrial Engineering, vol. 212, Article 111712
This research addresses an integrated production scheduling and vehicle routing problem in a flexible job-shop-based manufacturing supply chain, with a focus on achieving both economic and environmental sustainability. A bi-objective mathematical model is developed to minimize total tardiness from delivery delays and CO₂ emissions from production and distribution operations. To solve this complex problem, we propose a hybrid non-dominated sorting genetic algorithm (HNSGA-II). The proposed approach is benchmarked against classical optimization methods using CPLEX, non-hybridized versions of NSGA-II and NSGA-III, and the algorithm of Yağmur and Kesen (2023) as a state-of-the-art approach. Performance comparisons on realistic problem instances reveal that HNSGA-II consistently provides higher-quality Pareto solutions, achieving better trade-offs between objectives within comparable runtimes. These findings demonstrate the proposed algorithm’s efficiency and applicability to integrated production and distribution optimization in sustainable supply chains.
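The Pareto-dominance relation underlying NSGA-II-style algorithms is simple to state in code; a minimal sketch for the bi-objective (tardiness, CO₂) setting above, with invented candidate schedules:

```python
def dominates(a, b):
    """a dominates b if no worse in every objective and better in at least one
    (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Invented (total tardiness, CO2 emissions) pairs for candidate schedules:
cands = [(10, 5.0), (8, 6.0), (12, 4.0), (9, 5.5), (10, 6.5)]
```

NSGA-II applies this relation repeatedly (fast non-dominated sorting plus crowding distance) to rank an entire population; the filter here only extracts the first front.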
Pub Date : 2025-11-29DOI: 10.1016/j.cie.2025.111692
Maria Meneses, Daniel Santos, Ana Barbosa-Póvoa
The timely and efficient availability of blood products is essential in the Blood Supply Chain. Yet, the unpredictable nature of blood donations and demand, combined with various disturbances to scheduled activities, poses a significant challenge for the effective management of this network at the operational level. To address these issues, this research proposes a reactive-proactive rescheduling model for Blood Supply Chain management, referred to as Blood-OPE. The model considers real-time data and anticipated disturbances to adjust the existing master plan. The main goal is to sustain the service level provided to demand nodes and the safety stock targets, while lowering deviations from planned activities (nervousness) and reducing waste. To demonstrate the model’s applicability, Blood-OPE is applied to the Portuguese Blood Supply Chain network, quantifying the trade-offs between various rescheduling flexibilities. The main findings indicate that rescheduling flexibility helps meet demand and safety stock targets while reducing waste, compared to relying solely on reactive measures. The sensitivity analysis emphasizes the importance of tailored strategies for different blood types and safety stock targets.
{"title":"Reactive-proactive rescheduling in blood supply chain management","authors":"Maria Meneses, Daniel Santos, Ana Barbosa-Póvoa","doi":"10.1016/j.cie.2025.111692","DOIUrl":"10.1016/j.cie.2025.111692","url":null,"abstract":"<div><div>The timely and efficient availability of blood products is essential in the Blood Supply Chain. Yet, the unpredictable nature of blood donations and demand, combined with various disturbances to scheduled activities, poses a significant challenge for the effective management of this network at the operational level. To address these issues, this research proposes a reactive-proactive rescheduling model for Blood Supply Chain management, referred to as Blood-OPE. This model considered real-time data and anticipated disturbances to adjust the existing master plan. The main goal is to sustain the service level provided to demand nodes and safety stock targets, while lowering deviations from planned activities (nervousness) and reducing waste. To demonstrate the model’s applicability, Blood-OPE is applied to the Portuguese Blood Supply Chain network, quantifying the trade-offs between various rescheduling flexibilities. The main findings indicate that having rescheduling flexibility helps meet demand and safety stock targets while reducing waste compared to relying solely on reactive measures. The sensitivity analysis emphasizes the importance of tailored strategies for different blood types and safety stock targets.</div></div>","PeriodicalId":55220,"journal":{"name":"Computers & Industrial Engineering","volume":"212 ","pages":"Article 111692"},"PeriodicalIF":6.5,"publicationDate":"2025-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145694153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
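The trade-off the Blood-OPE abstract describes can be illustrated as a weighted rescheduling objective over three terms: unmet demand (service level), deviation from the master plan (nervousness), and waste. A toy sketch under hypothetical weights and data, not the paper's actual formulation:

```python
# Illustrative score for a candidate reschedule against a master plan,
# balancing the three terms named in the abstract. All weights and
# quantities are hypothetical, not taken from the Blood-OPE model.

def reschedule_cost(planned, revised, demand, supplied,
                    w_service=10.0, w_nervous=1.0, w_waste=2.0):
    """Lower is better: penalize shortfalls, plan deviations, and surplus."""
    unmet = sum(max(d - s, 0) for d, s in zip(demand, supplied))
    nervousness = sum(abs(p - r) for p, r in zip(planned, revised))
    waste = sum(max(s - d, 0) for s, d in zip(supplied, demand))
    return w_service * unmet + w_nervous * nervousness + w_waste * waste

# One unit shaved off the first planned shipment, with demand still met:
# the only penalty incurred is a small nervousness term.
cost = reschedule_cost(planned=[10, 5, 8], revised=[9, 5, 8],
                       demand=[9, 5, 8], supplied=[9, 5, 8])
```

With the heavy service-level weight, a purely reactive plan that leaves demand unmet scores far worse than a small, deliberate deviation from the master plan, which mirrors the abstract's finding in miniature.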
Pub Date : 2025-11-29DOI: 10.1016/j.cie.2025.111721
Bayan Hamdan , Pingfeng Wang
Multi-Fidelity Networks (MFNets) are a promising approach for surrogate modeling, particularly in scenarios with limited data and heterogeneous models. They establish relationships between models using parameters rather than relying solely on inputs or outputs. The covariance matrix, which captures the interconnections between the parameters, typically follows a peer-structure assumption. When low-fidelity models exhibit dependencies, alternative architectures can better capture these relationships. This paper proposes a modified MFNets model that incorporates a hierarchical structure and presents a generalized formulation applicable to diverse applications. Two benchmark numerical problems are implemented to demonstrate the advantages of considering different underlying model architectures. The results showcase the improved predictive capability of MFNets in estimating high-fidelity functions by leveraging available low-fidelity data and the relationships among them.
{"title":"Exploring multi-fidelity networks and adapting their architecture: A paradigm for enhanced learning and efficiency","authors":"Bayan Hamdan , Pingfeng Wang","doi":"10.1016/j.cie.2025.111721","DOIUrl":"10.1016/j.cie.2025.111721","url":null,"abstract":"<div><div>Multi-Fidelity Networks (MFNets) are a promising approach for surrogate modeling, particularly in scenarios with limited data and heterogeneous models. They establish relationships between models using parameters rather than relying solely on inputs or outputs. The covariance matrix, which captures the interconnections between the parameters, typically follows a peer structure assumption. When low-fidelity models exhibit dependencies, alternative architectures can better capture these relationships. This paper proposes a modified MFNets model that incorporates a hierarchical structure and presents a generalized formulation applicable to diverse applications. Two benchmark numerical problems are implemented to demonstrate the advantages of considering different underlying model architectures. The results showcase improved predictive capabilities of MFNets when estimating high-fidelity functions and leveraging available low-fidelity data and their relationships with each other.</div></div>","PeriodicalId":55220,"journal":{"name":"Computers & Industrial Engineering","volume":"212 ","pages":"Article 111721"},"PeriodicalIF":6.5,"publicationDate":"2025-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145747495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
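The peer-versus-hierarchical distinction in the MFNets abstract can be made concrete with a toy linear-Gaussian network, where the parameter covariance follows directly from the chosen graph structure. This is an illustration of the structural idea only, with invented edge weights, not the paper's MFNets formulation:

```python
import numpy as np

# Toy linear-Gaussian network: theta = A @ theta + eps, so
# Cov(theta) = (I - A)^{-1} D (I - A)^{-T} with D = Cov(eps).
# Structures and the weight rho are illustrative assumptions.

def network_covariance(A, noise_var):
    """Parameter covariance implied by a (acyclic) linear network A."""
    n = A.shape[0]
    M = np.linalg.inv(np.eye(n) - A)
    return M @ np.diag(noise_var) @ M.T

rho = 0.8
# Peer structure: two low-fidelity parameters (nodes 0, 1) independently
# inform the high-fidelity parameter (node 2).
A_peer = np.array([[0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [rho, rho, 0.0]])
# Hierarchical structure: a chain 0 -> 1 -> 2, so low-fidelity
# parameters are themselves dependent.
A_hier = np.array([[0.0, 0.0, 0.0],
                   [rho, 0.0, 0.0],
                   [0.0, rho, 0.0]])
noise = np.ones(3)
C_peer = network_covariance(A_peer, noise)
C_hier = network_covariance(A_hier, noise)
```

Under the peer assumption the two low-fidelity parameters are uncorrelated (C_peer[0, 1] is zero), while the hierarchical structure induces the off-diagonal covariance between them that a peer-structured model cannot represent.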