Pub Date: 2025-06-30 DOI: 10.1016/j.simpat.2025.103174
Wujiu Pan, Yuanbin Chen, Xi Li, Junyi Wang, Jianwen Bao
In this paper, a bearing compound fault diagnosis model for actual variable working conditions, which combines fragmented data with a multi-head attention mechanism, is proposed to improve the accurate recognition of compound fault signals. The overall model architecture combines the advantages of convolutional layers and multi-head attention layers, enabling the model to better handle the fragmented compound fault signals that arise under the multiple operating conditions of engineering practice. In addition, application strategies for different working conditions are discussed to ensure that the model remains robust in real environments. A series of experiments demonstrates the model's strong diagnostic performance under different working conditions and noise environments. Compared with other existing models, the proposed model not only improves fault diagnosis accuracy but also shows excellent adaptability and stability in industrial field settings. This research provides both a new perspective and methodology for fault diagnosis and a technical basis for industrial intelligence and digital transformation, with broad application prospects.
{"title":"Bearing compound fault diagnosis considering the fusion fragment data and multi-head attention mechanism considering the actual variable working conditions","authors":"Wujiu Pan , Yuanbin Chen , Xi Li , Junyi Wang , Jianwen Bao","doi":"10.1016/j.simpat.2025.103174","DOIUrl":"10.1016/j.simpat.2025.103174","url":null,"abstract":"<div><div>In this paper, a bearing compound fault diagnosis model considering the actual variable working conditions, which combines segment data and multi head attention mechanism, is proposed to improve the accurate recognition ability of compound fault signals. The design of the overall model architecture, which combines the advantages of the convolution layer and the multi-head attention layer, enables the model to better handle fragmented compound fault signals under multiple conditions in engineering practice. In addition, the application strategies under different working conditions are also discussed to ensure that the model has good robustness in the real environment. Through a series of experiments, the excellent diagnostic performance of the proposed model under different working conditions and noise environment is demonstrated. Compared with other existing models, the results showed that the proposed model not only improves the accuracy of fault diagnosis but also demonstrated excellent industrial field adaptability and stability. This research not only provides a new perspective and methodology for the field of fault diagnosis, but also provides a technical basis for industrial intelligence and digital transformation, which has a broad application prospect and value.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103174"},"PeriodicalIF":3.5,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144571583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-25 DOI: 10.1016/j.simpat.2025.103169
Seyyed Meysam Rozehkhani, Farnaz Mahan
Virtual Machine Migration (VMM) is a critical component in cloud computing environments, enabling dynamic resource management and system optimization. However, existing approaches often face challenges such as increased downtime, excessive resource consumption, and complex decision-making processes in heterogeneous environments. This paper presents a novel framework based on Granular Computing (GrC) principles to address these challenges through systematic VM categorization and prioritization. The proposed framework employs a three-stage approach: (1) feature extraction and granule formation, converting VM attributes such as workload, downtime sensitivity, and resource utilization into meaningful information granules; (2) granule-based decision rule generation using formal GrC methodologies; and (3) priority-based classification using weighted membership functions. Experimental evaluations conducted using CloudSim 5.0 demonstrate the framework’s effectiveness across multiple performance dimensions. The results show 92.1% classification accuracy, 83.7% resource utilization, and a reduced migration downtime of 1.9 s. The framework exhibits linear computational complexity O(N), confirming its scalability for large-scale deployments. Additionally, performance analysis under various workload patterns (resource-intensive, service-oriented, and mixed) validates the framework’s robustness and adaptability. These results suggest that the proposed GrC-based approach offers a promising solution to optimize VM migration in cloud environments while maintaining operational efficiency and service quality.
{"title":"GrC-VMM: An intelligent framework for virtual machine migration optimization using granular computing","authors":"Seyyed Meysam Rozehkhani, Farnaz Mahan","doi":"10.1016/j.simpat.2025.103169","DOIUrl":"10.1016/j.simpat.2025.103169","url":null,"abstract":"<div><div>Virtual Machine Migration (VMM) is a critical component in cloud computing environments, enabling dynamic resource management and system optimization. However, existing approaches often face challenges such as increased downtime, excessive resource consumption, and complex decision-making processes in heterogeneous environments. This paper presents a novel framework based on Granular Computing (GrC) principles to address these challenges through systematic VM categorization and prioritization. The proposed framework employs a three-stage approach: (1) feature extraction and granule formation, converting VM attributes such as workload, downtime sensitivity, and resource utilization into meaningful information granules; (2) granule-based decision rule generation using formal GrC methodologies; and (3) priority-based classification using weighted membership functions. Experimental evaluations conducted using CloudSim 5.0 demonstrate the framework’s effectiveness across multiple performance dimensions. The results show 92. 1% classification accuracy, 83. 7% resource utilization and reduced migration downtime of 1.9 s. The framework exhibits linear computational complexity O(N), confirming its scalability for large-scale deployments. Additionally, performance analysis under various workload patterns (resource-intensive, service-oriented, and mixed) validates the framework’s robustness and adaptability. These results suggest that the proposed GrC-based approach offers a promising solution to optimize VM migration in cloud environments while maintaining operational efficiency and service quality.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103169"},"PeriodicalIF":3.5,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144517220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-24 DOI: 10.1016/j.simpat.2025.103149
Haijing Ning, Herong Zhu, Yisheng An, Naiqi Wu, Yupeng Cao, Xiangmo Zhao
The autonomous emergency braking (AEB) system constitutes a critical safety function within advanced driver assistance systems (ADAS). Verifying its functionality is essential to ensure its operational correctness and reliability. Currently, AEB systems developed by different vendors employ diverse algorithms and lack a unified simulation, verification, and fault-detection framework. To bridge these gaps, this paper proposes a comprehensive modeling and functional verification framework for AEB systems. First, we establish a basic model using extended colored hybrid Petri nets (ECHPN). Next, we enhance this model by incorporating fault observation points to form an FD-ECHPN, thereby enabling fault detection and localization. Furthermore, this paper develops a universal simulation and testing approach to verify the functionality of AEB systems from various vendors by transforming the FD-ECHPN model into a Simulink/Stateflow model. The simulation results demonstrate that the proposed method can accurately assess the functionality of an AEB system and effectively identify and localize faults during model execution. Finally, we examine the state evolution and formal properties of the FD-ECHPN model to verify its correctness.
{"title":"Modeling and functional verification of autonomous emergency braking systems based on extended colored hybrid petri nets","authors":"Haijing Ning , Herong Zhu , Yisheng An , Naiqi Wu , Yupeng Cao , Xiangmo Zhao","doi":"10.1016/j.simpat.2025.103149","DOIUrl":"10.1016/j.simpat.2025.103149","url":null,"abstract":"<div><div>The autonomous emergency braking (AEB) system constitutes a critical safety function within advanced driver assistance systems (ADAS). Verifying its functionality is essential to ensure its operational correctness and reliability. Currently, AEB systems developed by different vendors employ diverse algorithms and lack a unified simulation, verification, and fault-detection framework. To bridge these gaps, this paper proposes a comprehensive modeling and functional verification framework for AEB systems. First, we establish a basic model using extended colored hybrid Petri nets (ECHPN). Next, we enhance this model by incorporating fault observation points to form an FD-ECHPN, thereby enabling fault detection and localization. Furthermore, this paper develops a universal simulation and testing approach to verify the functionality of AEB systems from various vendors by transforming the FD-ECHPN model into a Simulink/Stateflow model. The simulation results demonstrate that the proposed method can accurately assess the functionality of an AEB system and effectively identify and localize faults during model execution. Finally, we examine the state evolution and formal properties of the FD-ECHPN model to verify its correctness.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103149"},"PeriodicalIF":3.5,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144517221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-23 DOI: 10.1016/j.simpat.2025.103173
Ningwei Xia, Changjiang Zhou, Shengwen Hou, Fa Zhang
Heavy-duty gears are extensively utilized in high-power equipment such as helicopters, ships, and commercial vehicles, where friction often causes significant power losses. Accurate friction prediction is essential for designing energy-efficient transmission systems. This study proposes a data-driven model to predict the friction coefficient and applies it to estimate the meshing efficiency of heavy-duty gears. By training on friction test data under various lubrication conditions, an extreme gradient boosting (XGBoost) model is developed to predict the friction coefficient, with hyperparameters optimized through grid search and cross-validation. The model’s decision mechanism is interpreted using Shapley additive explanations, highlighting the influence of speed, load, surface roughness, and lubricant viscosity on the friction coefficient. When applied to predict meshing efficiency, the model is experimentally validated, achieving a maximum prediction error of 0.211 % and an average error of 0.108 %. The effects of major operating and geometrical parameters are analyzed, showing that meshing efficiency increases with higher speeds, torque, pressure angles, tip relief length, and lower addendum coefficients. The results indicate that proper parameter optimization and the use of high-viscosity lubricants can enhance the energy efficiency of heavy-duty gears.
Title: "A data-driven friction coefficient model and its application in meshing efficiency prediction of heavy-duty gears" (Simulation Modelling Practice and Theory, vol. 144, Article 103173)
Pub Date: 2025-06-23 DOI: 10.1016/j.simpat.2025.103167
Housseyn Chebika, Guoqiang Shen, Haoying Han, Mahmoud Mabrouk, Brahim Nouibat
Effective path planning in flooding emergency rescue scenarios is essential for ensuring timely evacuation while minimizing safety risks. Conventional path-planning algorithms often prioritize the shortest or most cost-efficient routes, potentially neglecting safety considerations. To address this limitation, this study introduces an improved path-planning method using a behavior-based A-star (A*) algorithm designed for evacuation scenarios. A cellular automata (CA) environment is applied to address common challenges associated with traditional A* algorithms, including path inefficiencies, longer distances, and difficulties in handling dynamic flood environments. The key innovation of this study is the optimization of a heuristic function by integrating depth sensitivity perception (DSP), which directly influences evacuation behavior by prioritizing safer paths based on real-time water depth assessments during path selection. Experimental results across diverse flood scenarios demonstrate that the optimized A* algorithm significantly outperforms traditional A-star and Dijkstra’s algorithms, achieving reductions in explored nodes by 90.06 % and 93.13 %, lowering safety risks, and shortening computational times by 87.65 % and 88.06 %, respectively. These findings validate the efficacy of the depth-sensitive heuristic in enhancing evacuation pathfinding within complex flood environments.
Title: "Simulating optimal flood evacuation using heuristic algorithms and path-choice behaviors" (Simulation Modelling Practice and Theory, vol. 144, Article 103167)
Pub Date: 2025-06-21 DOI: 10.1016/j.simpat.2025.103165
Ioannis Kleitsiotis, George Tsirogiannis, Spiridon Likothanassis
Procedural modelling programs can be used to generate 3D scenes of infinite variety, alleviating the need for manual repetitive tasks in 3D modelling. We utilize a probabilistic programming interpretation of controlled procedural modelling programs and address the issue of prior misspecification, which can hinder the accurate representation of 3D models. We are interested in cases where prior knowledge is available as probabilistic tail bounds on global, high-level features of the 3D scene. In general, specifying prior parameters that satisfy such high-level prior knowledge requires a parameter space search. However, programs with a large number of random variables, 3D scenes described by multiple procedural modelling programs, and the need for repeated prior predictive checks can all necessitate a prolonged prior parameter search. We reduce the time complexity of the prior parameter search, and thus improve the process of modelling 3D scenes, by replacing the computationally expensive evaluation of tail-bound constraints with the lower bounds provided by Selberg’s inequality. We present the theoretical underpinnings of our method and a detailed feasibility problem formulation that can be solved numerically. We compare our method to related approaches in the literature and, finally, demonstrate its application in the procedural generation of 3D scenes in the agricultural domain.
{"title":"Efficient prior specification in procedural 3D modelling","authors":"Ioannis Kleitsiotis , George Tsirogiannis , Spiridon Likothanassis","doi":"10.1016/j.simpat.2025.103165","DOIUrl":"10.1016/j.simpat.2025.103165","url":null,"abstract":"<div><div>Procedural modelling programs can be used to generate 3D scenes of infinite variety, alleviating the need for manual repetitive tasks in 3D modelling. We utilize a probabilistic programming interpretation of controlled procedural modelling programs, and address the issue of prior misspecification, which can hinder the accurate representation of 3D models. We are interested in cases where prior knowledge is available as probabilistic tail bounds on global, high-level features of the 3D scene. In general, specifying the prior parameters satisfying the aforementioned high-level prior knowledge requires a parameter space search. However, programs with a large number of random variables, 3D scenes described by multiple procedural modelling programs and the need for repeated prior predictive checks might necessitate a prolonged prior parameter search. We reduce the time complexity of prior parameter search, and thus improve the process of modelling 3D scenes, by replacing computationally expensive computations of tail bounds constraints with the lower bounds provided by Selberg’s inequality. We present the theoretical underpinnings of our method and a detailed feasibility problem formulation that can be solved numerically. We compare our method to related approaches in the literature, and finally, we demonstrate its application in the procedural generation of 3D scenes in the agricultural domain.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103165"},"PeriodicalIF":3.5,"publicationDate":"2025-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144472339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-20 DOI: 10.1016/j.simpat.2025.103172
Eden Teshome Hunde, Shereen Ismail
Wireless Sensor and Actuator Networks (WSANs) consist of numerous embedded devices that collaborate to perform complex tasks, surpassing the capabilities of traditional wired networks. This collaboration is efficiently enabled through multicast protocols. While multicast protocols offer significant advantages for WSANs, many fail to meet certain performance requirements. To address these challenges, we propose the Modified Bidirectional Multicast RPL Forwarding (MBMRF) protocol.
This study tackles limitations in existing Internet Protocol version 6 (IPv6) multicast protocols, including the Routing Protocol for Low Power and Lossy Networks (RPL) and Bidirectional Multicast RPL Forwarding (BMRF). The proposed MBMRF protocol introduces a novel mixed upward and downward multicast packet forwarding mechanism optimized for multi-channel Time Slotted Channel Hopping (TSCH) networks. Furthermore, to ensure sufficient timeslot allocation for scheduling mixed up-and-down packet transmissions, the protocol incorporates a modified version of the Orchestra scheduling algorithm.
The proposed MBMRF protocol was implemented and simulated on Zolertia (Z1) motes using the Contiki operating system and evaluated against existing IPv6 multicast protocols, including Stateless Multicast RPL Forwarding (SMRF), Enhanced Stateless Multicast RPL Forwarding (ESMRF), and BMRF. Results show that MBMRF significantly reduces buffer overflow and network-wide energy consumption compared to SMRF, ESMRF, and BMRF, with only a marginal increase in memory usage.
{"title":"MBMRF: A modified bidirectional IPv6 multicast protocol with mixed upward and downward forwarding for TSCH-enabled WSANs","authors":"Eden Teshome Hunde , Shereen Ismail","doi":"10.1016/j.simpat.2025.103172","DOIUrl":"10.1016/j.simpat.2025.103172","url":null,"abstract":"<div><div>Wireless Sensor and Actuator Networks (WSANs) consist of numerous embedded devices that collaborate to perform complex tasks, surpassing the capabilities of traditional wired networks. This collaboration is efficiently enabled through multicast protocols. While multicast protocols offer significant advantages for WSANs, many fail to meet certain performance requirements. To address these challenges, we propose the Modified Bidirectional Multicast RPL Forwarding (MBMRF) protocol.</div><div>This study tackles limitations in existing Internet Protocol version 6 (IPv6) multicast protocols, including the Routing Protocol for Low Power and Lossy Networks (RPL) and Bidirectional Multicast RPL Forwarding (BMRF). The proposed MBMRF protocol introduces a novel mixed upward and downward multicast packet forwarding mechanism optimized for multi-channel Time Slotted Channel Hopping (TSCH) networks. Furthermore, to ensure sufficient timeslot allocation for scheduling mixed up-and-down packet transmissions, the protocol incorporates a modified version of the Orchestra scheduling algorithm.</div><div>The proposed MBMRF protocol was implemented and simulated on Zolertia (Z1) motes using the Contiki operating system and evaluated against existing IPv6 multicast protocols, including Stateless Multicast RPL Forwarding (SMRF), Enhanced Stateless Multicast RPL Forwarding (ESMRF), and BMRF. Results show that MBMRF significantly reduces buffer overflow and network-wide energy consumption compared to SMRF, ESMRF, and BMRF, with only a marginal increase in memory usage.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103172"},"PeriodicalIF":3.5,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144482452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-19 DOI: 10.1016/j.simpat.2025.103162
Gabriel Carvalho, Sandra Lagén
Multiple-Input Multiple-Output (MIMO) is crucial for enhancing spectral efficiency, channel capacity, coverage, and robustness. However, it requires significant computations to determine a precoding matrix for transmitted data streams. In closed-loop MIMO, as adopted in 3GPP 5G NR, these computations occur on the user side. To avoid transmitting large matrices, 3GPP defined codebooks with pre-defined precoding matrices indexed by the Precoding Matrix Indicator (PMI). The User Equipment (UE) selects a PMI and a Rank Indicator (RI) to report to the next generation NodeB (gNB) as part of the Channel State Information (CSI) feedback. PMI/RI selection can be done via exhaustive search or more efficient techniques, which are crucial for real UE implementations due to their impact on computational complexity and energy consumption. This paper analyzes various PMI/RI selection techniques using the open-source ns-3 5G-LENA simulator. We have implemented state-of-the-art techniques in the system-level simulator and carried out extensive simulation campaigns. We also propose new PMI/RI selection methods focusing on performance versus computational complexity trade-offs. Our proposed techniques show a superior simulation speedup (3.71x to 1.119x) with minimal throughput degradation (3% to 3.3%) compared to exhaustive search, depending on sub-band downsampling settings. Other state-of-the-art techniques implemented exhibit greater throughput losses (up to 8.3%) for a lower speedup (up to 3.54x), or similar losses with smaller speedups and potential slowdowns.
{"title":"Analysis and optimizations of PMI and rank selection algorithms for 5G NR","authors":"Gabriel Carvalho, Sandra Lagén","doi":"10.1016/j.simpat.2025.103162","DOIUrl":"10.1016/j.simpat.2025.103162","url":null,"abstract":"<div><div>Multiple-Input Multiple-Output (MIMO) is crucial for enhancing spectral efficiency, channel capacity, coverage, and robustness. However, it requires significant computations to determine a precoding matrix for transmitted data streams. In closed-loop MIMO, as adopted in 3GPP 5G NR, these computations occur on the user side. To avoid transmitting large matrices, 3GPP defined codebooks with pre-defined precoding matrices indexed by the Precoding Matrix Indicator (PMI). The User Equipment (UE) selects a PMI and a Rank Indicator (RI) to report to the Next Generation Node Base (gNB) as part of the Channel State Information (CSI) feedback. PMI/RI selection can be done via exhaustive search or more efficient techniques, which are crucial for real UE implementations due to their impact on computational complexity and energy consumption. This paper analyzes various PMI/RI selection techniques using the open-source ns-3 5G-LENA simulator. We have implemented state-of-the-art techniques in the system-level simulator and carried out extensive simulation campaigns. Also, we propose new PMI/RI selection methods by focusing on performance versus computational complexity trade-offs. Our proposed techniques show a superior simulation speedup (3.71x to 1.119x) with minimal throughput degradation (3% to 3.3%) compared to exhaustive search, depending on sub-band downsampling settings. Other state-of-the-art techniques implemented exhibit greater throughput losses (up to 8.3%) for a lower speedup (up to 3.54x) or similar losses with smaller speedups and potential slowdowns.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103162"},"PeriodicalIF":3.5,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-17 DOI: 10.1016/j.simpat.2025.103171
Maryam Shamsoddini, Ali Ghaffari, Masoud Kargar, Nahideh Derakhshanfard
Software-Defined Networking (SDN) is a novel network architecture that separates the control plane from the data plane, enabling centralized and programmable management of network resources. One of the key challenges in SDN is determining the optimal number and locations of controllers, called the Controller Placement Problem (CPP), to ensure balanced load distribution, minimal latency, and high network reliability. This paper introduces a novel three-phase approach called Reliable Controller Placement using Fuzzy Logic and Metaheuristic Algorithms (RCPFH), which efficiently optimizes controller placement in SDN environments. In the first phase, the approach employs a fuzzy logic system guided by Levy Flight parameters to estimate the optimal number of controllers by evaluating critical factors such as energy consumption, congestion levels, and load variance across the network. The second phase utilizes a Modified Walrus Optimization Algorithm to identify the most suitable controller positions, considering path reliability, processing capacity, and propagation delay. Finally, in the third phase, backup controllers are selected to enhance system reliability in the event of controller failure. The proposed RCPFH framework is evaluated using four real-world network topologies from the ZOO Topology dataset. Comparative experiments with state-of-the-art approaches demonstrate significant performance improvements: up to a 38 % reduction in energy consumption, an 11 % decrease in load variance, a 36 % increase in network availability, a 17 % reduction in average latency, and a 15 % decrease in link failure rate. These results validate the effectiveness of RCPFH in optimizing SDN performance while maintaining robustness and operational efficiency.
{"title":"RCPFH: Reliable controller placement in software-defined networks using fuzzy systems and a modified walrus optimization algorithm","authors":"Maryam Shamsoddini, Ali Ghaffari, Masoud Kargar, Nahideh Derakhshanfard","doi":"10.1016/j.simpat.2025.103171","DOIUrl":"10.1016/j.simpat.2025.103171","url":null,"abstract":"<div><div>Software-Defined Networking (SDN) is a novel network architecture that separates the control plane from the data plane, enabling centralized and programmable management of network resources. One of the key challenges in SDN is determining the optimal number and locations of controllers, called the Controller Placement Problem (CPP), to ensure balanced load distribution, minimal latency, and high network reliability. This paper introduces a novel three-phase approach called Reliable Controller Placement using Fuzzy Logic and Metaheuristic Algorithms (RCPFH), which efficiently optimizes controller placement in SDN environments. In the first phase, the approach employs a fuzzy logic system guided by Levy Flight parameters to estimate the optimal number of controllers by evaluating critical factors such as energy consumption, congestion levels, and load variance across the network. The second phase utilizes a Modified Walrus Optimization Algorithm to identify the most suitable controller positions, considering path reliability, processing capacity, and propagation delay. Finally, in the third phase, backup controllers are selected to enhance system reliability in the event of controller failure. The proposed RCPFH framework is evaluated using four real-world network topologies from the ZOO Topology dataset. Comparative experiments with state-of-the-art approaches demonstrate significant performance improvements: up to a 38 % reduction in energy consumption, an 11 % decrease in load variance, a 36 % increase in network availability, a 17 % reduction in average latency, and a 15 % decrease in link failure rate. These results validate the effectiveness of RCPFH in optimizing SDN performance while maintaining robustness and operational efficiency.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103171"},"PeriodicalIF":3.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144472338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-17 DOI: 10.1016/j.simpat.2025.103163
Hemant Kumar Apat, Bibhudatta Sahoo
In recent years, Internet of Things (IoT) applications have grown rapidly, driven by the widespread adoption of IoT devices across various sectors. However, these devices are typically resource-constrained in computing power and storage capacity. As a result, they often offload generated data and tasks to nearby edge devices or the fog computing layer for further processing and execution. The fog computing layer sits in close vicinity to the IoT devices and comprises a set of heterogeneous fog nodes that supplement the capacities of resource-constrained IoT devices. These fog nodes face computational challenges from computation-intensive workloads such as image processing applications, which comprise various machine learning and artificial intelligence tasks. In such a scenario, finding an effective task placement for dynamic and heterogeneous applications is computationally hard. In this work, we formulate the IoT application workflow placement problem as a multi-objective optimization problem, expressed as an Integer Linear Programming (ILP) model, with the objective of minimizing makespan, execution cost, and energy consumption. A hybrid metaheuristic named JaGW is proposed that combines the strengths of the Jaya algorithm (JA) and Grey Wolf Optimization (GWO) to derive a sub-optimal solution. The proposed JaGW is compared with conventional GWO and other state-of-the-art algorithms such as Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) on the Montage scientific workflow dataset. The simulation results demonstrate that the proposed algorithm achieves an average reduction in energy consumption of 24.84% compared to Jaya, 14.67% compared to ACO, 14.65% compared to PSO, and 8.78% compared to GWO, demonstrating its superior performance over the other metaheuristics.
{"title":"JaGW: A hybrid meta-heuristic algorithm for IoT workflow placement in fog computing environment","authors":"Hemant Kumar Apat , Bibhudatta Sahoo","doi":"10.1016/j.simpat.2025.103163","DOIUrl":"10.1016/j.simpat.2025.103163","url":null,"abstract":"<div><div>In recent years, applications of the Internet of Things (IoT) have experienced rapid growth, driven by the widespread adoption of IoT devices in various sectors. However, these devices are typically resource-constrained in terms of computing power and storage capacity. As a result, they often offload the generated data and tasks to nearby edge devices or fog computing layers for further processing and execution. The fog computing layer is located in close vicinity of the IoT devices and comprises a set of heterogeneous fog computing nodes to supplement the capacities of resource-constrained IoT devices. The fog computing nodes often pose computational challenges for various computation-intensive tasks such as image processing application, comprises various machine learning and artificial intelligence enabled tasks. In such a scenario, finding the effective task placement for dynamic and heterogeneous applications is computationally hard. In this work, we formulate the IoT application workflow placement problem as a multi-objective optimization problem formulated as Integer Linear Programming (ILP) model with the objective of minimizing the makespan, cost of execution, and energy consumption. A hybrid metaheuristic approach is proposed that combines the strengths of the Jaya algorithm (JA) and Grey Wolf Optimization (GWO) named as JaGW to derive a sub-optimal solution. The proposed JaGW is compared with conventional GWO and other state of the art algorithms such as Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) using the Montage scientific workflow dataset. The simulation results demonstrate that the proposed algorithm achieves an average reduction in energy consumption of 24.84% compared to JAYA, 14.67% compared to ACO, 14.65% compared to PSO, and 8.78% compared to GWO, thereby exemplifying its superior performance over other metaheuristic algorithms.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"144 ","pages":"Article 103163"},"PeriodicalIF":3.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}