Pub Date: 2026-01-01 | Epub Date: 2025-10-28 | DOI: 10.1016/j.simpat.2025.103218
Yuanchen Li , Lin Guan , Ziyang Zhang , George Vogiatzis
Vehicular Ad Hoc Networks (VANETs) are an important component of modern network systems, supporting applications such as real-time entertainment, traffic notifications, and emergency services. However, the highly dynamic, rapidly changing topology of VANETs poses serious challenges for conventional data retrieval mechanisms designed for Mobile Ad Hoc Networks (MANETs), resulting in degraded performance. To address this issue, we propose DPNVC, a novel density-based probability VANET caching framework built upon Named Data Networking (NDN). The framework dynamically calculates caching probabilities from local traffic density, enabling it to adapt to frequent topology changes. Additionally, the NDN communication model is applied to suppress redundant packet forwarding in VANET environments. Simulation results show that DPNVC significantly enhances Quality of Service (QoS) across urban, highway, and city scenarios. Compared to baseline methods, it reduces link load by up to 25%, decreases data retrieval time by up to 30%, and improves the local satisfaction ratio by up to 66%, while maintaining a competitive one-hop hit ratio.
Title: DPNVC: A novel density-based probability VANET caching framework built upon the NDN
Simulation Modelling Practice and Theory, vol. 146, Article 103218
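The abstract does not give DPNVC's exact density-to-probability mapping, so the following is an illustration only: a minimal Python sketch of one plausible density-based caching rule, where the exponential form and the `density_ref`, `p_min`, and `p_max` parameters are assumptions rather than the paper's.

```python
import math

def caching_probability(neighbor_count, density_ref=10.0, p_min=0.1, p_max=0.9):
    """Map locally observed vehicle density to a caching probability.

    Sparse neighbourhoods cache aggressively (few alternative copies exist);
    dense neighbourhoods cache conservatively (to limit redundant replicas).
    """
    # Exponential decay in density, clamped to [p_min, p_max].
    p = p_max * math.exp(-neighbor_count / density_ref)
    return max(p_min, min(p_max, p))
```

Any bounded, monotonically decreasing mapping would serve the same role; the point is that each node adapts its cache decision from locally observable density, with no global coordination.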
Pub Date: 2026-01-01 | Epub Date: 2025-11-02 | DOI: 10.1016/j.simpat.2025.103222
Arun Ananthanarayanan , S. Kanithan , Sathish Kumar Hari , Naeem Ahmed , Nadeem Pasha
Efficient beamforming requires accurate estimation of channel state information (CSI), an essential factor for high-data-rate, reliable communication in modern wireless networks. However, classical approaches tend to be inefficient in complex, fast-changing environments. This paper proposes a Deep Single-Carrier Orthogonal Frequency Division Multiplexing (Deep SC-OFDM) framework that incorporates an end-to-end Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) architecture for adaptive signal processing in 6G systems. The proposed model simultaneously performs modulation and equalization, overcoming the drawbacks of standard OFDM systems, such as high peak-to-average power ratio (PAPR) and poor interference tolerance, by leveraging CNNs' spatial and LSTMs' temporal feature extraction. Simulation results demonstrate that the detector minimizes signal degradation and increases symbol-detection accuracy, and that the Deep SC-OFDM framework achieves lower PAPR with improved bit error rate (BER) performance. Our proposed approach thus outperforms other deep-learning-based MIMO and beamforming methods in performance, convergence speed, and spectral efficiency. These findings suggest that the proposed approach is well suited for intelligent, energy-efficient transceiver architectures in future 6G networks.
Title: Enhancing 6G wireless performance through advanced MIMO techniques (Article 103222)
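The PAPR drawback mentioned in the abstract is easy to make concrete. Below is a small pure-Python sketch (not from the paper) that computes the PAPR of a symbol block via an inverse DFT; it also shows why a single active carrier has a flat envelope (0 dB PAPR), which is the property single-carrier schemes exploit.

```python
import cmath
import math

def idft(freq):
    """Inverse DFT of a block of frequency-domain symbols."""
    n = len(freq)
    return [sum(freq[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def papr_db(freq):
    """Peak-to-average power ratio (in dB) of the time-domain block."""
    powers = [abs(v) ** 2 for v in idft(freq)]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))
```

With 64 identical carriers the time-domain samples add coherently at t = 0, giving the worst-case PAPR of 10·log10(64) ≈ 18 dB; with one active carrier the envelope is constant and the PAPR is 0 dB.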
Pub Date: 2026-01-01 | Epub Date: 2025-11-20 | DOI: 10.1016/j.simpat.2025.103230
Raju Singh
Flying Ad Hoc Networks (FANETs) are increasingly useful in mission-critical environments that require secure, scalable, and resource-efficient communication. This paper proposes a Python-based simulation framework to analyse a Hybrid Trust–Cryptographic (HTC) protocol designed for unmanned aerial vehicle (UAV) swarm networks. The framework couples lightweight cryptographic primitives (Elliptic Curve Cryptography (ECC), AES-GCM, and ECDSA) with an adaptive trust management mechanism that quantifies UAV behaviour dynamically. The trust–key coupling strategy is feedback-driven: declining trust pre-emptively triggers key refresh or revocation to counter collusion and insider attacks. Parameter values are validated against available cryptographic profiling benchmarks on embedded hardware platforms to ensure realistic modelling of computational cost. The simulation environment uses a Gauss–Markov mobility model and a probabilistic attack model, and scales to 200 UAV nodes. The results show increased resilience and efficiency: almost 14% higher packet delivery ratio, 17% lower end-to-end latency, and 92% malicious-node detection accuracy, while keeping energy overhead below 15%. These results establish that adaptive trust evaluation coupled with lightweight cryptographic operations yields an effective trade-off between security assurance and system performance. With an emphasis on reproducibility, the proposed simulation framework can serve as a benchmark for future research into secure communication systems for large-scale UAV swarms.
Title: Simulation and evaluation of a hybrid trust–cryptographic protocol for UAV swarm communications (Article 103230)
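The paper's exact trust update is not given in the abstract; the following is a hypothetical sketch of the feedback-driven trust–key coupling it describes, with an assumed exponential smoothing factor `alpha` and the `refresh_below` / `revoke_below` thresholds chosen purely for illustration.

```python
def update_trust(trust, observation_ok, alpha=0.2):
    """Exponentially weighted trust update from a binary behaviour observation."""
    return (1 - alpha) * trust + alpha * (1.0 if observation_ok else 0.0)

def needs_key_action(trust, refresh_below=0.6, revoke_below=0.3):
    """Trust-key coupling: declining trust first forces a key refresh,
    and continued decline triggers revocation of the node's keys."""
    if trust < revoke_below:
        return "revoke"
    if trust < refresh_below:
        return "refresh"
    return "keep"
```

Starting from a trust of 0.9, three consecutive misbehaviour observations drop trust to about 0.46 (key refresh), and five drop it below 0.3 (revocation), illustrating the pre-emptive escalation the abstract describes.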
Pub Date: 2026-01-01 | Epub Date: 2025-11-13 | DOI: 10.1016/j.simpat.2025.103227
Jasmine Kaur , Inderveer Chana, Anju Bala
Serverless computing has revolutionized cloud platforms by enabling developers to deploy applications without the burden of managing infrastructure. However, challenges such as workload unpredictability, inefficient job scheduling, and high energy consumption remain critical concerns. To address these issues, this paper introduces LMP-Opt, a simulation-driven hybrid model that integrates Long Short-Term Memory (LSTM) for workload prediction, Multi-Agent Deep Q-Learning (MADQL) for job scheduling, and Proximal Policy Optimization (PPO) for fine-tuning scheduling decisions. First, LSTM predicts workload patterns by capturing temporal dependencies, enabling efficient resource provisioning and reducing performance degradation caused by unpredictable workloads. Second, MADQL uses multiple agents to optimize job scheduling by dynamically adjusting allocation strategies in response to workload fluctuations. Third, PPO refines scheduling policies by balancing exploration and exploitation, improving stability and convergence in decision-making. The proposed approach has been validated using ServerlessSimPro, a specialized simulation environment, and further tested on AWS Lambda to ensure applicability to real-world serverless platforms. Extensive experiments with an e-commerce transaction-processing workload demonstrate that LMP-Opt significantly improves system performance: average response time falls by 4.79% relative to MADQL and 6.09% relative to PPO, with energy savings of 4.35% and 6.14%, respectively. The model also improves cost efficiency, CPU utilization, and resource scalability by reducing node requirements. These results confirm the value of hybrid learning-based simulation models for optimizing scheduling and energy efficiency in serverless computing environments.
Title: LMP-Opt: A simulation-based hybrid model for dynamic job scheduling and energy optimization in serverless computing (Article 103227)
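As a toy stand-in for the MADQL component (the paper trains deep multi-agent Q-networks; a tabular update only illustrates the underlying learning rule), here is a one-step Q-learning update with hypothetical scheduling actions. The `ACTIONS` names and the state labels are illustrative, not LMP-Opt's.

```python
ACTIONS = ("scale_up", "hold", "scale_down")

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step: move Q(s, a) toward the bootstrapped
    target reward + gamma * max_a' Q(s', a')."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

Each scheduling agent would apply this update after observing the reward (e.g. a penalty on response time and energy) for its last allocation decision.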
Pub Date: 2026-01-01 | Epub Date: 2025-11-07 | DOI: 10.1016/j.simpat.2025.103223
Jinli Wei , Chunyue Cui , Xiaoxia Yang
The enclosed spaces and high-density population of subway stations significantly complicate evacuation during fires, increasing the difficulty of emergency response. To enhance fire rescue capabilities, this study develops a robust optimization model for firefighting routes that accounts for the costs of station facility layout, passenger flow distribution, smoke propagation patterns, and human-resource expenditure. Firstly, a BKA-GRU deep learning method is designed to calculate passenger passage time at critical nodes such as gates, improving the rationality of firefighting route design. Secondly, a firefighting value function based on the importance of fire nodes is constructed, making the firefighting routes more conducive to efficient and safe passenger evacuation. Thirdly, a box–polyhedron intersection uncertainty set is employed to model the uncertainties in firefighting travel time and firefighting time, enhancing the adaptability and robustness of the routes. Fourthly, the advanced Ivy algorithm combined with Gurobi is adopted to solve the robust optimization model, enabling rapid identification of efficient and stable firefighting routes in complex environments. Finally, both quantitative and qualitative analyses are used to comprehensively evaluate firefighting effectiveness. The results indicate that: (i) the BKA-GRU prediction model exhibits high accuracy and reliability in predicting node passage time; (ii) the robust optimization model for firefighting routes significantly reduces fire by-products, shortens passenger evacuation time, and mitigates congestion; and (iii) the firefighting route design achieves significant improvements in temperature control and visibility, effectively improving the fire environment and enhancing rescue efficiency and safety. This study provides an innovative solution for fire rescue in complex environments.
Title: Dynamic firefighting route planning for efficient evacuation in complex subway stations: A deep learning-enhanced robust optimization approach (Article 103223)
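The box-uncertainty idea has a simple robust counterpart: if every edge's travel time lies in an interval, the worst case prices each edge at its upper bound, after which an ordinary shortest-path search applies. The sketch below shows only this box part (the paper intersects the box with additional polyhedral constraints, which a plain Dijkstra cannot capture); the graph and intervals are invented for illustration.

```python
import heapq

def robust_shortest_path(edges, source, target):
    """Box-robust route: each edge time lies in [nominal, nominal + dev],
    so the robust counterpart runs Dijkstra on worst-case weights."""
    graph = {}
    for u, v, nominal, dev in edges:
        graph.setdefault(u, []).append((v, nominal + dev))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]
```

A route that looks fastest nominally (A to B directly, 1 min but up to +5) loses to a detour whose times are nearly certain, which is exactly the robustness trade-off the model formalizes.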
Pub Date: 2026-01-01 | Epub Date: 2025-11-07 | DOI: 10.1016/j.simpat.2025.103226
Waseem Abbass , Nasim Abbas , Uzma Majeed
The rapid growth of latency-sensitive Internet of Things (IoT) applications necessitates intelligent and scalable task offloading strategies in edge computing environments operating under dynamic workloads and limited energy resources. This paper introduces SimEdgeAI, a novel Deep Reinforcement Learning (DRL) framework that formulates task offloading as a stochastic decision-making problem over a multi-discrete action space, effectively capturing the trade-offs among local execution, edge offloading, and task dropping. The framework adopts an actor–critic architecture enhanced with a Gumbel–Softmax-based policy representation, enabling differentiable and stable learning over discrete actions. The actor network produces temperature-controlled stochastic policies, while the critic estimates long-term utilities based on system-wide features such as queue lengths, transmission delays, and energy states. A multi-objective reward function penalizing latency violations, excessive energy use, and fairness deviations guides the agent towards globally efficient and equitable offloading decisions. Extensive evaluations demonstrate that SimEdgeAI reduces average task latency by up to 35% and energy consumption by 25% compared to baseline methods including Deep Deterministic Policy Gradient (DDPG), Centralized DQN (C-DQN), and Greedy policies. It achieves over 91% deadline satisfaction and superior fairness measured by Jain’s index across edge clients. Ablation and sensitivity analyses confirm the contribution of each architectural component, while visualization studies underline the framework’s multi-objective consistency. These results highlight SimEdgeAI as an effective and fair solution for real-time, large-scale IoT–edge task offloading problems.
Title: SimEdgeAI: A deep reinforcement learning framework for simulating task offloading in resource-constrained IoT networks (Article 103226)
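The Gumbel-Softmax policy representation can be sketched in a few lines of pure Python. This only shows the sampling mechanics and the role of the temperature; the differentiability that makes it useful inside an actor-critic requires an autograd framework, and the logits here are arbitrary, not SimEdgeAI's.

```python
import math
import random

def gumbel_softmax(logits, temperature=1.0, rng=random):
    """Sample a relaxed categorical distribution: perturb logits with Gumbel
    noise, then apply a temperature-controlled softmax. Lower temperature
    pushes the output toward a one-hot sample."""
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + g) / temperature for l, g in zip(logits, gumbels)]
    m = max(z)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]
```

In a multi-discrete action space (local execution, offload target, drop), one such relaxed sample would be drawn per action dimension, keeping the whole policy differentiable during training.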
Pub Date: 2026-01-01 | Epub Date: 2025-10-28 | DOI: 10.1016/j.simpat.2025.103216
Isabelle M. van Schilt , Jan H. Kwakkel , Jelte P. Mense , Alexander Verbraeck
Data on supply chains is often sparse because actors are reluctant to share their data, making supply chain simulation modeling difficult. As a result, supply chain simulation models suffer from parametric and structural uncertainties, and a large variety of plausible simulation models would align with the sparse observations of the real-world supply chain. Constructing a diverse set of models that fit sparse data is not an easy task. A relatively unknown approach to generating this diverse set of plausible models is the Quality Diversity (QD) algorithm. This study evaluates the feasibility of using QD to generate a diverse ensemble of supply chain simulation models for varying degrees of data sparseness. The results show that QD is able to generate a diverse ensemble of supply chain models, including the ground truth. As expected, QD identifies the structure of the ground truth most frequently at low levels of data sparseness. As data sparseness increases, QD is prone to overfitting, identifying supply chain structures that are more complex than the ground truth. Further research should focus on reviewing the calibration metric for sparse data to reduce the overfitting of complex network structures.
Title: A simulation-based approach for reconstructing a diverse set of supply chain models with sparse data using a quality diversity algorithm (Article 103216)
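Quality Diversity is often implemented as MAP-Elites: keep the best solution per behaviour bin instead of a single global best, so the archive stays diverse rather than collapsing onto one optimum. Below is a minimal, generic sketch; the `evaluate`, `mutate`, and `bins` interfaces are illustrative and not the paper's supply-chain model encoding.

```python
import random

def map_elites(evaluate, mutate, init, bins, iters=300, rng=random):
    """Minimal MAP-Elites loop. `evaluate` returns (fitness, behaviour);
    `bins` maps a behaviour to a discrete archive cell. Each cell keeps
    only its best-so-far (elite) solution."""
    archive = {}  # cell -> (fitness, solution)
    for _ in range(iters):
        parent = (mutate(rng.choice(list(archive.values()))[1])
                  if archive else init())
        fitness, behaviour = evaluate(parent)
        cell = bins(behaviour)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, parent)
    return archive
```

For model reconstruction, fitness would be the fit to the sparse observations and the behaviour descriptor something like network size or complexity, so the archive spans structurally different but similarly well-fitting supply chain models.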
Pub Date: 2026-01-01 | Epub Date: 2025-11-13 | DOI: 10.1016/j.simpat.2025.103229
Shahed Almobydeen , Gaith Rjoub , Jamal Bentahar , Ahmad Irjoob , Muhammad Younas
In cloud computing systems, the proliferation of intelligent edge devices necessitates novel communication and coordination protocols that can operate under significant bandwidth and latency constraints. This necessity is driven not only by performance requirements but also by the growing imperative for sustainable computing, as inefficient communication is a primary driver of resource consumption in large-scale systems. This paper introduces the Hierarchical and Causal-Graph Network (HCGN), a framework designed for efficient, sustainable, and decentralized decision-making in large-scale edge computing environments. HCGN integrates a hierarchical control paradigm, mapping naturally to edge–fog architectures, with a Graph Neural Network (GNN) that learns a bandwidth-efficient communication policy between edge nodes. Furthermore, a novel Causal Credit Assignment Module (CCAM) enables intelligent and sustainable resource allocation by quantifying each node’s true causal contribution to system-wide objectives, ensuring that computational and communication resources are directed to the most effective parts of the network. We demonstrate through extensive simulations, including a novel edge-based collaborative video analytics task, that HCGN significantly outperforms traditional communication protocols in terms of task success rate, communication overhead, and robustness to network degradation. Our results validate HCGN as a scalable and resource-aware solution for building the next generation of sustainable, decentralized edge–fog systems.
Title: "HCGN: A Hierarchical Causal-Graph Network for sustainable communication and coordination in edge–fog systems". Authors: Shahed Almobydeen, Gaith Rjoub, Jamal Bentahar, Ahmad Irjoob, Muhammad Younas. DOI: 10.1016/j.simpat.2025.103229. Simulation Modelling Practice and Theory, vol. 146, Article 103229.
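The abstract describes the CCAM as quantifying each node's causal contribution to system-wide objectives. The paper's actual module is not specified here; the sketch below illustrates the general idea it names, causal credit via counterfactual ablation, using a toy objective function and hypothetical node names.

```python
# Illustrative sketch only: score each node by how much a system-wide
# objective drops when that node's contribution is removed (counterfactual
# ablation). The objective and node names are invented for this example.

def system_objective(contributions):
    # Toy system-wide objective with diminishing returns in total contribution.
    total = sum(contributions.values())
    return total / (1.0 + 0.1 * total)

def causal_credit(contributions):
    # Credit of a node = objective with the node - objective without it.
    base = system_objective(contributions)
    credits = {}
    for node in contributions:
        ablated = {n: c for n, c in contributions.items() if n != node}
        credits[node] = base - system_objective(ablated)
    return credits

contributions = {"edge_a": 5.0, "edge_b": 1.0, "fog_1": 3.0}
credits = causal_credit(contributions)
# Nodes contributing more to the objective receive proportionally more credit,
# so resources can be directed to the most effective parts of the network.
```

A real CCAM would have to estimate such counterfactuals from observed behavior rather than re-evaluating a known objective, but the credit-as-ablation structure is the same.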
Pub Date : 2026-01-01Epub Date: 2025-11-25DOI: 10.1016/j.simpat.2025.103232
Selsabil Ines Bouhidel, Nabil Belala
We introduce a dual-log process mining approach for jointly modeling and optimizing behaviors in Vehicular Ad Hoc Networks (VANETs) and urban road traffic. Simulation event logs from SUMO (traffic dynamics) and NS2 (network communications) are synchronized, preprocessed, and mined using Fuzzy Miner and Petri-net discovery in the ProM tool to produce interpretable process models. These models uncover critical anomalies, congestion hotspots, CO₂ emission peaks, and packet-delivery bottlenecks, and they drive a continuous feedback loop that adaptively tunes routing protocols and eco-driving strategies in real time. Experimental evaluation demonstrated the framework’s effectiveness in identifying recurring high-emission behaviors, communication bottlenecks, and incomplete packet flows across a large-scale VANET and traffic simulation dataset. The process models significantly improved behavioral interpretability and reduced the time required for manual analysis and anomaly tracing. Future work will extend this approach with predictive modules and online mining capabilities for enhanced adaptability in dynamic VANET environments.
Title: "Optimization of urban mobility processes through the integration of process mining". Simulation Modelling Practice and Theory, vol. 146, Article 103232.
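The core of the dual-log approach is synchronizing traffic events (SUMO) and network events (NS2) into one chronological trace per vehicle before mining. The sketch below shows that merge step in miniature; the event records and field names are illustrative, not the paper's actual log schema.

```python
# Hypothetical sketch of the dual-log synchronization step: events from a
# traffic log and a network log are grouped per vehicle and ordered by
# timestamp, yielding one trace per vehicle suitable for process-mining
# tools such as ProM. Field names and activities are invented examples.

from collections import defaultdict

sumo_events = [
    {"vehicle": "v1", "t": 0.0, "activity": "enter_edge"},
    {"vehicle": "v1", "t": 4.2, "activity": "stop_congestion"},
]
ns2_events = [
    {"vehicle": "v1", "t": 1.5, "activity": "send_packet"},
    {"vehicle": "v1", "t": 3.0, "activity": "recv_packet"},
]

def merge_logs(*logs):
    # One trace per vehicle, merged across all input logs.
    traces = defaultdict(list)
    for log in logs:
        for ev in log:
            traces[ev["vehicle"]].append((ev["t"], ev["activity"]))
    for trace in traces.values():
        trace.sort()  # chronological order within each vehicle trace
    return dict(traces)

traces = merge_logs(sumo_events, ns2_events)
# traces["v1"] now interleaves traffic and network activities by timestamp.
```

In practice the two simulators use different clocks and identifiers, so the real preprocessing must also align time bases and map vehicle IDs between logs; that alignment is omitted here.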
Pub Date : 2026-01-01Epub Date: 2025-11-07DOI: 10.1016/j.simpat.2025.103224
Jiajun Feng, Panpan Guo, Penghui Xue, Siyao Liu, Gan Wang, Yixian Wang
This study investigates ground-surface settlement and tunnel deformation induced by the construction of a TBM-driven tunnel that obliquely undercrosses in-service high-speed railway tunnels. An analytical solution for predicting surface settlement is proposed by introducing correction coefficients for the undercrossing angle and high-speed train load into the classical Peck formula. We validate the model’s applicability to oblique undercrossing with numerical simulations and field measurements. Building on these insights, we conduct three-dimensional finite-element (FE) modelling to quantify the effects of undercrossing angle (50°, 78°, 90°), tunnel clear distance (17.3, 13.3, 9.3 m), and excavation staging (10, 50, 100 steps) on surface settlement. The influence mechanism of train load on the deformation of the railway tunnel is analyzed. The results show that the proposed analytical solution improves surface-settlement prediction, keeping the error within 15 %. Specifically, larger undercrossing angles narrow the settlement trough and reduce the maximum settlement. Decreasing the clear distance from 17.3 to 9.3 m increases surface settlement by 65.96 %. Under train loading, surface settlement increases progressively with the number of TBM excavation steps. Train loading markedly amplifies overall tunnel deformation, increasing longitudinal deformation by 150 % and intensifying non-uniformity. The integrated analytical–numerical framework provides a practical basis for safety assessment and for optimising protective measures in similar undercrossing projects.
Title: "Ground surface settlements and deformation behavior of in-service high-speed railway tunnel induced by obliquely undercrossed TBM tunnelling". Simulation Modelling Practice and Theory, vol. 146, Article 103224.
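The abstract modifies the classical Peck formula with correction coefficients for the undercrossing angle and train load. The classical Peck profile is standard (a Gaussian settlement trough); the paper's specific coefficient expressions are not given in the abstract, so the sketch below applies them as placeholder multiplicative factors k_angle and k_load.

```python
import math

# Peck-type transverse settlement profile with placeholder correction
# factors. The classical formula S(x) = S_max * exp(-x^2 / (2 i^2)) with
# S_max = V_s / (sqrt(2*pi) * i) is standard; k_angle and k_load stand in
# for the paper's (unspecified) angle and train-load coefficients.

def peck_settlement(x, volume_loss, trough_width_i, k_angle=1.0, k_load=1.0):
    """Surface settlement at transverse offset x (m) from the tunnel axis."""
    s_max = volume_loss / (math.sqrt(2.0 * math.pi) * trough_width_i)
    return k_angle * k_load * s_max * math.exp(-x**2 / (2.0 * trough_width_i**2))

# Maximum settlement occurs above the tunnel axis (x = 0) and decays with
# offset; a wider trough width i spreads the same volume loss more thinly.
s0 = peck_settlement(0.0, volume_loss=0.5, trough_width_i=10.0)
s10 = peck_settlement(10.0, volume_loss=0.5, trough_width_i=10.0)
```

Under this form, the abstract's finding that larger undercrossing angles narrow the trough and reduce maximum settlement would correspond to the angle coefficient reducing the effective trough width and peak together.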