Pub Date: 2025-11-22 | DOI: 10.1016/j.simpat.2025.103231
Helen D. Karatza
Cooperating cloud-fog-mist computing frameworks have been methodically designed to balance computational efficiency and data privacy during the execution of complex applications with diverse security demands. To guarantee the proper execution of these applications, the implementation of security-aware scheduling strategies is crucial. This paper explores security-aware scheduling policies, with a focus on developing algorithms tailored for heterogeneous workloads, including both simple single-task jobs and Bags of Linear Workflows (BoLWs) with varying priority levels. Multi-criteria scheduling algorithms are utilized to handle tasks by priority in the three layers. These algorithms are evaluated under different conditions, including varying system utilization, security requirements, and task service demands. Building on the epoch policy discussed in prior research, which considers job security levels, we propose an enhanced epoch-based approach that also accounts for the number of virtual machines allocated to each BoLW job alongside its security requirements. Simulation results demonstrate the superior performance of this novel epoch strategy compared to the previously established approach.
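The enhanced epoch policy is described only at a high level above. As a hedged sketch of the underlying idea (the job fields and the weights `w_sec` and `w_vm` are invented for illustration, not taken from the paper), a multi-criteria dispatch order that promotes high-security BoLW jobs holding many VMs could look like:

```python
def epoch_score(job, w_sec=0.5, w_vm=0.1):
    # Lower score = dispatched earlier. Base priority (1 = highest) is
    # promoted further by the job's security level and, as in the enhanced
    # epoch policy, by the number of VMs a BoLW job holds.
    return job["priority"] - w_sec * job["security"] - w_vm * job["vms"]

def schedule(jobs):
    # Dispatch order within an epoch: sort by the multi-criteria key.
    return [j["name"] for j in sorted(jobs, key=epoch_score)]

jobs = [
    {"name": "A", "priority": 2, "security": 3, "vms": 4},  # high-security BoLW
    {"name": "B", "priority": 1, "security": 1, "vms": 1},
    {"name": "C", "priority": 2, "security": 1, "vms": 1},
]
order = schedule(jobs)  # → ["A", "B", "C"]
```

Here the high-security job A holding four VMs jumps ahead of the same-priority job C, which is the effect the enhanced epoch strategy aims for.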
Title: Scheduling mixed workloads with security requirements in a cloud-fog-mist computing environment. Simulation Modelling Practice and Theory, vol. 146, Article 103231.
Pub Date: 2025-11-20 | DOI: 10.1016/j.simpat.2025.103230
Raju Singh
In mission-critical environments that require secure, scalable, and resource-efficient communication, Flying Ad Hoc Networks (FANETs) are increasingly useful. This paper proposes a Python-based simulation framework to analyse a Hybrid Trust–Cryptographic (HTC) protocol designed for unmanned aerial vehicle (UAV) swarm networks. The framework couples lightweight cryptographic primitives (Elliptic Curve Cryptography (ECC), AES-GCM, and ECDSA) with an adaptive trust management mechanism that evaluates UAV behaviour dynamically. The trust–key coupling strategy is feedback-driven: declining trust pre-emptively triggers key refresh or revocation to counter collusion and insider attacks. Parameter values are validated against available cryptographic profiling benchmarks on embedded hardware platforms to ensure realistic modelling of computational cost. The simulation environment uses Gauss–Markov mobility and a probabilistic attack model, and scales to 200 UAV nodes. The results show increased resilience and efficiency: almost 14 % higher packet delivery ratio, 17 % lower end-to-end latency, and 92 % malicious-node detection accuracy, while keeping energy overhead below 15 %. These results establish that adaptive trust evaluation coupled with lightweight cryptographic operations achieves an effective trade-off between security assurance and system performance. With an emphasis on reproducibility, the proposed simulation framework should serve as a benchmark for future research into secure communication systems for large-scale UAV swarms.
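The trust–key coupling is described qualitatively; a minimal sketch of the feedback loop, assuming an exponential-moving-average trust score with illustrative thresholds (the class name, `alpha`, and the 0.6/0.3 cut-offs are not from the paper), might be:

```python
class TrustManager:
    """Feedback-driven trust-key coupling sketch: trust is an exponential
    moving average of behaviour observations; crossing the lower thresholds
    triggers a pre-emptive key refresh, then revocation."""

    def __init__(self, alpha=0.3, refresh_at=0.6, revoke_at=0.3):
        self.alpha = alpha
        self.refresh_at = refresh_at
        self.revoke_at = revoke_at
        self.trust = {}  # UAV id -> trust in [0, 1]; unknown UAVs start fully trusted

    def observe(self, uav_id, behaved_well):
        # Blend the new observation into the running trust score.
        t = self.trust.get(uav_id, 1.0)
        t = (1 - self.alpha) * t + self.alpha * (1.0 if behaved_well else 0.0)
        self.trust[uav_id] = t
        if t < self.revoke_at:
            return "revoke"
        if t < self.refresh_at:
            return "refresh_key"
        return "ok"
```

With these toy parameters, two consecutive misbehaviour observations already force a key refresh, and sustained misbehaviour drives trust below the revocation threshold, matching the pre-emptive intent described above.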
Title: Simulation and evaluation of a hybrid trust–cryptographic protocol for UAV swarm communications. Simulation Modelling Practice and Theory, vol. 146, Article 103230.
Pub Date: 2025-11-14 | DOI: 10.1016/j.simpat.2025.103225
Buchuan Zhang , Chuan-Zhi Thomas Xie
As a distinct form of pedestrian motion, skiing has a long history, yet the recurrence of ski-related accidents underscores the need for deeper inquiry into this dynamic system. In light of this need, the present study adopts a modelling and simulation perspective to construct a framework for analysing skier trajectories and performance, with explicit consideration of the complex interactions between human behaviour and varying environmental and physical conditions. Specifically, a cellular automaton (CA)-based model was developed, incorporating six critical factors: slope angle, surface friction, boundary constraints, terrain curvature, aerodynamic drag, and directional inertia. Probabilistic decision rules combined with physics-based speed updates enable realistic skier movement simulations across a discretized slope grid. The simulations show that slope angle predominantly drives skier speed, while surface friction and aerodynamic drag reduce efficiency by increasing resistance and prolonging descent. Boundary effects, though minor under wide-slope conditions, help confine lateral motion and influence path shaping. Terrain curvature affects turning dynamics, especially on rough or irregular surfaces, while inertia enhances straight-line speed but reduces adaptability. The study underscores the importance of capturing both environmental and behavioural interactions to accurately model downhill skiing dynamics, provides detailed insights into the mechanisms shaping skiing efficiency, and offers a powerful tool for advanced skier simulation and slope performance analysis. In summary, the CA-based framework integrates six environmental factors (slope, friction, boundary, curvature, aerodynamic drag, and inertia) to reproduce realistic motion patterns on alpine slopes.
This study primarily focuses on the dynamics of a single skier, while multi-agent interactions will be explored in future work.
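The physics-based speed update is not spelled out in the abstract; a hedged one-step sketch, assuming a point-mass model with gravity along the slope, Coulomb friction, and quadratic aerodynamic drag (all coefficient values here are illustrative, not the paper's calibration), could be:

```python
import math

def speed_update(v, slope_deg, mu=0.05, k_drag=0.004, mass=80.0, dt=0.5, g=9.81):
    # Acceleration: gravity component along the slope, minus Coulomb
    # friction and quadratic aerodynamic drag; speed is clamped at zero.
    theta = math.radians(slope_deg)
    a = g * (math.sin(theta) - mu * math.cos(theta)) - (k_drag / mass) * v * v
    return max(0.0, v + a * dt)

# Steeper slopes accelerate the skier faster over the same number of steps.
v_steep = v_gentle = 5.0
for _ in range(20):
    v_steep = speed_update(v_steep, 20)
    v_gentle = speed_update(v_gentle, 10)
```

This reproduces two of the reported qualitative findings: slope angle dominates speed, while higher friction (larger `mu`) yields a smaller per-step speed gain.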
Title: Exploring the complexity of pedestrian dynamics in skiing: A modelling and simulation framework. Simulation Modelling Practice and Theory, vol. 146, Article 103225.
Pub Date: 2025-11-13 | DOI: 10.1016/j.simpat.2025.103227
Jasmine Kaur , Inderveer Chana, Anju Bala
Serverless computing has revolutionized cloud platforms by enabling developers to deploy applications without the burden of managing infrastructure. However, challenges such as workload unpredictability, inefficient job scheduling, and high energy consumption remain critical concerns. To address these issues, this paper introduces LMP-Opt, a simulation-driven hybrid model that integrates Long Short-Term Memory (LSTM) for workload prediction, Multi-Agent Deep Q-Learning (MADQL) for job scheduling, and Proximal Policy Optimization (PPO) for fine-tuning scheduling decisions. First, LSTM predicts workload patterns by capturing temporal dependencies, enabling efficient resource provisioning and reducing performance degradation caused by unpredictable workloads. Second, MADQL uses multiple agents to optimize job scheduling by dynamically adjusting allocation strategies in response to workload fluctuations. Third, PPO refines scheduling policies by balancing exploration and exploitation, improving stability and convergence in decision-making. The proposed approach has been validated in ServerlessSimPro, a specialized simulation environment, and further tested on AWS Lambda to ensure applicability to real-world serverless platforms. Extensive experiments on an e-commerce transaction processing workload demonstrate that LMP-Opt significantly improves system performance: the simulation results show a 4.79% reduction in average response time over MADQL and 6.09% over PPO, together with energy savings of 4.35% and 6.14%, respectively. The model also improves cost efficiency, CPU utilization, and resource scalability by reducing node requirements. These results confirm the value of hybrid learning-based simulation models for optimizing scheduling and energy efficiency in serverless computing environments.
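As a rough illustration of the prediction-driven provisioning step (with a weighted moving average standing in for the LSTM, and `capacity_per_node` and `headroom` invented for this sketch, not taken from the paper):

```python
import math

def predict_next(window):
    # Stand-in for the LSTM predictor: weighted moving average over the
    # recent workload window, most recent samples weighted highest.
    weights = range(1, len(window) + 1)
    return sum(w * x for w, x in zip(weights, window)) / sum(weights)

def provision(predicted_load, capacity_per_node=100, headroom=1.2):
    # Provision enough nodes for the predicted load plus headroom,
    # so unpredictable spikes degrade performance less.
    return math.ceil(predicted_load * headroom / capacity_per_node)

pred = predict_next([80, 100, 120, 140])  # rising workload → 120.0 req/s
nodes = provision(pred)                   # → 2 nodes
```

The point of the sketch is the pipeline shape, prediction feeding provisioning, which the scheduling agents (MADQL and PPO in the paper) then act on.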
Title: LMP-Opt: A simulation-based hybrid model for dynamic job scheduling and energy optimization in serverless computing. Simulation Modelling Practice and Theory, vol. 146, Article 103227.
Pub Date: 2025-11-13 | DOI: 10.1016/j.simpat.2025.103229
Shahed Almobydeen , Gaith Rjoub , Jamal Bentahar , Ahmad Irjoob , Muhammad Younas
In cloud computing systems, the proliferation of intelligent edge devices necessitates novel communication and coordination protocols that can operate under significant bandwidth and latency constraints. This necessity is driven not only by performance requirements but also by the growing imperative for sustainable computing, as inefficient communication is a primary driver of resource consumption in large-scale systems. This paper introduces the Hierarchical and Causal-Graph Network (HCGN), a framework designed for efficient, sustainable, and decentralized decision-making in large-scale edge computing environments. HCGN integrates a hierarchical control paradigm, mapping naturally to edge-fog architectures, with a Graph Neural Network (GNN) that learns a bandwidth-efficient communication policy between edge nodes. Furthermore, a novel Causal Credit Assignment Module (CCAM) enables intelligent and sustainable resource allocation by quantifying each node’s true causal contribution to system-wide objectives, ensuring that computational and communication resources are directed to the most effective parts of the network. We demonstrate through extensive simulations, including a novel edge-based collaborative video analytics task, that HCGN significantly outperforms traditional communication protocols in terms of task success rate, communication overhead, and robustness to network degradation. Our results validate HCGN as a scalable and resource-aware solution for building the next generation of sustainable, decentralized edge-fog systems.
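The CCAM is described only by its goal; one simple way to quantify a node's contribution, shown here as a hedged leave-one-out counterfactual (the paper's actual mechanism may differ), is:

```python
def causal_credit(nodes, team_value):
    # Leave-one-out counterfactual: a node's credit is how much the
    # system-wide objective drops when that node is removed.
    full = frozenset(nodes)
    base = team_value(full)
    return {n: base - team_value(full - {n}) for n in nodes}

# Additive toy objective: each node contributes a fixed value to the team.
values = {"edge1": 3.0, "edge2": 1.0, "idle": 0.0}
credit = causal_credit(values, lambda members: sum(values[m] for m in members))
```

For an additive objective this trivially recovers each node's own value; its interest lies in non-additive objectives (e.g. redundant video-analytics coverage), where a node whose work is duplicated elsewhere earns little marginal credit and so attracts fewer resources.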
Title: HCGN: A Hierarchical Causal-Graph Network for sustainable communication and coordination in edge–fog systems. Simulation Modelling Practice and Theory, vol. 146, Article 103229.
Pub Date: 2025-11-12 | DOI: 10.1016/j.simpat.2025.103228
Agostino Bruzzone , Alessia Giulianetti , Marco Gotelli , Anna Sciomachen
In this paper, we analyze different scenarios for container flows arriving at marine terminals and bound for different destinations in the hinterland. The aim of the study is to verify how the type of import containers (standard, hazardous, and refrigerated) and their size affect the operational efficiency of the terminal. Relevant performance indicators, such as container dwell time, average and maximum number of waiting containers, and equipment utilization rate, are evaluated. To this end, we present a discrete-event simulation study that, although generalizable to any port, refers to a terminal in the port network of Genoa (Italy). The scenarios illustrated in this paper are generated with a synthetic data generator for logistics flows and executed as independent steady-state runs in the Witness Horizon v.24 simulation software environment. To the authors’ knowledge, this is the first time a sensitivity analysis based on variation in container types has been presented. The performed simulation experiments can be of great interest to various port stakeholders. Indeed, the results show that the percentage composition of import container types over the annual time horizon considered has an impact on the indicators under analysis, favoring a more balanced distribution, whereas, for the same indicators, the effect of varying container size appears to be negligible. The study highlights how advance knowledge of the types of import containers can support port terminal management in efficiently managing and optimizing resources, providing specific advice on operational decisions concerning equipment and block yard allocation.
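As a toy discrete-event sketch of how container-type mix drives the dwell-time indicator (a single FIFO server with per-type service times; the times and types here are illustrative, not the terminal's data):

```python
def simulate_dwell(arrivals, service_time):
    # Single-crane FIFO queue: each container waits for the crane to free
    # up, then is served for its type-specific time.
    # Dwell time = departure time - arrival time.
    free_at = 0.0
    dwell = []
    for t, ctype in arrivals:
        start = max(t, free_at)
        free_at = start + service_time[ctype]
        dwell.append(free_at - t)
    return dwell

# Hazardous and refrigerated boxes take longer to handle than standard ones.
service = {"standard": 2.0, "hazardous": 4.0, "reefer": 3.0}
arrivals = [(0.0, "standard"), (1.0, "hazardous"), (2.0, "reefer")]
dwell = simulate_dwell(arrivals, service)  # [2.0, 5.0, 7.0]
```

Even in this three-container example, the slow-to-handle types propagate queueing delay to every later arrival, which is the mechanism behind the sensitivity of dwell time to the type mix.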
Title: The impact of import container flow characteristics on port operational efficiency. Simulation Modelling Practice and Theory, vol. 146, Article 103228.
Pub Date: 2025-11-07 | DOI: 10.1016/j.simpat.2025.103223
Jinli Wei , Chunyue Cui , Xiaoxia Yang
The enclosed spaces and high-density population in subway stations significantly complicate evacuation during fires, thus increasing the difficulty of emergency response. To enhance fire rescue capabilities, this study develops robust optimization models for firefighting routes that account for station facility layout, passenger flow distribution, smoke propagation patterns, and human resource expenditure. Firstly, the BKA-GRU deep learning method is designed to calculate passenger passage time at critical nodes such as gates, improving the rationality of firefighting route design. Secondly, a firefighting value function based on the importance of fire nodes is constructed, making the firefighting routes more conducive to efficient and safe passenger evacuation. Thirdly, a box-based intersection polyhedron uncertainty set is employed to model the uncertainties in firefighting travel time and firefighting time, enhancing the adaptability and robustness of the routes. Fourthly, the advanced Ivy algorithm combined with Gurobi is adopted to solve the developed robust optimization model, enabling rapid identification of efficient and stable firefighting routes in complex environments. Finally, both quantitative and qualitative analyses are used to comprehensively evaluate firefighting effectiveness. The results indicate that: (i) the BKA-GRU prediction model exhibits high accuracy and reliability in predicting node passage time; (ii) the robust optimization model for firefighting routes significantly reduces fire by-products, shortens passenger evacuation time, and mitigates congestion; (iii) the firefighting route design achieves significant improvements in temperature control and visibility enhancement, effectively improving the fire environment and enhancing rescue efficiency and safety. This study provides an innovative solution for fire rescue in complex environments.
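The box-based intersection polyhedron uncertainty set can be illustrated with a budgeted worst case over route legs (a hedged simplification; the paper's actual set and its Ivy-plus-Gurobi solver are far richer than this closed-form case):

```python
def robust_route_time(nominal, deviation, budget):
    # Worst case inside the box (each leg may slip by up to deviation[i])
    # intersected with a budget polyhedron (at most `budget` legs slip at
    # once): the adversary spends the budget on the largest deviations.
    worst = sorted(deviation, reverse=True)[:budget]
    return sum(nominal) + sum(worst)

# Three route legs: nominal travel times 3, 5, 2 minutes; legs may slip by
# up to 1, 4, 2 minutes, but at most 2 legs are delayed simultaneously.
worst_case = robust_route_time([3, 5, 2], [1, 4, 2], budget=2)  # → 16
```

Intersecting the box with a budget set is what keeps robust routes from being dominated by the implausible case where every leg is delayed at once.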
Title: Dynamic firefighting route planning for efficient evacuation in complex subway stations: A deep learning-enhanced robust optimization approach. Simulation Modelling Practice and Theory, vol. 146, Article 103223.
Pub Date: 2025-11-07 | DOI: 10.1016/j.simpat.2025.103226
Waseem Abbass , Nasim Abbas , Uzma Majeed
The rapid growth of latency-sensitive Internet of Things (IoT) applications necessitates intelligent and scalable task offloading strategies in edge computing environments operating under dynamic workloads and limited energy resources. This paper introduces SimEdgeAI, a novel Deep Reinforcement Learning (DRL) framework that formulates task offloading as a stochastic decision-making problem over a multi-discrete action space, effectively capturing the trade-offs among local execution, edge offloading, and task dropping. The framework adopts an actor–critic architecture enhanced with a Gumbel–Softmax-based policy representation, enabling differentiable and stable learning over discrete actions. The actor network produces temperature-controlled stochastic policies, while the critic estimates long-term utilities based on system-wide features such as queue lengths, transmission delays, and energy states. A multi-objective reward function penalizing latency violations, excessive energy use, and fairness deviations guides the agent towards globally efficient and equitable offloading decisions. Extensive evaluations demonstrate that SimEdgeAI reduces average task latency by up to 35% and energy consumption by 25% compared to baseline methods including Deep Deterministic Policy Gradient (DDPG), Centralized DQN (C-DQN), and Greedy policies. It achieves over 91% deadline satisfaction and superior fairness measured by Jain’s index across edge clients. Ablation and sensitivity analyses confirm the contribution of each architectural component, while visualization studies underline the framework’s multi-objective consistency. These results highlight SimEdgeAI as an effective and fair solution for real-time, large-scale IoT–edge task offloading problems.
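The Gumbel–Softmax policy head can be sketched in a few lines (a pure-Python stand-in for the actor network's output; the logits and temperature values are illustrative):

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    # Perturb logits with Gumbel(0, 1) noise, then apply a temperature-
    # scaled softmax; lower tau pushes the sample towards a one-hot
    # action choice while keeping the mapping differentiable in the
    # neural-network setting.
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + n) / tau for l, n in zip(logits, g)]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(7)
# Action logits for the multi-discrete choice {run locally, offload, drop}.
probs = gumbel_softmax([2.0, 0.5, 0.1], tau=0.5)
```

The temperature `tau` is what the abstract calls temperature control: annealing it during training moves the stochastic policy from soft exploration towards near-deterministic discrete decisions.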
{"title":"SimEdgeAI: A deep reinforcement learning framework for simulating task offloading in resource-constrained IoT networks","authors":"Waseem Abbass , Nasim Abbas , Uzma Majeed","doi":"10.1016/j.simpat.2025.103226","DOIUrl":"10.1016/j.simpat.2025.103226","url":null,"abstract":"<div><div>The rapid growth of latency-sensitive Internet of Things (IoT) applications necessitates intelligent and scalable task offloading strategies in edge computing environments operating under dynamic workloads and limited energy resources. This paper introduces SimEdgeAI, a novel Deep Reinforcement Learning (DRL) framework that formulates task offloading as a stochastic decision-making problem over a multi-discrete action space, effectively capturing the trade-offs among local execution, edge offloading, and task dropping. The framework adopts an actor–critic architecture enhanced with a Gumbel–Softmax-based policy representation, enabling differentiable and stable learning over discrete actions. The actor network produces temperature-controlled stochastic policies, while the critic estimates long-term utilities based on system-wide features such as queue lengths, transmission delays, and energy states. A multi-objective reward function penalizing latency violations, excessive energy use, and fairness deviations guides the agent towards globally efficient and equitable offloading decisions. Extensive evaluations demonstrate that SimEdgeAI reduces average task latency by up to 35% and energy consumption by 25% compared to baseline methods including Deep Deterministic Policy Gradient (DDPG), Centralized DQN (C-DQN), and Greedy policies. It achieves over 91% deadline satisfaction and superior fairness measured by Jain’s index across edge clients. Ablation and sensitivity analyses confirm the contribution of each architectural component, while visualization studies underline the framework’s multi-objective consistency. 
These results highlight SimEdgeAI as an effective and fair solution for real-time, large-scale IoT–edge task offloading problems.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103226"},"PeriodicalIF":3.5,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
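The Gumbel–Softmax policy representation mentioned in the abstract can be sketched in a few lines. This is a minimal illustration of the general relaxation technique, not the authors' implementation; the three-way action space (local execution, edge offloading, task dropping) follows the abstract, while the logit values, temperatures, and function names here are hypothetical.

```python
import numpy as np

def gumbel_softmax(logits, temperature=1.0, rng=None):
    """Gumbel-Softmax relaxation of categorical sampling.

    Returns a probability vector over actions that approaches a
    one-hot selection as temperature -> 0, while remaining a smooth
    function of the logits (hence usable with backpropagation in an
    actor network).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1).
    # Small epsilons guard against log(0).
    u = rng.uniform(size=np.shape(logits))
    g = -np.log(-np.log(u + 1e-20) + 1e-20)
    y = (np.asarray(logits) + g) / temperature
    y = y - y.max()              # numerical stability before exp
    e = np.exp(y)
    return e / e.sum()

# Hypothetical 3-way offloading decision: [local, edge, drop]
logits = np.array([2.0, 1.0, -1.0])
rng = np.random.default_rng(0)
soft = gumbel_softmax(logits, temperature=1.0, rng=rng)   # diffuse
hard = gumbel_softmax(logits, temperature=0.05, rng=rng)  # near one-hot
```

Lowering the temperature sharpens the sampled vector toward a discrete action choice while keeping it differentiable with respect to the logits, which is what makes this relaxation attractive for actor–critic training over discrete or multi-discrete action spaces.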
Pub Date : 2025-11-07DOI: 10.1016/j.simpat.2025.103224
Jiajun Feng, Panpan Guo, Penghui Xue, Siyao Liu, Gan Wang, Yixian Wang
This study investigates ground-surface settlement and tunnel deformation induced by the construction of a TBM-driven tunnel that obliquely undercrosses in-service high-speed railway tunnels. An analytical solution for predicting surface settlement is proposed by introducing undercrossing-angle and high-speed-train-load correction coefficients into the classical Peck formula. We validate the model’s applicability to oblique undercrossing with numerical simulations and field measurements. Building on these insights, we conduct three-dimensional finite-element (FE) modelling to quantify the effects of undercrossing angle (50°, 78°, 90°), tunnel clear distance (17.3, 13.3, 9.3 m), and excavation staging (10, 50, 100 steps) on surface settlement. The mechanism by which train loading influences railway-tunnel deformation is also analyzed. The results show that the proposed analytical solution improves surface-settlement prediction, keeping the error within 15 %. Specifically, larger undercrossing angles narrow the settlement trough and reduce the maximum settlement. Decreasing the clear distance from 17.3 to 9.3 m increases surface settlement by 65.96 %. Under train loading, surface settlement increases progressively with the number of TBM excavation steps. Train loading markedly amplifies overall tunnel deformation, increasing longitudinal deformation by 150 % and intensifying non-uniformity.
{"title":"Ground surface settlements and deformation behavior of in-service high-speed railway tunnel induced by obliquely undercrossed TBM tunnelling","authors":"Jiajun Feng, Panpan Guo, Penghui Xue, Siyao Liu, Gan Wang, Yixian Wang","doi":"10.1016/j.simpat.2025.103224","DOIUrl":"10.1016/j.simpat.2025.103224","url":null,"abstract":"<div><div>This study investigates ground-surface settlement and tunnel deformation induced by the construction of a TBM-driven tunnel that obliquely undercrosses in-service high-speed railway tunnels. An analytical solution for predicting surface settlement is proposed by introducing the undercrossing angle and high-speed train load correction coefficients into the classical Peck formula. We validate the model’s applicability to oblique undercrossing with numerical simulations and field measurements. Building on these insights, we conduct three-dimensional finite-element (FE) modelling to quantify the effects of undercrossing angle (50°, 78°, 90°), tunnel clear distance (17.3, 13.3, 9.3 m), and excavation staging (10, 50, 100 steps) on surface settlement. The influence mechanism of train load on the deformation of the railway tunnel is analyzed. The results show that the proposed analytical solution improves surface-settlement prediction, keeping the error within 15 %. Specifically, larger undercrossing angles narrow the settlement trough and reduce the maximum settlement. Decreasing the clear distance from 17.3 to 9.3 m increases surface settlement by 65.96 %. Under train loading, surface settlement increases progressively with the number of TBM excavation steps. Train loading markedly amplifies overall tunnel deformation, increasing longitudinal deformation by 150 % and intensifying non-uniformity. 
The integrated analytical–numerical framework provides a practical basis for safety assessment and for optimising protective measures in similar undercrossing projects.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103224"},"PeriodicalIF":3.5,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
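The classical Peck formula that the analytical solution extends can be written down directly. The sketch below implements the standard Gaussian settlement trough; the two correction coefficients (named `k_angle` and `k_load` here) are placeholders standing in for the paper's undercrossing-angle and train-load corrections, whose actual functional forms are not given in the abstract.

```python
import math

def peck_settlement(x, v_loss, i_width, k_angle=1.0, k_load=1.0):
    """Peck-type surface settlement at transverse offset x (m).

    v_loss  : ground volume loss per unit tunnel length (m^3/m)
    i_width : settlement trough width parameter i (m)
    k_angle, k_load : illustrative multiplicative correction
        coefficients (hypothetical placeholders for the paper's
        undercrossing-angle and train-load corrections)
    """
    # Classical Peck: S_max = V_s / (i * sqrt(2*pi)), Gaussian decay in x
    s_max = v_loss / (i_width * math.sqrt(2.0 * math.pi))
    return k_angle * k_load * s_max * math.exp(-x**2 / (2.0 * i_width**2))

# Maximum settlement occurs at the trough centre (x = 0);
# parameter values below are arbitrary illustration inputs.
s0 = peck_settlement(0.0, v_loss=0.05, i_width=6.0)
s10 = peck_settlement(10.0, v_loss=0.05, i_width=6.0)
```

A narrower trough (smaller `i_width`) concentrates the same volume loss over a smaller span and raises the peak settlement, which is the trade-off the abstract's undercrossing-angle results describe.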
Pub Date : 2025-11-02DOI: 10.1016/j.simpat.2025.103222
Arun Ananthanarayanan , S. Kanithan , Sathish Kumar Hari , Naeem Ahmed , Nadeem Pasha
Accurate estimation of channel state information (CSI) is a prerequisite for efficient beamforming and a key determinant of high-data-rate, reliable communication in modern wireless networks. However, classical approaches tend to be inefficient in complex, fast-changing environments. To address these difficulties, this paper proposes a Deep Single-Carrier Orthogonal Frequency Division Multiplexing (Deep SC-OFDM) framework, which incorporates end-to-end Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) networks for adaptive signal processing in 6G systems. The proposed model performs modulation and equalization jointly, overcoming drawbacks of standard OFDM systems, such as high PAPR and poor interference tolerance, by leveraging the spatial feature extraction of CNNs and the temporal feature extraction of LSTMs. Simulation results demonstrate that the detector minimizes signal degradation and increases symbol-detection accuracy, and that the Deep SC-OFDM framework exhibits lower PAPR with improved BER performance. The proposed approach thus outperforms other deep-learning-based MIMO and beamforming methods in convergence speed and spectral efficiency. These findings suggest that the proposed approach is well suited to intelligent and energy-efficient transceiver architectures in future 6G networks.
{"title":"Enhancing 6G wireless performance through advanced MIMO techniques","authors":"Arun Ananthanarayanan , S. Kanithan , Sathish Kumar Hari , Naeem Ahmed , Nadeem Pasha","doi":"10.1016/j.simpat.2025.103222","DOIUrl":"10.1016/j.simpat.2025.103222","url":null,"abstract":"<div><div>To apply efficient beamforming, we need to be able to estimate channel state information (CSI) accurately. It is an essential factor that determines the success of high-data-rate, reliable communication in modern wireless networks. However, classic approaches tend to be inefficient in complex and fast-changing environments. To solve these difficulties, this paper proposes a Deep Single-Carrier Orthogonal Frequency Division Multiplexing (Deep SC-OFDM) framework, which incorporates end-to-end Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) networks for adaptive signal processing in 6G systems. The proposed model simultaneously performs modulation and equalization, overcoming the drawbacks of standard OFDM systems, such as high PAPR and poor interference tolerance, by leveraging CNNs' spatial feature extraction and LSTMs' temporal feature extraction. The detector can minimize signal degradation and increase symbol detection accuracy, as demonstrated by simulation results. In addition, the Deep SC-OFDM framework exhibits lower PAPR with improved BER performance. Thus, our proposed approach outperforms other deep-learning-based MIMO and beamforming methods in terms of performance, faster convergence, and higher spectral efficiency. 
These findings suggest that the proposed approach is well suited to intelligent and energy-efficient transceiver architectures in future 6G networks.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103222"},"PeriodicalIF":3.5,"publicationDate":"2025-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145623772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
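The PAPR gap between multicarrier OFDM and a single-carrier waveform that motivates this line of work is easy to reproduce numerically. The sketch below is a generic OFDM-versus-single-carrier comparison, not the proposed deep-learning transceiver; the QPSK constellation, carrier count, and random seed are arbitrary choices for illustration.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
n = 256
# Random unit-power QPSK symbols: constant envelope in the time domain
sym = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
# OFDM maps the same symbols onto n subcarriers via an IFFT; the sum of
# many independent carriers produces occasional large peaks
ofdm = np.fft.ifft(sym) * np.sqrt(n)
```

A constant-envelope single-carrier stream has a PAPR of 0 dB by construction, while the 256-carrier OFDM signal shows peaks several dB above its average power, which is the distortion/back-off problem that single-carrier and hybrid schemes aim to reduce.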