Generative agents for urban mobility: A cognitive framework for realistic travel behavior simulation
Pub Date: 2025-11-27 | DOI: 10.1016/j.simpat.2025.103234
Qi Liu, Can Li, Wanjing Ma
Traditional agent-based urban mobility simulations rely on predefined expert rules, limiting their ability to capture the complexity and adaptability of human mobility decisions. This paper introduces GATSim (Generative Agent Transport Simulation), a novel framework that integrates agents powered by large language models (LLMs) into urban mobility simulation environments. GATSim employs generative agents with an innovative cognitive architecture comprising hierarchical memory systems, multi-modal retrieval mechanisms, and planning, reactive, and reflection processes. GATSim is validated at both microscopic and macroscopic levels using a prototype on a stylized transportation network. The results show that the generative agents exhibit peak spreading, route learning, and incident response behaviors that mirror real-world dynamics. This work contributes to the paradigm shift from rule-based to intelligence-based urban mobility simulation, providing a more realistic and flexible framework for urban transportation modeling. The code for the prototype implementation is publicly available at: https://GitHub.com/qiliuchn/gatsim.
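The abstract does not spell out the agent internals, but the cognitive loop it names (retrieve from a hierarchical memory, prompt an LLM for a plan, store the result) can be sketched compactly. The Python sketch below is a minimal illustration under those assumptions; the names (MemoryStream, TravelAgent, the stubbed llm callable) are hypothetical and not GATSim's actual API.

```python
"""Illustrative sketch of a generative travel agent with a hierarchical memory
and an LLM-prompted planning step. Class and function names are hypothetical."""
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class MemoryItem:
    day: int
    kind: str          # 'event' | 'plan' | 'reflection'
    text: str
    importance: float

@dataclass
class MemoryStream:
    items: List[MemoryItem] = field(default_factory=list)

    def add(self, item: MemoryItem) -> None:
        self.items.append(item)

    def retrieve(self, query: str, k: int = 3) -> List[MemoryItem]:
        # Toy relevance score: keyword overlap weighted by importance and recency.
        words = set(query.lower().split())
        def score(m: MemoryItem) -> float:
            overlap = len(words & set(m.text.lower().split()))
            return overlap + m.importance + 0.1 * m.day
        return sorted(self.items, key=score, reverse=True)[:k]

@dataclass
class TravelAgent:
    name: str
    memory: MemoryStream
    llm: Callable[[str], str]      # any text-in/text-out model

    def plan_day(self, day: int, context: str) -> str:
        recalled = self.memory.retrieve(context)
        prompt = (
            f"You are {self.name}. Relevant memories:\n"
            + "\n".join(f"- {m.text}" for m in recalled)
            + f"\nToday: {context}\nChoose departure time, mode and route."
        )
        plan = self.llm(prompt)
        self.memory.add(MemoryItem(day, "plan", plan, importance=0.5))
        return plan

if __name__ == "__main__":
    stub_llm = lambda p: "Depart 07:30, drive via Route A (avoid yesterday's jam)."
    agent = TravelAgent("commuter_1", MemoryStream(), stub_llm)
    agent.memory.add(MemoryItem(0, "event", "Route B was congested at 08:00", 0.8))
    print(agent.plan_day(1, "commute to work, light rain"))
```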
{"title":"Generative agents for urban mobility: A cognitive framework for realistic travel behavior simulation","authors":"Qi Liu, Can Li, Wanjing Ma","doi":"10.1016/j.simpat.2025.103234","DOIUrl":"10.1016/j.simpat.2025.103234","url":null,"abstract":"<div><div>Traditional agent-based urban mobility simulations rely on predefined expert rules, limiting their ability to capture the complexity and adaptability of human mobility decisions. This paper introduces GATSim (Generative Agent Transport Simulation), a novel framework that integrates agents powered by large language models (LLM) into urban mobility simulation environments. GATSim employs generative agents with innovative cognitive architectures including hierarchical memory systems, multi-modal retrieval mechanisms, planning, reactive and reflection processes. GATSim is validated at both microscopic and macroscopic levels using a prototype on a stylized transportation network. The results show that the generative agents exhibit peak spreading, route learning, and incident response behaviors that mirror the dynamics of the real world. This work contributes to the paradigm shift from rule-based to intelligence-based urban mobility simulation, providing a more realistic and flexible framework for urban transportation modeling. The code for the prototype implementation is publicly available at: <span><span>https://GitHub.com/qiliuchn/gatsim</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"147 ","pages":"Article 103234"},"PeriodicalIF":3.5,"publicationDate":"2025-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing process parameters in cold spray additive manufacturing: A data-driven, simulation-based multi-objective approach
Pub Date: 2025-11-26 | DOI: 10.1016/j.simpat.2025.103235
Hao Chen, Zhilang Zhang, Markus Bambach, Mohamadreza Afrasiabi
Cold spray additive manufacturing (CSAM) is an emerging solid-state deposition technique that uses high-velocity gas to propel powdered materials onto a substrate. Analyzing objective functions for process parameter optimization in CSAM requires data that is usually obtained from costly experiments or numerical simulations, and embedding such simulations or experiments directly in conventional optimization algorithms leads to prohibitive computational costs. Additionally, these optimization problems typically involve multiple conflicting objectives that must be considered simultaneously. In this work, we develop a data-driven, simulation-based multi-objective optimization framework (SMOF) to optimize CSAM process parameters online. The smoothed particle hydrodynamics (SPH) method is used to perform CSAM simulations. A new optimal grid mutation-based infill criterion (OIC) is proposed to enhance the surrogate-assisted search in SMOF. Subsequently, numerical simulations are replaced by an ensemble of surrogates with high prediction robustness. We assess the effectiveness of the proposed OIC on two benchmark test problems and further optimize multiple powder impact problems. The optimization results demonstrate that SMOF can identify desirable process parameter combinations for the CSAM process. Based on the proposed SMOF, refined multi-objective process parameter windows are established for the first time to analyze the evolution of CSAM process parameters.
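The SPH solver and the exact OIC are beyond an abstract-level sketch, but the generic surrogate-assisted multi-objective loop such a framework builds on can be illustrated. In the sketch below, a cheap analytic function stands in for the SPH simulation, a 1-nearest-neighbour prediction stands in for the surrogate ensemble, and a simple mutation of non-dominated points stands in for the infill criterion; all of these are assumptions for illustration, not the paper's method.

```python
"""Minimal surrogate-assisted multi-objective loop with stand-in components."""
import numpy as np

def expensive_eval(x):                      # placeholder for an SPH run
    f1 = np.sum((x - 0.2) ** 2)             # e.g. porosity proxy
    f2 = np.sum((x - 0.8) ** 2)             # e.g. deposition-efficiency proxy
    return np.array([f1, f2])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def non_dominated(F):
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(12, 2))     # initial design, 2 process parameters
F = np.array([expensive_eval(x) for x in X])

for _ in range(10):                         # optimization iterations
    front = non_dominated(F)
    parent = X[rng.choice(front)]
    child = np.clip(parent + rng.normal(0, 0.1, size=parent.shape), 0.0, 1.0)
    # Surrogate screening: accept the child only if a 1-NN prediction looks promising.
    d = np.linalg.norm(X - child, axis=1)
    predicted = F[np.argmin(d)]
    if any(dominates(predicted, F[i]) for i in front) or rng.random() < 0.3:
        X = np.vstack([X, child])
        F = np.vstack([F, expensive_eval(child)])   # "expensive" evaluation

print("Non-dominated parameter sets:")
print(X[non_dominated(F)])
```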
{"title":"Optimizing process parameters in cold spray additive manufacturing: A data-driven, simulation-based multi-objective approach","authors":"Hao Chen , Zhilang Zhang , Markus Bambach , Mohamadreza Afrasiabi","doi":"10.1016/j.simpat.2025.103235","DOIUrl":"10.1016/j.simpat.2025.103235","url":null,"abstract":"<div><div>Cold spray additive manufacturing (CSAM) is an emerging solid-state deposition technique that utilizes high-velocity gas to propel powdered materials onto a substrate. Analysis of objective functions for process parameter optimization in CSAM requires data that is usually obtained from costly experiments or numerical simulations. Integrating simulations or experiments directly into conventional optimization algorithms can lead to significantly high computational costs. Additionally, these optimization problems typically involve multiple conflicting objectives that should be taken into account simultaneously. In this work, we develop a data-driven, simulation-based multi-objective optimization framework (SMOF) to optimize CSAM process parameters online. The smoothed particle hydrodynamics (SPH) method is used to perform CSAM simulations. A new optimal grid mutation-based infill criterion (OIC) is proposed to enhance the surrogate-assisted search in SMOF. Subsequently, numerical simulations are replaced by an ensemble of surrogates with high prediction robustness. We assess the effectiveness of the proposed OIC on two benchmark test problems and further optimize multiple powder impact problems. The optimization results demonstrate that the present SMOF can identify desired process parameter combinations for the CSAM process. Based on the proposed SMOF, refined multi-objective process parameter windows are established for the first time to analyze the evolution of CSAM process parameters.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103235"},"PeriodicalIF":3.5,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145623774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A GIS-integrated agent-based simulation framework for modeling and evaluation of police patrol operations
Pub Date: 2025-11-25 | DOI: 10.1016/j.simpat.2025.103233
Yasaman Ghasemi, Yuan Zhou, Sina Zare, Victoria C.P. Chen
This study presents a Geographic Information Systems (GIS)-integrated agent-based simulation (ABS) framework designed to evaluate police patrol deployment and shift scheduling under realistic operational constraints. The model integrates empirical Intergraph Computer-Aided Dispatch (I/CAD) data, GIS-based travel-time routing, and shift-level scheduling logic within a unified ABS environment. It captures dynamic interactions among patrol units, incident locations, and time-varying service demand.
A series of scenario-based experiments investigates the effects of key operational parameters (shift length: 8-hour vs. 10-hour; patrol force size; and routing logic: shortest vs. fastest path) on system performance indicators such as response time, officer utilization, and workload balance. Results show that 10-hour shifts consistently improve response efficiency compared to 8-hour shifts, while larger patrol forces enhance workload equity without significantly reducing delays. The model also quantifies the trade-offs between workforce expansion and scheduling strategy.
The simulation is calibrated using real-world patrol data from the Arlington Police Department, Texas, and validated through both historical benchmarks and synthetic call-arrival profiles. The model offers a configurable and adaptable simulation-based planning framework for urban public-service operations. The proposed framework demonstrates how agent-based simulation, enriched with spatial routing and empirical scheduling data, can support tactical decision-making in complex, service-driven systems.
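As a concrete illustration of the dispatch logic such a model requires, the toy Python sketch below assigns each incoming call to the unit with the earliest estimated arrival. Unit positions, the travel-time function, and the 15-minute on-scene time are placeholders; the actual framework derives travel times from GIS routing and calibrates against I/CAD data.

```python
"""Toy dispatch loop: send each call to the unit that can reach it earliest."""
import random

random.seed(1)
UNITS = {f"unit{i}": {"free_at": 0.0, "pos": (random.random(), random.random())}
         for i in range(4)}

def travel_time(pos_a, pos_b, mode="fastest"):
    # Placeholder: scaled Euclidean distance; the real model queries a road network.
    d = ((pos_a[0] - pos_b[0]) ** 2 + (pos_a[1] - pos_b[1]) ** 2) ** 0.5
    return d * (8.0 if mode == "fastest" else 10.0)

# (arrival minute, incident location), sorted by arrival time
calls = sorted((random.uniform(0, 60), (random.random(), random.random()))
               for _ in range(10))

response_times = []
for arrival, loc in calls:
    def eta(u):
        # Earliest time unit u can be on scene: availability + travel.
        start = max(arrival, UNITS[u]["free_at"])
        return start + travel_time(UNITS[u]["pos"], loc)
    unit = min(UNITS, key=eta)
    on_scene = eta(unit)
    UNITS[unit]["free_at"] = on_scene + 15.0     # assumed 15 min handling time
    UNITS[unit]["pos"] = loc
    response_times.append(on_scene - arrival)

print(f"mean response time: {sum(response_times) / len(response_times):.1f} min")
```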
{"title":"A GIS-integrated agent-based simulation framework for modeling and evaluation of police patrol operations","authors":"Yasaman Ghasemi , Yuan Zhou , Sina Zare , Victoria C.P. Chen","doi":"10.1016/j.simpat.2025.103233","DOIUrl":"10.1016/j.simpat.2025.103233","url":null,"abstract":"<div><div>This study presents a Geographic Information Systems (GIS)-integrated agent-based simulation (ABS) framework designed to evaluate police patrol deployment and shift scheduling under realistic operational constraints. The model integrates empirical Intergraph Computer-Aided Dispatch (I/CAD) data, GIS-based travel-time routing, and shift-level scheduling logic within a unified ABS environment. It captures dynamic interactions among patrol units, incident locations, and time-varying service demand.</div><div>A series of scenario-based experiments investigate the effects of key operational parameters, shift length (8-hour vs. 10-hour), patrol force size, and routing logic (shortest vs. fastest path) on system performance indicators such as response time, officer utilization, and workload balance. Results show that 10-hour shifts consistently improve response efficiency compared to 8-hour shifts, while larger patrol sizes enhance workload equity without significantly reducing delays. The model also quantifies the trade-offs between workforce expansion and scheduling strategy.</div><div>The simulation is calibrated using real-world patrol data from the Arlington Police Department, Texas, and validated through both historical benchmarks and synthetic call-arrival profiles. The model offers a configurable and adaptable simulation-based planning framework for urban public-service operations. The proposed framework demonstrates how agent-based simulation, enriched with spatial routing and empirical scheduling data, can support tactical decision-making in complex, service-driven systems.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"147 ","pages":"Article 103233"},"PeriodicalIF":3.5,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of urban mobility processes through the integration of process mining
Pub Date: 2025-11-25 | DOI: 10.1016/j.simpat.2025.103232
Selsabil Ines Bouhidel, Nabil Belala
We introduce a dual-log process mining approach for jointly modeling and optimizing behaviors in Vehicular Ad Hoc Networks (VANETs) and urban road traffic. Simulation event logs from SUMO (traffic dynamics) and NS2 (network communications) are synchronized, preprocessed, and mined using the Fuzzy Miner and Petri-net discovery in the ProM tool to produce interpretable process models. These models uncover critical anomalies, congestion hotspots, CO₂ emission peaks, and packet-delivery bottlenecks, and drive a continuous feedback loop that adaptively tunes routing protocols and eco-driving strategies in real time. Experimental evaluation demonstrated the framework’s effectiveness in identifying recurring high-emission behaviors, communication bottlenecks, and incomplete packet flows across a large-scale VANET and traffic simulation dataset. The process models significantly improved behavioral interpretability and reduced the time required for manual analysis and anomaly tracing. Future work will extend this approach with predictive modules and online mining capabilities for enhanced adaptability in dynamic VANET environments.
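Discovery itself happens in ProM, but the preprocessing step of aligning the two simulator logs into one case-based event stream can be sketched. The pandas example below merges a SUMO-style traffic log and an NS2-style communication log per vehicle and sorts by timestamp; the column names and CSV layout are assumptions, not the actual trace formats.

```python
"""Sketch of the dual-log alignment step: merge SUMO and NS2 events per vehicle
into a single case-based event log that a process-mining tool can consume."""
import pandas as pd

sumo = pd.DataFrame({
    "veh": ["v1", "v1", "v2"],
    "t":   [10.0, 12.5, 11.0],
    "activity": ["enter_edge_A", "stop_congestion", "enter_edge_B"],
})
ns2 = pd.DataFrame({
    "veh": ["v1", "v2"],
    "t":   [12.6, 11.2],
    "activity": ["packet_drop", "packet_sent"],
})

# Unified event log: one row per event, case id = vehicle, sorted by time.
log = (pd.concat([sumo.assign(source="sumo"), ns2.assign(source="ns2")])
         .sort_values(["veh", "t"])
         .reset_index(drop=True))
print(log)

# Export in a (case_id, activity, timestamp) CSV layout importable by ProM.
log.rename(columns={"veh": "case_id", "t": "timestamp"}).to_csv(
    "merged_log.csv", index=False)
```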
{"title":"Optimization of urban mobility processes through the integration of process mining","authors":"Selsabil Ines Bouhidel, Nabil Belala","doi":"10.1016/j.simpat.2025.103232","DOIUrl":"10.1016/j.simpat.2025.103232","url":null,"abstract":"<div><div>We introduce a dual-log process mining approach for jointly modeling and optimizing behaviors in Vehicular Ad Hoc Networks (VANETs) and urban road traffic. Simulation event logs from SUMO (traffic dynamics) and NS2 (network communications) are synchronized, preprocessed, and mined using Fuzzy Miner and Petri-net discovery in the ProM tool to produce interpretable process models. These models uncover critical anomalies, congestion hotspots, CO<span><math><msub><mrow></mrow><mrow><mn>2</mn></mrow></msub></math></span> emissions peaks, and packet-delivery bottlenecks and drive a continuous feedback loop that adaptively tunes routing protocols and eco-driving strategies in real-time. Experimental evaluation demonstrated the framework’s effectiveness in identifying recurring high-emission behaviors, communication bottlenecks, and incomplete packet flows across a large-scale VANET and traffic simulation dataset. The process models significantly improved behavioral interpretability and reduced the time required for manual analysis and anomaly tracing. Future work will extend this approach with predictive modules and online mining capabilities for enhanced adaptability in dynamic VANET environments.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103232"},"PeriodicalIF":3.5,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145623831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scheduling mixed workloads with security requirements in a cloud-fog-mist computing environment
Pub Date: 2025-11-22 | DOI: 10.1016/j.simpat.2025.103231
Helen D. Karatza
Cooperating cloud-fog-mist computing frameworks have been methodically designed to balance computational efficiency and data privacy during the execution of complex applications with diverse security demands. To guarantee the proper execution of these applications, the implementation of security-aware scheduling strategies is crucial. This paper explores security-aware scheduling policies, with a focus on developing algorithms tailored for heterogeneous workloads, including both simple single-task jobs and Bags of Linear Workflows (BoLWs) with varying priority levels. Multi-criteria scheduling algorithms are utilized to handle tasks by priority in the three layers. These algorithms are evaluated under different conditions, including varying system utilization, security requirements, and task service demands. Building on the epoch policy discussed in prior research, which considers job security levels, we propose an enhanced epoch-based approach that also accounts for the number of virtual machines allocated to each BoLW job alongside its security requirements. Simulation results demonstrate the superior performance of this novel epoch strategy compared to the previously established approach.
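A minimal sketch of the ranking idea behind the enhanced epoch policy is given below: at each epoch, queued BoLW jobs are reordered by a score that combines security level with the number of VMs requested. The job fields and weights are illustrative assumptions, not the paper's exact policy.

```python
"""Illustrative epoch-based ranking of queued BoLW jobs by security level and VM count."""
from dataclasses import dataclass

@dataclass
class BoLWJob:
    job_id: int
    security_level: int   # higher = stricter requirement
    vms_required: int
    arrival: float

def epoch_rank(queue, w_sec=1.0, w_vm=0.5):
    # Higher security first; among equals, prefer jobs tying up fewer VMs.
    return sorted(queue, key=lambda j: (-w_sec * j.security_level,
                                        w_vm * j.vms_required,
                                        j.arrival))

queue = [BoLWJob(1, security_level=2, vms_required=6, arrival=0.0),
         BoLWJob(2, security_level=3, vms_required=2, arrival=1.0),
         BoLWJob(3, security_level=3, vms_required=5, arrival=0.5)]

for job in epoch_rank(queue):
    print(job.job_id, job.security_level, job.vms_required)
# -> job 2, then job 3 (same security, fewer VMs first), then job 1
```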
{"title":"Scheduling mixed workloads with security requirements in a cloud-fog-mist computing environment","authors":"Helen D. Karatza","doi":"10.1016/j.simpat.2025.103231","DOIUrl":"10.1016/j.simpat.2025.103231","url":null,"abstract":"<div><div>Cooperating cloud-fog-mist computing frameworks have been methodically designed to balance computational efficiency and data privacy during the execution of complex applications with diverse security demands. To guarantee the proper execution of these applications, the implementation of security-aware scheduling strategies is crucial. This paper explores security-aware scheduling policies, with a focus on developing algorithms tailored for heterogeneous workloads, including both simple single-task jobs and Bags of Linear Workflows (BoLWs) with varying priority levels. Multi-criteria scheduling algorithms are utilized to handle tasks by priority in the three layers. These algorithms are evaluated under different conditions, including varying system utilization, security requirements, and task service demands. Building on the epoch policy discussed in prior research, which considers job security levels, we propose an enhanced epoch-based approach that also accounts for the number of virtual machines allocated to each BoLW job alongside its security requirements. Simulation results demonstrate the superior performance of this novel epoch strategy compared to the previously established approach.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103231"},"PeriodicalIF":3.5,"publicationDate":"2025-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145623771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation and evaluation of a hybrid trust–cryptographic protocol for UAV swarm communications
Pub Date: 2025-11-20 | DOI: 10.1016/j.simpat.2025.103230
Raju Singh
In mission-critical environments that require secure, scalable, and resource-efficient communication, Flying Ad Hoc Networks (FANETs) are increasing in utility. This paper proposes a Python-based simulation framework to analyse a Hybrid Trust–Cryptographic (HTC) protocol designed for unmanned aerial vehicle (UAV) swarm networks. The framework couples lightweight cryptographic primitives (Elliptic Curve Cryptography (ECC), AES-GCM, and ECDSA) with an adaptive trust management mechanism that evaluates UAV behaviour dynamically. The trust–key coupling strategy is feedback-driven: declining trust pre-emptively triggers key refresh or revocation to counter collusion and insider attacks. Parameter values are validated against available cryptographic profiling benchmarks on embedded hardware platforms to ensure realism in modelling computational cost. The simulation environment uses a Gauss–Markov mobility model and a probabilistic attack model, and scales to 200 UAV nodes. The results show improved resilience and efficiency, with an almost 14% higher packet delivery ratio, 17% lower end-to-end latency, and 92% malicious-node detection accuracy, while keeping energy overhead below 15%. These results establish that adaptive trust evaluation coupled with lightweight cryptographic operations creates an optimal trade-off between security assurance and system performance. With its emphasis on reproducibility, the proposed simulation framework can serve as a benchmark for future research into secure communication systems for large-scale UAV swarms.
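The trust–key coupling can be sketched independently of the cryptographic primitives. In the Python sketch below, a peer's trust score is updated from observed behaviour with exponential smoothing, and crossing lower thresholds triggers a pre-emptive session-key refresh or revocation. The thresholds, the smoothing factor, and the random-bytes key stand-in are assumptions, not the paper's ECC/AES-GCM/ECDSA implementation.

```python
"""Sketch of feedback-driven trust-key coupling: falling trust refreshes or revokes keys."""
import secrets

class UAVPeer:
    def __init__(self, name, alpha=0.3):
        self.name = name
        self.alpha = alpha                        # weight of the newest observation
        self.trust = 0.8
        self.session_key = secrets.token_bytes(32)  # stand-in for negotiated key material
        self.revoked = False

    def observe(self, behaved_well: bool):
        evidence = 1.0 if behaved_well else 0.0
        self.trust = (1 - self.alpha) * self.trust + self.alpha * evidence
        if self.trust < 0.3:
            self.revoked = True                          # exclude from the swarm
        elif self.trust < 0.6:
            self.session_key = secrets.token_bytes(32)   # pre-emptive key refresh

peer = UAVPeer("uav_17")
for ok in [True, False, False, False, True, False]:
    peer.observe(ok)
    print(f"trust={peer.trust:.2f} revoked={peer.revoked}")
```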
{"title":"Simulation and evaluation of a hybrid trust–cryptographic protocol for UAV swarm communications","authors":"Raju Singh","doi":"10.1016/j.simpat.2025.103230","DOIUrl":"10.1016/j.simpat.2025.103230","url":null,"abstract":"<div><div>In mission-critical environments that require secure, scalable, and resource-efficient communication, Flying Ad Hoc Networks (FANETs) are increasing in utility. This paper proposed a Python-based simulation framework to analyse a Hybrid Trust–Cryptographic (HTC) protocol designed for unmanned aerial vehicle (UAV) swarm networks. The framework couples’ lightweight cryptographic primitives: Elliptic Curve Cryptography (ECC), AES-GCM, and ECDSA, with an adaptive trust management mechanism that qualifies UAV behaviour in a dynamic way. The trust–key coupling strategy is feedback-driven; declining trust will evoke key refresh or revocation on a pre-emptive basis to address the threats of collusion and insider attacks. Parameter values are validated against existing available cryptographic profiling benchmarks on embedded hardware platforms to ensure realism in modelling computational cost. The simulation environment is built under Gauss–Markov mobility and probabilistic attack model and has scalability with UAV nodes up to 200. The results show an increase in resilience and efficiency with almost 14 % higher packet delivery ratio, 17 % lower end-to-end latency, and 92 % of malicious node detection accuracy, also keeping energy overhead below 15 %. These results establish that adaptive trust evaluation coupled with lightweight cryptographic operations creates an optimal trade-off between security assurance and system performance. With an emphasis on reproducibility, this proposed simulation framework should thus serve as a benchmark for future research into secure communication systems for large-scale UAV swarms.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103230"},"PeriodicalIF":3.5,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145623773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the complexity of pedestrian dynamics in skiing: A modelling and simulation framework
Pub Date: 2025-11-14 | DOI: 10.1016/j.simpat.2025.103225
Buchuan Zhang, Chuan-Zhi Thomas Xie
As a distinct form of pedestrian motion, skiing has a long history, yet the recurrent occurrence of ski-related accidents underscores the need for deeper inquiry into this dynamic system. In light of this need, the present study adopts a modelling and simulation perspective to construct a framework for analysing skier trajectories and performance, with explicit consideration of the complex interactions between human behaviour and varying environmental and physical conditions. To this end, a cellular automaton (CA)-based model was developed, incorporating six critical factors: slope angle, surface friction, boundary constraints, terrain curvature, aerodynamic drag, and directional inertia. Probabilistic decision rules combined with physics-based speed updates enable realistic skier movement simulations across a discretized slope grid. The simulations show that slope angle predominantly drives skier speed, while surface friction and aerodynamic drag reduce efficiency by increasing resistance and prolonging descent. Boundary effects, though minor under wide-slope conditions, help confine lateral motion and influence path shaping. Terrain curvature affects turning dynamics, especially on rough or irregular surfaces, while inertia enhances straight-line speed but reduces adaptability. The study underscores the importance of capturing both environmental and behavioural interactions to accurately model downhill skiing dynamics, provides detailed insights into the mechanisms shaping skiing efficiency, and offers a powerful tool for advanced skier simulation and slope performance analysis. The model integrates the six environmental factors – slope, friction, boundary, curvature, aerodynamic drag, and inertia – to reproduce realistic motion patterns on alpine slopes. This study primarily focuses on the dynamics of a single skier, while multi-agent interactions will be explored in future work.
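The two ingredients named above, a physics-based speed update and probabilistic movement rules on a discretized slope, can be sketched as follows. All coefficients (friction, drag, turning probability, cell size) are illustrative assumptions, not the calibrated values of the study.

```python
"""Minimal CA-style ski sketch: gravity-driven speed update plus a probabilistic
lateral step on a discretized slope, with a simple boundary constraint."""
import math, random

random.seed(0)
g, mu, c_drag, mass = 9.81, 0.05, 0.6, 75.0   # friction/drag/mass are assumptions
slope_deg, dt, cell = 20.0, 0.5, 5.0          # slope [deg], time step [s], cell [m]

theta = math.radians(slope_deg)
v, lane, cells_down = 0.0, 10, 0
track = []

for _ in range(60):
    # Physics-based speed update: a = g*sin(theta) - mu*g*cos(theta) - (c_d/m)*v^2
    a = g * math.sin(theta) - mu * g * math.cos(theta) - (c_drag / mass) * v * v
    v = max(0.0, v + a * dt)
    cells_down += int(v * dt // cell)
    # Probabilistic lateral rule: faster skiers turn less often (directional inertia).
    p_turn = max(0.05, 0.4 - 0.02 * v)
    if random.random() < p_turn:
        lane = min(max(lane + random.choice([-1, 1]), 0), 20)   # boundary constraint
    track.append((cells_down, lane, round(v, 1)))

print("final speed [m/s]:", round(v, 1), "| cells descended:", cells_down)
print("last three steps (cells down, lane, speed):", track[-3:])
```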
{"title":"Exploring the complexity of pedestrian dynamics in skiing: A modelling and simulation framework","authors":"Buchuan Zhang , Chuan-Zhi Thomas Xie","doi":"10.1016/j.simpat.2025.103225","DOIUrl":"10.1016/j.simpat.2025.103225","url":null,"abstract":"<div><div>As a distinct form of pedestrian motion, skiing possesses a long-standing history, yet the recurrent occurrence of ski-related accidents underscores the necessity of deeper inquiry into this dynamic system. In light of such a need, the present study adopts a modelling and simulation perspective to construct a framework for analysing skier trajectories and performance, with explicit consideration of the complex interactions between human behaviour, varying environmental and physical conditions. To this end, in specific, a cellular automaton (CA)-based model was developed, incorporating six critical factors: slope angle, surface friction, boundary constraints, terrain curvature, aerodynamic drag, and directional inertia. Probabilistic decision rules combined with physics-based speed updates enabled realistic skier movement simulations across a discretized slope grid. The simulation shows that slope angle predominantly drives skier speed, while surface friction and aerodynamic drag reduce efficiency by increasing resistance and prolonging descent. Boundary effects, though minor under wide-slope conditions, help confine lateral motion and influence path shaping. Terrain curvature impacts turning dynamics, especially on rough or irregular surfaces, while inertia enhances straight-line speed but reduces adaptability. The study underscores the importance of capturing both environmental and behavioural interactions to accurately model downhill skiing dynamics and provides detailed insights into the mechanisms shaping skiing efficiency, offering a powerful tool for advanced skier simulation and slope performance analysis. This study presents a cellular automaton (CA)-based modelling framework for simulating skier dynamics. Model integrates six environmental factors – slope, friction, boundary, curvature, aerodynamic drag, and inertia – to reproduce realistic motion patterns on alpine slopes. This study primarily focuses on the dynamics of a single skier, while multi-agent interactions will be explored in future work.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103225"},"PeriodicalIF":3.5,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LMP-Opt: A simulation-based hybrid model for dynamic job scheduling and energy optimization in serverless computing
Pub Date: 2025-11-13 | DOI: 10.1016/j.simpat.2025.103227
Jasmine Kaur, Inderveer Chana, Anju Bala
Serverless computing has revolutionized cloud platforms by enabling developers to deploy applications without the burden of managing infrastructure. However, challenges such as workload unpredictability, inefficient job scheduling, and high energy consumption remain critical concerns. To address these issues, this paper introduces LMP-Opt, a simulation-driven hybrid model that integrates Long Short-Term Memory (LSTM) for workload prediction, Multi-Agent Deep Q-Learning (MADQL) for job scheduling, and Proximal Policy Optimization (PPO) for fine-tuning scheduling decisions. First, LSTM predicts workload patterns by capturing temporal dependencies, enabling efficient resource provisioning and reducing performance degradation caused by unpredictable workloads. Second, MADQL uses multiple agents to optimize job scheduling by dynamically adjusting allocation strategies in response to workload fluctuations. Third, PPO refines scheduling policies by balancing exploration and exploitation, improving stability and convergence in decision-making. The proposed approach has been validated using ServerlessSimPro, a specialized simulation environment, and further tested on AWS Lambda to ensure applicability to real-world serverless platforms. Extensive experiments using an e-commerce transaction-processing workload demonstrate that LMP-Opt significantly improves system performance. The simulation results show a reduction in average response time of 4.79% over MADQL and 6.09% over PPO, in addition to energy-consumption savings of 4.35% and 6.14%, respectively. The model also improves cost efficiency, CPU utilization, and resource scalability by reducing node requirements. These results confirm the value of hybrid learning-based simulation models for optimizing scheduling and energy efficiency in serverless computing environments.
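The control flow of the three-stage pipeline (predict, schedule, refine) is sketched below with deliberately trivial stand-ins: a moving average in place of the LSTM, greedy least-loaded placement in place of MADQL, and a threshold nudge in place of PPO. Only the way the stages hand results to one another mirrors the description above; none of the learned components is reproduced.

```python
"""Structural sketch of a predict -> schedule -> refine pipeline with stand-in stages."""
from collections import deque

def predict_workload(history, window=3):
    # Stand-in for the LSTM predictor: moving average of recent request rates.
    recent = list(history)[-window:]
    return sum(recent) / len(recent)

def schedule(jobs, nodes):
    # Stand-in for MADQL: greedily place each job on the least-loaded node.
    load = {n: 0 for n in nodes}
    placement = {}
    for job, cost in jobs:
        target = min(load, key=load.get)
        load[target] += cost
        placement[job] = target
    return placement, load

def refine(scale_threshold, predicted, capacity):
    # Stand-in for PPO fine-tuning: nudge the scale-out threshold toward demand.
    return 0.9 * scale_threshold + 0.1 * (predicted / capacity)

history = deque([120, 150, 180, 210], maxlen=50)   # requests/s in past intervals
predicted = predict_workload(history)
jobs = [("checkout_1", 3), ("search_1", 1), ("payment_1", 2), ("search_2", 1)]
placement, load = schedule(jobs, nodes=["n1", "n2"])
threshold = refine(scale_threshold=0.7, predicted=predicted, capacity=400)

print("predicted req/s:", predicted)
print("placement:", placement, "| node load:", load)
print("updated scale-out threshold:", round(threshold, 3))
```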
{"title":"LMP-Opt: A simulation-based hybrid model for dynamic job scheduling and energy optimization in serverless computing","authors":"Jasmine Kaur , Inderveer Chana, Anju Bala","doi":"10.1016/j.simpat.2025.103227","DOIUrl":"10.1016/j.simpat.2025.103227","url":null,"abstract":"<div><div>Serverless computing has revolutionized cloud platforms by enabling developers to deploy applications without the burden of managing infrastructure. However, challenges such as workload unpredictability, inefficient job scheduling, and high energy consumption remain critical concerns. To address these issues, this paper introduces LMP-Opt, a simulation-driven hybrid model that integrates Long Short-Term Memory (LSTM) for workload prediction, Multi-Agent Deep Q-Learning (MADQL) for job scheduling, and Proximal Policy Optimization (PPO) for fine-tuning scheduling decisions. Firstly, LSTM predicts workload patterns by capturing temporal dependencies, enabling efficient resource provisioning, and reducing performance degradation caused by unpredictable workloads. Secondly, MADQL utilizes multiple agents to optimize job scheduling by dynamically adjusting allocation strategies in response to workload fluctuations. Third, PPO refines scheduling policies by balancing exploration and exploitation, improving stability and convergence in decision-making. The proposed approach has been validated using ServerlessSimPro, a specialized simulation environment, and is further tested in AWS Lambda to ensure applicability to real-world serverless platforms. Extensive experiments using an e-commerce transaction processing workload demonstrate that LMP-Opt significantly improves system performance. The simulation results show a reduction in the average response time by 4.79% over MADQL and 6.09% over PPO, in addition to savings in energy consumption of 4.35% and 6.14%, respectively. The model also improves cost efficiency, CPU utilization, and resource scalability by reducing node requirements. These results confirm the value of hybrid learning-based simulation models for optimizing scheduling and energy efficiency in serverless computing environments.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103227"},"PeriodicalIF":3.5,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HCGN: A Hierarchical Causal-Graph Network for sustainable communication and coordination in edge–fog systems
Pub Date: 2025-11-13 | DOI: 10.1016/j.simpat.2025.103229
Shahed Almobydeen, Gaith Rjoub, Jamal Bentahar, Ahmad Irjoob, Muhammad Younas
In cloud computing systems, the proliferation of intelligent edge devices necessitates novel communication and coordination protocols that can operate under significant bandwidth and latency constraints. This necessity is driven not only by performance requirements but also by the growing imperative for sustainable computing, as inefficient communication is a primary driver of resource consumption in large-scale systems. This paper introduces the Hierarchical and Causal-Graph Network (HCGN), a framework designed for efficient, sustainable, and decentralized decision-making in large-scale edge computing environments. HCGN integrates a hierarchical control paradigm, mapping naturally to edge-fog architectures, with a Graph Neural Network (GNN) that learns a bandwidth-efficient communication policy between edge nodes. Furthermore, a novel Causal Credit Assignment Module (CCAM) enables intelligent and sustainable resource allocation by quantifying each node’s true causal contribution to system-wide objectives, ensuring that computational and communication resources are directed to the most effective parts of the network. We demonstrate through extensive simulations, including a novel edge-based collaborative video analytics task, that HCGN significantly outperforms traditional communication protocols in terms of task success rate, communication overhead, and robustness to network degradation. Our results validate HCGN as a scalable and resource-aware solution for building the next generation of sustainable, decentralized edge-fog systems.
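A toy sketch of the two ideas in combination is shown below: one round of mean-neighbour message passing over an edge-node graph, followed by a leave-one-out (counterfactual) estimate of each node's contribution to a global objective. The graph, features, and utility function are made up for illustration; this is not the paper's GNN policy or its CCAM.

```python
"""Toy message passing on an edge-node graph plus leave-one-out credit assignment."""
import numpy as np

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],        # 4 edge nodes, undirected links
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 3))      # local observations per node

def message_pass(h, A):
    # Mean-neighbour aggregation followed by a nonlinearity.
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return np.tanh(h + (A @ h) / deg)

def system_utility(h, active):
    # Global objective: coverage-like score computed from the active nodes' embeddings.
    return float(np.linalg.norm(h[active].sum(axis=0)))

h = message_pass(feats, adj)
all_nodes = list(range(4))
base = system_utility(h, all_nodes)

# Counterfactual credit: how much does utility drop if node i is removed?
credit = {i: base - system_utility(h, [j for j in all_nodes if j != i])
          for i in all_nodes}
print({i: round(c, 3) for i, c in credit.items()})
```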
{"title":"HCGN: A Hierarchical Causal-Graph Network for sustainable communication and coordination in edge–fog systems","authors":"Shahed Almobydeen , Gaith Rjoub , Jamal Bentahar , Ahmad Irjoob , Muhammad Younas","doi":"10.1016/j.simpat.2025.103229","DOIUrl":"10.1016/j.simpat.2025.103229","url":null,"abstract":"<div><div>In cloud computing systems, the proliferation of intelligent edge devices necessitates novel communication and coordination protocols that can operate under significant bandwidth and latency constraints. This necessity is driven not only by performance requirements but also by the growing imperative for sustainable computing, as inefficient communication is a primary driver of resources consumption in large-scale systems. This paper introduces the Hierarchical and Causal-Graph Network (HCGN), a framework designed for efficient, sustainable, and decentralized decision-making in large-scale edge computing environments. HCGN integrates a hierarchical control paradigm, mapping naturally to edge-fog architectures, with a Graph Neural Network (GNN) that learns a bandwidth-efficient communication policy between edge nodes. Furthermore, a novel Causal Credit Assignment Module (CCAM) enables intelligent and sustainable resource allocation by quantifying each node’s true causal contribution to system-wide objectives, ensuring that computational and communication resources are directed to the most effective parts of the network. We demonstrate through extensive simulations, including a novel edge-based collaborative video analytics task, that HCGN significantly outperforms traditional communication protocols in terms of task success rate, communication overhead, and robustness to network degradation. Our results validate HCGN as a scalable and resource-aware solution building the next generation of sustainable decentralized edge-fog-based systems.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103229"},"PeriodicalIF":3.5,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of import container flow characteristics on port operational efficiency
Pub Date: 2025-11-12 | DOI: 10.1016/j.simpat.2025.103228
Agostino Bruzzone, Alessia Giulianetti, Marco Gotelli, Anna Sciomachen
In this paper, we analyze different scenarios for container flows arriving at marine terminals and bound for different destinations in the hinterland. The aim of the study is to verify how the type of import containers — standard, hazardous, and refrigerated — and their size affect the operational efficiency of the terminal. Relevant performance indicators, such as container dwell time, average and maximum number of waiting containers, and equipment utilization rate, are evaluated. To this end, we present a discrete-event simulation study that, although generalizable to any port, refers to a terminal in the port network of Genoa (Italy). The scenarios considered in this paper are generated with a synthetic data generator for logistics flows and executed as independent steady-state runs in the Witness Horizon v.24 simulation software environment. To the authors’ knowledge, this is the first time that a sensitivity analysis based on the variation of container types is presented. The performed simulation experiments can be of great interest to various port stakeholders. Indeed, the results show that the percentage composition of import container types over the annual time horizon considered affects the indicators under analysis, favoring a more balanced distribution, whereas the variation in container size appears negligible for the same indicators. The study highlights how advance knowledge of import container types can support port terminal management in the efficient use and optimization of resources, providing specific advice on operational decisions concerning equipment and block yard allocation.
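Although the study is built in Witness Horizon, the sensitivity it examines, handling effort varying with container type, can be illustrated with a small discrete-event sketch in SimPy (a third-party Python DES library). The type mix, service times, and crane capacity below are assumptions for illustration, not the calibrated Genoa terminal data.

```python
"""Minimal SimPy sketch: container dwell time at an import yard as a function of type."""
import random
import simpy

random.seed(42)
SERVICE_MIN = {"standard": 4.0, "reefer": 7.0, "hazardous": 9.0}   # assumed means
MIX = [("standard", 0.7), ("reefer", 0.2), ("hazardous", 0.1)]
dwell_times = {k: [] for k in SERVICE_MIN}

def sample_type():
    r, acc = random.random(), 0.0
    for ctype, share in MIX:
        acc += share
        if r <= acc:
            return ctype
    return "standard"

def container(env, ctype, cranes):
    arrival = env.now
    with cranes.request() as req:        # wait for a yard crane
        yield req
        yield env.timeout(random.expovariate(1.0 / SERVICE_MIN[ctype]))
    dwell_times[ctype].append(env.now - arrival)

def arrivals(env, cranes):
    while True:
        yield env.timeout(random.expovariate(1.0 / 3.0))   # one box every ~3 min
        env.process(container(env, sample_type(), cranes))

env = simpy.Environment()
cranes = simpy.Resource(env, capacity=2)
env.process(arrivals(env, cranes))
env.run(until=8 * 60)                    # one 8-hour shift

for ctype, times in dwell_times.items():
    if times:
        print(f"{ctype:10s} mean dwell {sum(times)/len(times):5.1f} min "
              f"({len(times)} boxes)")
```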
{"title":"The impact of import container flow characteristics on port operational efficiency","authors":"Agostino Bruzzone , Alessia Giulianetti , Marco Gotelli , Anna Sciomachen","doi":"10.1016/j.simpat.2025.103228","DOIUrl":"10.1016/j.simpat.2025.103228","url":null,"abstract":"<div><div>In this paper, we analyze different scenarios for container flows arriving at marine terminals to different destinations in the hinterland. The aim of the study is to verify how the type of import containers — standard, hazardous, and refrigerated — and their size affect the operational efficiency of the terminal. Relevant performance indicators, such as container dwell time, average and maximum number of waiting containers, and equipment utilization rate, are evaluated. To this end, we present a discrete-event simulation study that, although generalizable to any port, refers to a terminal in the port network of Genoa (Italy). The number of considered scenarios, illustrated in this paper, are taken from a synthetic data generator for logistics flows and used in Witness Horizon v.24 simulation software environment to execute independent runs at a steady state condition. To the authors’ knowledge, this is the first time that a sensitivity analysis based on the variation in the types of containers is presented. The performed simulation experiments can be of great interest to various port stakeholders. Indeed, the results show that the percentage composition of the type of import container over the annual time horizon considered has an impact on the indicators under analysis, favoring a more balanced distribution. However, again in relation to the same indicators, the variation in container size appears to be negligible. The study highlights how advance knowledge of the type of import containers can support port terminal management in terms of efficient management and optimization of resources, providing specific advice on the operational decisions concerning equipment and block yard allocation.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"146 ","pages":"Article 103228"},"PeriodicalIF":3.5,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}