
Latest Publications in Simulation Modelling Practice and Theory

Modeling the asymmetric thermo-mechanical behavior and failure of gray cast irons: An experimental–numerical study with separate Johnson–Cook parameters
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-14 | DOI: 10.1016/j.simpat.2025.103182
Burak Özcan , Umut Çalışkan , Murat Aydın , Onur Çavuşoğlu , Ulvi Şeker
In this study, the asymmetric (different tensile and compressive behavior) thermo-mechanical behavior and damage of gray cast irons (EN-GJL-200, EN-GJL-250, EN-GJL-300), which are widely used in industrial applications, were investigated under different strain rates and temperatures by a combination of experimental and numerical methods. The mechanical response of the materials was characterized by quasi-static tensile and compression tests at room temperature and at elevated temperatures up to 700 °C, Split Hopkinson Compression Bar (SHPB) tests for high strain rates (up to ∼3600 s⁻¹), and tensile tests on specimens with different notch radii to analyze the damage behavior. Based on the experimental data obtained, the Johnson-Cook (JC) material (A, B, n, C, m) and damage (D1–D5) model parameters were calibrated separately for both loading cases in order to capture the apparent asymmetric behavior of gray cast irons under tensile and compressive loading. These separate parameter sets were integrated into the ANSYS Autodyn finite element software through FORTRAN-based user-defined subroutines, and virtual tensile, compression, and SHPB tests were performed. Comparison of the numerical simulation results with the experimental data shows that the developed asymmetric modeling approach represents the thermo-mechanical behavior and damage of the material with high accuracy (deviations in the range of 2–8% for maximum stress and elongation at break). This study provides reliable and decoupled JC parameter sets for modeling the asymmetric thermo-mechanical behavior and damage of gray cast irons, allowing more realistic simulations to predict the performance of these materials in demanding engineering applications.
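To make the decoupled calibration concrete, here is a minimal Python sketch of the standard Johnson-Cook flow-stress and failure-strain relations evaluated with separate tensile and compressive parameter sets; all parameter values are hypothetical placeholders, not the ones calibrated in the paper.

```python
import numpy as np

def jc_flow_stress(eps_p, eps_dot, T, p):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T_hom**m),
    with homologous temperature T_hom = (T - T_room) / (T_melt - T_room)."""
    T_hom = np.clip((T - p["T_room"]) / (p["T_melt"] - p["T_room"]), 0.0, 1.0)
    rate = max(eps_dot / p["eps_dot0"], 1.0)   # clamp to avoid rate softening below the reference rate
    return (p["A"] + p["B"] * eps_p ** p["n"]) * (1.0 + p["C"] * np.log(rate)) * (1.0 - T_hom ** p["m"])

def jc_failure_strain(triax, eps_dot, T, p):
    """Johnson-Cook damage initiation strain:
    eps_f = [D1 + D2*exp(D3*triax)] * [1 + D4*ln(eps_dot/eps_dot0)] * [1 + D5*T_hom]."""
    T_hom = np.clip((T - p["T_room"]) / (p["T_melt"] - p["T_room"]), 0.0, 1.0)
    rate = max(eps_dot / p["eps_dot0"], 1.0)
    return ((p["D1"] + p["D2"] * np.exp(p["D3"] * triax))
            * (1.0 + p["D4"] * np.log(rate))
            * (1.0 + p["D5"] * T_hom))

# Separate (hypothetical) parameter sets for tension and compression, mirroring the
# decoupled calibration strategy described in the abstract. Stresses in Pa, temperatures in K.
tension = dict(A=150e6, B=400e6, n=0.30, C=0.02, m=1.0,
               D1=0.05, D2=0.8, D3=-1.5, D4=0.01, D5=0.6,
               eps_dot0=1e-3, T_room=293.0, T_melt=1420.0)
compression = {**tension, "A": 500e6, "B": 700e6, "n": 0.25}

print(jc_flow_stress(0.02, 3600.0, 500.0, tension))      # tensile branch at an SHPB-like rate
print(jc_flow_stress(0.02, 3600.0, 500.0, compression))  # compressive branch
print(jc_failure_strain(0.33, 3600.0, 500.0, tension))   # failure strain at uniaxial-tension triaxiality
```

In an asymmetric implementation such as the one described, a user subroutine would typically select the tensile or compressive parameter set per integration point based on the sign of the local stress state.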
Citations: 0
An advanced 3D continuum finite element model for field-scale in-situ stress simulation of rock media
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-14 | DOI: 10.1016/j.simpat.2025.103183
Atefeh Dargahizarandi , Hossein Masoumi , Abolfazl Hashemi , Biswachetan Saha , Hamid Roshan
Accurate field-scale three-dimensional (3D) stress inversion using numerical simulation is crucial for obtaining the in-situ stresses required for the safe and efficient extraction of underground mineral and energy resources. However, existing commercial packages fall short in dealing with large-scale 3D stress inversion simulations and in handling complex geological models containing faults and fractures. This work lays the foundation for the development of an optimised continuum Finite Element (FE) code (3DiStress) that simulates the 3D stress state in elastic media and is capable of handling complex geological models. The computational framework employs advanced algorithms and state-of-the-art techniques, including fault modelling through the effective medium theory, efficient large-scale model handling via vectorisation and sparse matrix storage, Superconvergent Patch Recovery (SPR) to calculate the stresses precisely, and iterative boundary condition adjustment using a Genetic Algorithm (GA) for stress inversion. For large-scale simulations, an effective solver renowned for its robust handling of large sparse systems (Pardiso) is employed to solve the resulting system of equations efficiently in parallel on workstations and supercomputers. Furthermore, an iterative boundary condition adjustment is performed using the GA to calibrate the model against on-site stress measurements, thereby optimising the stress distribution. The principal advantages of this computational tool include its capability to accurately simulate complex faulted elastic media, flexible boundary condition optimisation, and the ability to easily adapt and integrate various algorithms, making it an asset for advanced geomechanical engineering applications.
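The iterative boundary-condition adjustment can be illustrated with a toy version of the idea: a small genetic algorithm searches boundary-load parameters so that stresses predicted at measurement points match field data. The forward FE solve is replaced here by a stand-in linear response matrix, and the GA operators are generic, so this is only a sketch of the workflow, not the 3DiStress implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_stress(bc):
    """Stand-in for a forward FE solve: maps boundary-condition parameters
    (e.g. far-field stress magnitudes) to stresses at four measurement points.
    In the real code this would be the full elastic FE solution."""
    G = np.array([[1.0, 0.2, 0.1],
                  [0.3, 0.9, 0.0],
                  [0.1, 0.4, 1.1],
                  [0.5, 0.1, 0.8]])
    return G @ bc

# Synthetic "field measurements" generated from known boundary parameters plus noise.
measured = forward_stress(np.array([12.0, 8.0, 5.0])) + rng.normal(0, 0.05, 4)

def fitness(bc):
    return -np.sum((forward_stress(bc) - measured) ** 2)   # GA maximizes fitness (minimizes misfit)

# Minimal genetic algorithm: tournament selection, blend crossover, Gaussian mutation, elitism.
pop = rng.uniform(0.0, 20.0, size=(40, 3))
for gen in range(200):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[[max(rng.integers(0, 40, 3), key=lambda i: f[i]) for _ in range(40)]]
    alpha = rng.random((40, 1))
    children = alpha * parents + (1 - alpha) * parents[rng.permutation(40)]
    children += rng.normal(0.0, 0.2, children.shape)              # mutation
    pop = children
    pop[0] = parents[np.argmax([fitness(p) for p in parents])]    # keep the best parent

best = max(pop, key=fitness)
print("calibrated boundary parameters:", np.round(best, 2))
```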
Citations: 0
Exploring the performance of real-time data imputation to enhance fault tolerance on the edge: A study on environmental data
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-12 | DOI: 10.1016/j.simpat.2025.103178
Dimitris Gkoulis, Anargyros Tsadimas, George Kousiouris, Cleopatra Bardaki, Mara Nikolaidou
Real-time data streams from edge-based IoT sensors are frequently affected by transmission errors, sensor faults, and network disruptions, leading to missing or incomplete data. This paper investigates the application of lightweight, real-time imputation methods to enhance fault tolerance in edge computing systems. To this end, we propose integrating a modular imputation engine on edge systems that supports lightweight forecasting models selected for their computational efficiency and suitability for operating on real-time data streams. To assess the performance of popular lightweight forecasting models for real-time applications, a simulation framework is introduced that simulates the operation of the imputation engine, replicates sensor failure scenarios, and allows controlled testing on real-world systems. Imputation accuracy is evaluated using Mean Absolute Error (MAE), 95th percentile error, and maximum error, with results benchmarked against sensor tolerance thresholds. The simulation framework is used to explore imputation of environmental data based on observations collected from a weather station. The findings show that Holt–Winters Exponential Smoothing delivers the highest accuracy for real-time imputation across environmental variables, outperforming simpler models suited only to short-term gaps. Errors grow with longer forecast horizons, confirming imputation as a temporary solution. Evaluations against sensor-specific thresholds offer practical insights, and execution profiling shows these models are lightweight enough for deployment on low-power edge devices, enabling real-time, fault-tolerant monitoring without cloud dependence.
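As a rough illustration of the imputation idea (not the paper's engine), the sketch below uses Holt's linear-trend exponential smoothing, the non-seasonal core of Holt–Winters, to forecast across a simulated sensor outage and then scores the imputed values with the MAE, 95th-percentile, and maximum-error metrics mentioned above; the data stream and smoothing constants are invented.

```python
import numpy as np

def holt_forecast(y, alpha=0.5, beta=0.3, horizon=10):
    """Holt's linear-trend exponential smoothing: level l and trend b are
    updated online over the observed history, then extrapolated across the gap.
    Seasonality, used in the paper's full Holt-Winters variant, is omitted for brevity."""
    l, b = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        l_prev = l
        l = alpha * y[t] + (1 - alpha) * (l + b)
        b = beta * (l - l_prev) + (1 - beta) * b
    return np.array([l + (h + 1) * b for h in range(horizon)])

# Synthetic temperature stream with a simulated 10-sample sensor outage.
rng = np.random.default_rng(1)
t = np.arange(200)
truth = 20 + 5 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 0.2, 200)
gap = slice(150, 160)

imputed = holt_forecast(truth[:gap.start], horizon=10)   # impute using only pre-gap data
err = np.abs(imputed - truth[gap])
print(f"MAE {err.mean():.3f}  p95 {np.percentile(err, 95):.3f}  max {err.max():.3f}")
```

As in the paper's evaluation, such errors can then be compared against a sensor's stated tolerance to decide whether the imputed values are usable until the stream recovers.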
Citations: 0
CoFANN: A collaborative framework for accelerating DNN inference in drone-based agricultural monitoring systems
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-11 | DOI: 10.1016/j.simpat.2025.103176
Nhu-Y Tran-Van, Kim-Hung Le
Plant leaf diseases pose a major threat to global agricultural productivity, causing substantial crop losses annually. While drone-based monitoring systems equipped with deep neural networks (DNNs) offer a promising solution for large-scale disease detection, their deployment is hindered by the computational limitations of IoT devices and the latency issues associated with cloud and edge computing. Existing collaborative inference approaches aim to mitigate end-to-end latency by offloading computation across devices. However, these methods often compromise model accuracy and add computing latency when generating inference strategies. To address these challenges, we present CoFANN, a novel collaborative framework to accelerate DNN inference in dynamic IoT environments. The framework includes two key advances: a differentiable strategy search space with a gradient-based optimization algorithm for efficiently identifying optimal partitioning strategies, and an adaptive model partitioning algorithm that effectively divides and allocates DNN components across computing devices based on their capabilities and network conditions. Experimental results on a plant disease dataset demonstrate that CoFANN reduces total inference latency by up to 70% compared to device-only and 50% compared to edge-only approaches under varying network conditions, while maintaining comparable accuracy of 93.7% to 95.8%.
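The device/edge latency trade-off behind model partitioning can be sketched very simply: enumerate candidate split points of a layered network and estimate device compute, transmission of the intermediate tensor, and edge compute for each. CoFANN itself uses a differentiable search with gradient-based optimization rather than enumeration, and the per-layer profile below is entirely hypothetical.

```python
# Hypothetical per-layer profile of a small CNN:
# (name, compute on drone in ms, compute on edge server in ms, output size in KB).
layers = [
    ("conv1", 12.0, 1.5, 800),
    ("conv2", 18.0, 2.0, 400),
    ("conv3", 25.0, 3.0, 200),
    ("fc1",   10.0, 1.0,  16),
    ("fc2",    2.0, 0.2,   4),
]

def end_to_end_latency(split, bandwidth_kB_s):
    """Run layers [0:split) on the device, transmit the intermediate tensor,
    run layers [split:] on the edge server. split == len(layers) keeps
    everything on the device (no transmission)."""
    device = sum(l[1] for l in layers[:split])
    edge = sum(l[2] for l in layers[split:])
    tx = 0.0 if split == len(layers) else layers[split - 1][3] / bandwidth_kB_s * 1000.0
    return device + tx + edge

for bw in (500, 5000, 50000):        # uplink bandwidth in KB/s: poor, moderate, good
    best = min(range(1, len(layers) + 1), key=lambda s: end_to_end_latency(s, bw))
    print(f"bandwidth {bw:6d} KB/s: split after {layers[best - 1][0]}, "
          f"latency {end_to_end_latency(best, bw):.1f} ms")
```

Running this shows the intuitive behavior the framework exploits: under poor links the whole model stays on the drone, while good links favor splitting early and offloading most layers to the edge.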
Citations: 0
GPU-accelerated cloud computing services and performance evaluation
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-11 | DOI: 10.1016/j.simpat.2025.103181
Zakery Collins, Gennaro De Luca, Yinong Chen
This paper explores the feasibility of replacing traditional CPU-based cloud computing with Graphics Processing Unit (GPU)-accelerated services. Using NVIDIA’s CUDA GPU-accelerated C/C++ and Python libraries, we benchmark the performance of GPU computing against multithreaded CPU computing across several domains, including machine learning and large-scale image processing. A novel contribution of this work is an intelligent autoscaling system that maximizes single-GPU resource utilization before scaling to additional GPUs, improving efficiency in cloud-based deployments. Our simulation experiments demonstrate significant performance gains for GPU-accelerated computing and highlight the impact of optimized resource allocation in cloud environments. For example, in a machine learning experiment using a dataset with 8,790 entries, execution on a GeForce 3060 Ti GPU is 3.42 times faster than on a 16-thread CPU machine, and a Tesla K80 GPU is 4.17 times faster than the same 16-thread CPU. Furthermore, we provide an analysis of GPU performance optimization strategies, including memory management, concurrency techniques, and workload distribution methodologies, offering insights into the long-term scalability and cost-effectiveness of GPU-accelerated cloud infrastructure.
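A generic GPU-versus-CPU benchmark of the kind discussed can be reproduced with a few lines of Python: the sketch below times a large matrix multiplication with NumPy on the CPU and CuPy on the GPU. CuPy and a CUDA-capable GPU are assumptions (the paper uses NVIDIA's CUDA C/C++ and Python libraries), and measured speedups will differ from the paper's figures depending on hardware.

```python
import time
import numpy as np

try:
    import cupy as cp   # assumes the CuPy package and a CUDA-capable GPU are available
except ImportError:
    cp = None

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
np.matmul(a_cpu, a_cpu)                     # CPU reference (uses the BLAS backend's threads)
cpu_s = time.perf_counter() - t0

if cp is not None:
    a_gpu = cp.asarray(a_cpu)               # host-to-device copy
    cp.matmul(a_gpu, a_gpu)                 # warm-up run (kernel/plan initialization)
    cp.cuda.Device(0).synchronize()
    t0 = time.perf_counter()
    cp.matmul(a_gpu, a_gpu)
    cp.cuda.Device(0).synchronize()         # wait for the asynchronous kernel to finish
    gpu_s = time.perf_counter() - t0
    print(f"CPU {cpu_s:.3f}s  GPU {gpu_s:.3f}s  speedup x{cpu_s / gpu_s:.1f}")
else:
    print(f"CPU only: {cpu_s:.3f}s")
```

The explicit synchronize calls matter because CUDA kernel launches are asynchronous; timing without them would measure only the launch overhead rather than the computation itself.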
Citations: 0
Mastering the complexity: An enhanced cellular automata-based framework for simulating resilience of hospital Power-Water-Firefighting-Space nexus system
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-08 | DOI: 10.1016/j.simpat.2025.103177
Renlong Wang , Lingzhi Li , Wenjie Lin , Endong Wang , Jingfeng Yuan
Modeling the resilience of hospital Power-Water-Firefighting-Space (PWFS) nexus systems is a complex, dynamic, and nonlinear challenge characterized by high uncertainty. Existing methods, mainly agent-based and network-based models, face difficulties in balancing detailed component-level behaviors with broader system-level interdependencies and neglect the impact of external disruptions, such as surges in service demand during the COVID-19 pandemic, on hospital PWFS system resilience. To address this, the study proposes an enhanced cellular automata (CA)-based framework for simulating hospital PWFS system resilience. The PWFS system is modeled as a seven-tuple CA, incorporating cell structure, state, space, neighborhood, transition rules, and time, facilitating the integration of micro-level component behavior with macro-level interdependencies. A set of resilience metrics, including robustness, rapidity, performance loss, and an integrated resilience index, is introduced based on a system performance curve that incorporates normality, connectivity, resource transfer efficiency, and space functionality. The model enables scalable, polynomial-time simulations of cascading failures, resource redistribution, and spatial–temporal recovery across interconnected PWFS subsystems. A real-world outpatient building case study demonstrates the applicability and validity of the enhanced CA model. The findings emphasize the importance of modeling intra-system interdependencies and provide actionable insights for infrastructure design and emergency preparedness. Overall, the enhanced CA framework offers a systematic, scalable, and computationally efficient approach to resilience assessment, bridging theoretical modeling with practical infrastructure planning.
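Performance-curve-based resilience metrics of the kind listed (robustness, rapidity, performance loss, integrated resilience index) can be computed directly from a simulated system performance time series. The sketch below uses common textbook definitions on a toy disruption-and-recovery curve, which may differ in detail from the paper's exact formulations.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration, written out to stay independent of NumPy version."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def resilience_metrics(t, P, t_event, P_target=1.0):
    """Curve-based resilience metrics:
    robustness = minimum residual performance after the disruption,
    rapidity   = average recovery rate from the worst point back to the target,
    loss       = area between target and actual performance (performance loss),
    r_index    = normalized area under the performance curve over the window."""
    after = t >= t_event
    P_min = P[after].min()
    t_min = t[after][np.argmin(P[after])]
    recovered = after & (P >= 0.999 * P_target) & (t > t_min)
    t_rec = t[recovered][0] if recovered.any() else t[-1]
    robustness = P_min / P_target
    rapidity = (P_target - P_min) / max(t_rec - t_min, 1e-9)
    loss = _trapz(np.clip(P_target - P, 0, None), t)
    r_index = _trapz(P, t) / (P_target * (t[-1] - t[0]))
    return robustness, rapidity, loss, r_index

# Toy performance curve: full service, a 40% drop at t=10, linear recovery by t=30.
t = np.linspace(0, 50, 501)
P = np.where(t < 10, 1.0, np.where(t < 30, 0.6 + 0.4 * (t - 10) / 20, 1.0))
print(resilience_metrics(t, P, t_event=10))
```

In the CA framework, the performance value at each time step would itself be an aggregate of normality, connectivity, resource transfer efficiency, and space functionality across the cells; here it is collapsed to a single scalar for illustration.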
Citations: 0
Simulation model and performance evaluation of automated valet parking technologies in parking lots
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-06 | DOI: 10.1016/j.simpat.2025.103175
Ning Ma , Angjun Tang , Jingxin Hai , Fang Yuan
Autonomous valet parking (AVP) is widely employed in parking lots and city logistics worldwide, expanding the applications of autonomous driving technologies. Auto companies are promoting three technology roadmaps for implementing AVP: AVP with an autonomous driving system (AVP-ADS), AVP with Intelligent Infrastructure Systems (AVP-IS), and AVP with cooperative vehicle infrastructure systems (AVP-CVIS). Specifically, AVP-ADS can be further divided into a LIDAR solution (AVP-ADS-LIDAR) and a visual solution (AVP-ADS-VISUAL). This paper presents a simulation model to evaluate and compare the performance of AVP-CVIS, AVP-IS, AVP-ADS-LIDAR, AVP-ADS-VISUAL, and Manual Parking (MP) in a real parking lot. The vehicle parking system is modeled as a discrete event simulation, in which the controller module simulates driving behavior and defines the control logic of the parking solutions. Extensive experiments are conducted and metrics are presented to evaluate the performance of these AVP technical solutions. The results indicate that AVP-CVIS exhibits more efficient parking performance. Management insights are provided to facilitate a more effective implementation of AVP.
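For readers unfamiliar with discrete-event parking models, here is a minimal sketch built on SimPy (an assumption; the paper implements its own DES and controller logic): vehicles arrive, queue for a spot, spend a technology-dependent maneuver time parking, and dwell before releasing the spot. The maneuver and dwell times are invented placeholders, not the paper's calibrated values.

```python
import random
import simpy

# Hypothetical parking-maneuver times (seconds) per technology roadmap.
MANEUVER_S = {"MP": 60, "AVP-ADS-VISUAL": 50, "AVP-ADS-LIDAR": 45, "AVP-IS": 40, "AVP-CVIS": 30}

def vehicle(env, lot, tech, log):
    arrive = env.now
    with lot.request() as spot:                          # queue for a free parking spot
        yield spot
        yield env.timeout(MANEUVER_S[tech])              # drive to the spot and park
        log.append(env.now - arrive)                     # arrival-to-parked time
        yield env.timeout(random.expovariate(1 / 1800))  # dwell ~30 min, then release the spot

def source(env, lot, tech, log):
    while True:
        yield env.timeout(random.expovariate(1 / 30))    # a new car roughly every 30 s
        env.process(vehicle(env, lot, tech, log))

for tech in MANEUVER_S:
    random.seed(42)
    env = simpy.Environment()
    lot = simpy.Resource(env, capacity=80)               # an 80-spot lot
    log = []
    env.process(source(env, lot, tech, log))
    env.run(until=4 * 3600)                              # simulate four hours
    print(f"{tech:>16}: mean arrival-to-parked time {sum(log) / len(log):.1f} s over {len(log)} cars")
```

In the paper, the controller module additionally encodes the routing and coordination logic specific to each roadmap; the sketch collapses those differences into a single maneuver-time parameter.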
Citations: 0
Machine learning methods in microscopic pedestrian and evacuation dynamics simulation: a comparative study
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-05 | DOI: 10.1016/j.simpat.2025.103180
Nan Jiang , Hanchen Yu , Eric Wai Ming Lee , Hongyun Yang , Lizhong Yang , Richard Kwok Kit Yuen
The modeling and simulation of pedestrian and evacuation dynamics provides essential insights for crowd safety against the background of population growth and regional development. Although machine learning methods have demonstrated superior performance in pedestrian modeling, the data encoding schemes and learning algorithms investigated vary widely and lack comparative analysis. Hence, this study analyzes machine learning methods for simulating microscopic pedestrian and evacuation dynamics. A motion interaction field is proposed, along with a data extraction rule that standardizes input lengths for learning-based models. Two typical algorithms, Classification and Regression Trees (CART) and Artificial Neural Networks (ANN), are employed for model training and comparison. The fitting performance is evaluated using the mean absolute error of velocity, revealing that the CART-based model outperforms the ANN-based model in stability and error rate, particularly across varying local density ranges. Dynamics tests are further performed to examine the two models’ robustness against inherent error. The results indicate that the CART-based model struggles under high-density conditions due to its split-based structure. In contrast, the ANN-based model demonstrates superior nonlinear fitting ability, allowing better reproduction of pedestrian dynamics at relatively higher densities. Moreover, the Wasserstein Distance with Sinkhorn iteration is used to quantify model performance in terms of flow–density fundamental diagrams, highlighting the advantages of learning-based approaches over the traditional social force model. This research has significant implications for building and civil engineering, as insights from the comparative analysis of two typical machine learning algorithms and the establishment of the motion interaction field can inform progress in learning-based pedestrian and evacuation dynamics simulation. The study underscores the transformative potential of machine learning methods for simulating pedestrian dynamics and suggests future research directions to enhance the robustness and applicability of learning-based methods across diverse microscopic pedestrian and evacuation simulation scenarios.
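A miniature version of the CART-versus-ANN comparison can be set up with scikit-learn: both regressors predict a pedestrian's next-step speed from interaction-field-like features and are scored with the mean absolute error of velocity. The features, the target rule, and the hyperparameters below are synthetic stand-ins, not the paper's data or settings.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for motion-interaction-field features: local density, distance to
# the exit, heading deviation, and mean neighbour speed. The target is next-step speed,
# generated from a simple density-dependent rule plus noise.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0.1, 5.0, n),        # local density (ped/m^2)
    rng.uniform(0.5, 20.0, n),       # distance to exit (m)
    rng.uniform(-np.pi, np.pi, n),   # heading deviation (rad)
    rng.uniform(0.0, 1.5, n),        # mean neighbour speed (m/s)
])
y = 1.34 / (1 + 0.6 * X[:, 0]) + 0.2 * X[:, 3] + rng.normal(0, 0.05, n)  # speed (m/s)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

cart = DecisionTreeRegressor(max_depth=8).fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("CART MAE (m/s):", mean_absolute_error(y_te, cart.predict(X_te)))
print("ANN  MAE (m/s):", mean_absolute_error(y_te, ann.predict(X_te)))
```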
Citations: 0
CADCO: An Adaptive Dynamic Cloud-fog Computing Offloading Method for complex dependency tasks of IoT
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-03 | DOI: 10.1016/j.simpat.2025.103168
Zhuangzhi Tian , Xiaolong Xu
With the rapid development of the Internet of Things (IoT) and cloud-fog computing, efficient offloading of complex dependency tasks has become a key challenge for improving system performance, especially for real-time IoT applications. Traditional methods are inefficient in handling dynamic environments and long-range dependencies, while existing deep reinforcement learning approaches face issues such as rigid resource allocation and Q-value overestimation. To address these problems, we propose an Adaptive Dynamic Cloud-fog Computing Offloading Method for complex dependency tasks (CADCO). The method accurately models task dependencies using the multi-head attention mechanism of Transformer, optimizes computational and memory resource allocation through Hybrid Model Parallelism (HMP) technology, and designs a dynamic offloading strategy based on an improved Double Deep Q-Network (DDQN). A freshness factor is introduced to optimize the experience replay mechanism, enhancing the stability of the strategy. Experimental results show that CADCO demonstrates significant advantages in multi-user, multi-task offloading scenarios, optimizing task scheduling, improving resource utilization, and significantly enhancing QoS while reducing task latency and energy consumption. These results validate the practical application value of CADCO in complex task dependency environments, providing solid theoretical and experimental support for intelligent computing offloading optimization.
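Two of the ingredients named above, the Double DQN target that curbs Q-value overestimation and a freshness-weighted experience replay, can be sketched in a few lines. The exact freshness weighting and network architecture of CADCO are not specified here, so the decay scheme and the toy "networks" below are assumptions.

```python
import numpy as np

class FreshReplay:
    """Replay buffer whose sampling probability decays with the age of a transition:
    fresher experience is replayed more often, in the spirit of the freshness factor
    described in the abstract (the paper's exact weighting may differ)."""
    def __init__(self, capacity=10000, decay=0.999):
        self.buf, self.capacity, self.decay, self.t = [], capacity, decay, 0

    def push(self, transition):
        self.buf.append((self.t, transition))
        self.t += 1
        if len(self.buf) > self.capacity:
            self.buf.pop(0)

    def sample(self, batch_size, rng):
        ages = self.t - np.array([stamp for stamp, _ in self.buf])
        w = self.decay ** ages                         # freshness weights
        idx = rng.choice(len(self.buf), size=batch_size, p=w / w.sum())
        return [self.buf[i][1] for i in idx]

def ddqn_target(r, s_next, done, q_online, q_target, gamma=0.99):
    """Double DQN target: the online network selects the next action, the target
    network evaluates it, which reduces Q-value overestimation."""
    a_star = int(np.argmax(q_online(s_next)))
    return r + (0.0 if done else gamma * q_target(s_next)[a_star])

# Tiny usage example with lookup-style "networks" over 3 offloading actions.
rng = np.random.default_rng(0)
q_online = lambda s: np.array([1.0, 2.0, 0.5])
q_target = lambda s: np.array([0.8, 1.5, 0.6])
print(ddqn_target(r=1.0, s_next=None, done=False, q_online=q_online, q_target=q_target))

buf = FreshReplay(capacity=5)
for k in range(8):
    buf.push((f"s{k}", 0, 0.0))
print(buf.sample(3, rng))   # biased toward the most recent transitions
```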
Citations: 0
Comparing Control Theory and Deep Reinforcement Learning techniques for decentralized task offloading in the edge–cloud continuum
IF 3.5 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-07-03 | DOI: 10.1016/j.simpat.2025.103170
Gorka Nieto , Neco Villegas , Luis Diez , Idoia de la Iglesia , Unai Lopez-Novoa , Cristina Perfecto , Ramón Agüero
With the increasingly demanding requirements of Internet-of-Things (IoT) applications in terms of latency, energy efficiency, and computational resources, among others, task offloading has become crucial to optimize performance across edge and cloud infrastructures. Thus, optimizing offloading to reduce latency and energy consumption and, ultimately, to guarantee appropriate service levels and enhance performance has become an important area of research. There are many approaches to guide the offloading of tasks in a distributed environment, and, in this work, we present a comprehensive comparison of three families of them: a Control Theory (CT) Lyapunov optimization method, three Deep Reinforcement Learning (DRL)-based strategies, and traditional solutions such as Round-Robin or static schedulers. This comparison has been conducted using ITSASO, an in-house developed simulation platform for evaluating decentralized task offloading strategies in a three-layer computing hierarchy comprising IoT, fog, and cloud nodes. The platform models service generation in the IoT layer using a configurable distribution, enabling each IoT node to decide whether to execute tasks autonomously (locally), offload them to the fog layer, or send them to the cloud server. Our approach aims to minimize the energy consumption of devices while meeting tasks’ latency requirements. Our simulation results reveal that Lyapunov optimization excels in static environments, while DRL approaches prove more effective in dynamic settings by better adapting to changing requirements and workloads. This study offers an analysis of the trade-offs between these solutions, highlighting the scenarios in which each scheduling approach is most suitable, thereby contributing valuable theoretical insights into the effectiveness of various offloading strategies in different environments. The source code of ITSASO is publicly available.
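The Control Theory baseline rests on Lyapunov drift-plus-penalty: at each slot the node picks the action that minimizes V times the energy penalty minus the backlog-weighted service, and then updates a virtual task queue. The sketch below is a generic single-queue illustration with invented costs and capacities, not the ITSASO implementation.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
V = 50.0           # Lyapunov trade-off weight: larger V favors energy savings over backlog
Q = 0.0            # virtual task queue (tasks awaiting service)

# Per-slot action set (hypothetical): (label, energy cost in J, service capacity in tasks/slot).
actions = [("local", 0.2, 2.0), ("fog", 0.5, 4.0), ("cloud", 1.2, 6.0)]

backlog, energy_used, choices = [], [], Counter()
for t in range(5000):
    arrivals = rng.poisson(3.0)                           # new tasks this slot
    # Drift-plus-penalty rule: spend energy only when the backlog Q makes it worthwhile.
    label, energy, service = min(actions, key=lambda a: V * a[1] - Q * a[2])
    Q = max(Q + arrivals - service, 0.0)                  # virtual queue update
    backlog.append(Q)
    energy_used.append(energy)
    choices[label] += 1

print(f"mean backlog {np.mean(backlog):.2f} tasks, mean energy {np.mean(energy_used):.3f} J/slot")
print(dict(choices))
```

The interplay is visible in the output: the node idles on cheap local execution until the queue grows, then switches to fog or cloud service just often enough to keep the backlog bounded, which is the latency/energy trade-off the paper compares against the DRL schedulers.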
Citations: 0