Pub Date: 2024-04-16 | DOI: 10.1016/j.simpat.2024.102941
Surendra Singh
The issue of task scheduling for a multi-core processor in Fog networks, with a focus on security and energy efficiency, is of great importance in real-time systems. Currently, scheduling algorithms designed for cluster computing environments utilize dynamic voltage scaling (DVS) to decrease CPU power consumption, albeit at the expense of performance. This problem becomes more pronounced when a real-time task requires robust security, resulting in heavily overloaded nodes (CPUs or computing systems) in a cluster computing environment. To address these challenges, a solution called “Energy efficient Security Driven Scheduling of Real-Time Tasks using DVS-enabled Fog Networks (ESDS)” has been proposed. The primary goal of ESDS is to dynamically adjust CPU voltages or frequencies based on the workload conditions of nodes in Fog networks, thereby achieving optimal trade-offs between security, scheduling, and energy consumption for real-time tasks. By dynamically reducing voltage or frequency levels, ESDS conserves energy while still meeting deadlines for both running and new tasks, especially during periods of high system workload. Comprehensive experiments compare the ESDS algorithm with established baseline algorithms, including MEG, MELV, MEHV, and AEES, and affirm its originality and effectiveness.
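The DVS trade-off this abstract relies on (a lower frequency saves energy but lengthens execution) can be sketched generically. This is an illustration of the principle only, not the ESDS algorithm; the cubic-power/quadratic-energy scaling and all numbers are textbook assumptions:

```python
def pick_frequency(cycles, deadline, freq_levels):
    """Return the lowest frequency (cycles/s) that finishes `cycles` of
    work within `deadline` seconds, or None if even the top level is too
    slow. Running slower permits a lower supply voltage, which is where
    the energy saving comes from."""
    for f in sorted(freq_levels):
        if cycles / f <= deadline:
            return f
    return None

def relative_energy(f, f_max):
    """Dynamic energy relative to full speed under the common textbook
    scaling P ~ f^3 (since V ~ f) and runtime ~ 1/f, hence E ~ f^2."""
    return (f / f_max) ** 2

# A 2e9-cycle task with a 4 s deadline: 0.5 GHz just meets it
f = pick_frequency(cycles=2e9, deadline=4.0, freq_levels=[0.5e9, 1e9, 2e9])
```

At a quarter of the maximum frequency, this model predicts dynamic energy of (0.25)^2, about 6% of the full-speed figure, which is the kind of headroom a DVS scheduler trades against deadlines.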
Title: Energy efficient Security Driven Scheduling for Real-Time Tasks through DVS-enabled Fog Networks. Simulation Modelling Practice and Theory, Volume 134, Article 102941.
Pub Date: 2024-04-16 | DOI: 10.1016/j.simpat.2024.102947
Suayb S. Arslan, James Peng, Turguy Goker
High performance computing data is surging fast into the exabyte-scale world, where tape libraries are the main platform for long-term durable data storage besides high-cost DNA. Tape libraries are extremely hard to model, but accurate modeling is critical for system administrators to obtain valid performance estimates for their designs. This research introduces a discrete-event tape simulation platform that realistically models tape library behavior in a networked cloud environment by incorporating real-world phenomena and effects. The platform addresses several challenges, including precise estimation of data access latency, robot exchange rates, data collocation, deduplication/compression ratio, and attainment of durability goals through replication or erasure coding. Using the proposed simulator, one can compare a single enterprise configuration with multiple commodity library configurations, making it a valuable tool for system administrators and reliability engineers seeking practical and dependable performance estimates for durable, cost-efficient cold data storage architecture designs.
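The access-latency estimation a tape simulator performs can be illustrated with a toy single-robot mount queue; this is a minimal sketch under simplified assumptions (one robot arm, FIFO service, a fixed mount time), not the TALICS3 simulator:

```python
def simulate_mounts(requests, mount_time=60.0):
    """Tiny discrete-event sketch: one robot arm serves tape-mount
    requests in FIFO order. `requests` is a sorted list of arrival
    times (seconds); each request's latency is its queueing wait plus
    the (assumed fixed) mount time."""
    robot_free = 0.0
    latencies = []
    for arrival in requests:
        start = max(arrival, robot_free)   # wait if the robot is busy
        robot_free = start + mount_time    # robot occupied during mount
        latencies.append(robot_free - arrival)
    return latencies
```

Even this toy model exposes the queueing effect the abstract alludes to: a request arriving while the robot is busy pays the residual service time of the request ahead of it.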
Title: TALICS3: Tape library cloud storage system simulator. Simulation Modelling Practice and Theory, Volume 134, Article 102947.
Pub Date: 2024-04-15 | DOI: 10.1016/j.simpat.2024.102945
Wei Xie, Xiongfeng Peng, Yanru Liu, Junhai Zeng, Lili Li, Toshio Eisaka
With the rapid development of intelligent manufacturing, the application of automated guided vehicles (AGVs) in intelligent warehousing systems has become increasingly common. Efficiently planning conflict-free paths for multiple AGVs while minimizing the total task completion time is crucial for system performance. Unlike recent approaches in which the conflict avoidance strategy and the path planning algorithm are executed independently or separately, this paper proposes an improved conflict-free A* algorithm that integrates conflict avoidance into the initial path planning process. Based on the heuristic A* algorithm, instruction time consumption is used as the key evaluation indicator of the cost function, and turning consumption is added to the future path cost evaluation. Moreover, the expansion of child nodes is optimized: a five-element search set containing a “zero movement” is proposed to implement a proactive pause-wait strategy. Prediction rules are then designed to add constraints to three types of instructions based on the timeline map, guiding the heuristic search toward conflict-free child nodes. Extensive simulations show that coordination planning based on the improved conflict-free A* algorithm not only effectively achieves advance conflict avoidance at the algorithmic level, but also exhibits lower computational complexity and higher task completion efficiency than other coordination planning methods.
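The central idea, A* over time-expanded states with a fifth "zero movement" action, can be sketched as follows. This is a generic reconstruction under assumptions (4-connected grid, unit-time moves, a shared reservation table), not the paper's exact cost function or prediction rules:

```python
import heapq

def timed_astar(grid, start, goal, reserved):
    """Conflict-aware A* over (position, time) states. The action set
    has five elements: four grid moves plus a "zero movement" (stay in
    place), so an AGV can pause to let a conflict pass. `reserved` holds
    (x, y, t) cells claimed by other AGVs; grid cells equal to 1 are
    static obstacles. Returns one position per timestep, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    horizon = 4 * rows * cols                 # bound the time expansion
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, t, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if (pos, t) in seen or t >= horizon:
            continue
        seen.add((pos, t))
        for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
            nx, ny = pos[0] + dx, pos[1] + dy
            if (0 <= nx < rows and 0 <= ny < cols
                    and grid[nx][ny] == 0
                    and (nx, ny, t + 1) not in reserved):
                heapq.heappush(open_set, (t + 1 + h((nx, ny)), t + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None
```

On a 2x2 grid with both neighbors of the start reserved at t=1, the planner is forced to take exactly one "zero movement" before proceeding, which is the pause-wait behavior the five-element search set enables.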
Title: Conflict-free coordination planning for multiple automated guided vehicles in an intelligent warehousing system. Simulation Modelling Practice and Theory, Volume 134, Article 102945.
Pub Date: 2024-04-12 | DOI: 10.1016/j.simpat.2024.102946
Sugan J, Isaac Sajan R
In the realm of e-commerce, the growing complexity of dynamic workloads and resource management poses a substantial challenge for platforms aiming to optimize user experiences and operational efficiency. To address this issue, the PredictOptiCloud framework is introduced, combining sophisticated methodologies with comprehensive performance analysis. The framework takes a domain-specific approach, extracting and processing historical workload data with Domain-specific Hierarchical Attention BiLSTM networks, which enables PredictOptiCloud to effectively predict and manage both stable and dynamic workloads. Furthermore, it employs Spider Wolf Optimization (SWO) for load balancing and offloading decisions, optimizing resource allocation and enhancing user experiences. The performance analysis of PredictOptiCloud is a multifaceted evaluation whose key metrics include response time, throughput, resource utilization rate, cost-efficiency, conversion rate, rate of successful task offloading, precision, accuracy, task volume, and churn rate. On these metrics, PredictOptiCloud demonstrates its strength not only in predicting and managing workloads but also in improving user satisfaction, operational efficiency, and cost-effectiveness, positioning itself as a valuable asset for e-commerce platforms.
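The offloading decision that SWO optimizes can be illustrated with a much simpler greedy least-loaded baseline. This is explicitly not Spider Wolf Optimization, just a sketch of the kind of decision being made, over a hypothetical node table:

```python
def offload_decision(task_load, nodes):
    """Greedy baseline for task offloading: send the task to the node
    whose utilization after accepting it would be lowest, rejecting any
    placement that would exceed capacity. `nodes` maps a node id to a
    (current_load, capacity) pair; returns the chosen node id or None
    if every node would be overloaded."""
    best, best_util = None, float("inf")
    for nid, (load, cap) in nodes.items():
        util = (load + task_load) / cap
        if util <= 1.0 and util < best_util:
            best, best_util = nid, util
    return best
```

A metaheuristic like SWO would search jointly over many such placements (and their predicted future workloads) instead of deciding one task at a time, which is where the claimed gains over greedy baselines would come from.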
Title: PredictOptiCloud: A hybrid framework for predictive optimization in hybrid workload cloud task scheduling. Simulation Modelling Practice and Theory, Volume 134, Article 102946.
Pub Date: 2024-04-11 | DOI: 10.1016/j.simpat.2024.102943
Haozhou Ma, Peng Zhang, Yingwei Dong, Xuewen Wang, Rui Xia, Bo Li
The complexity of the underground environment in coal mines often leads to varying load conditions during the operation of the scraper conveyor, which can affect the lifespan of its components and result in unnecessary energy consumption. A test platform for the scraper conveyor was constructed based on similarity theory to measure torque, speed, chain tension, and scraper acceleration during transportation. A DEM-MBD model of the scraper conveyor was developed and validated through transport tests and similarity theories to analyze the rigid-discrete coupling effect under different chain speed-load conditions. The results revealed a stratification phenomenon and a Brazil nut (segregation) effect in the movement of coal. The average velocity of the upper and lower coal layers gradually increased during transportation, while the difference between them gradually decreased. As the load increased, the stacking density and height of coal between scrapers also increased, leading to a higher force exerted on the scraper and chain. As the chain speed increased, the stacking density and height of coal between scrapers decreased, along with a decrease in the force applied to the scraper and chain. The formation of three-body wear requires a specific positional condition: when the scraper (chain)-coal-deck plate (chute liner) forms a particle stagnation state, severe wear occurs on the parts. This study provides a foundation for analyzing the transport mechanism of the scraper conveyor from the particle perspective, offers a simulation reference for analyzing the mechanical and tribological characteristics of the line pan and scraper chain, and serves as a guideline for the future development of transportation state monitoring and the optimization of components under different working conditions.
Title: Study on the rigid-discrete coupling effect of scraper conveyor under different chain speed-load conditions. Simulation Modelling Practice and Theory, Volume 134, Article 102943.
Pub Date: 2024-04-10 | DOI: 10.1016/j.simpat.2024.102944
Helen D. Karatza
Title: Modeling and simulation of services computing. Simulation Modelling Practice and Theory, Volume 134, Article 102944.
Pub Date: 2024-04-10 | DOI: 10.1016/j.simpat.2024.102942
Zilong Yang, Yong Hu, Mingxu Xu, Jiyu Tian, Hao Pang, Xiangyang Liu
Parameter calibration is a critical step in accurate modeling with the discrete element method (DEM), but the time-consuming and complex calibration process limits the practical utilization of DEM. Herein, a catch-up penalty algorithm is proposed to simultaneously adjust multiple micro parameters of the flat-joint model through iterations. The effect of micro parameters on macro parameters was investigated by conducting 64 sets of orthogonal tests in PFC3D and analyzing the results by ANOVA. Regression analysis was used to establish preliminary formulas for directly obtaining initial values of the micro parameters and trend equations for deriving the iterative formulas. Based on the preliminary and iterative formulas, a calibration process was proposed in which the micro parameters of each iteration can be calculated directly, reducing researchers' dependence on experience.
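The iterative loop described, computing each iteration's micro parameters from the macro-response error until it falls within tolerance, can be sketched for a single parameter. The damped proportional update and the toy macro response below are assumptions for illustration, not the paper's catch-up penalty formulas:

```python
def calibrate(simulate, target, p0, tol=0.05, max_iter=20):
    """Iteratively nudge one micro parameter until the simulated macro
    response lies within `tol` (relative error) of the target value.
    `simulate` maps the micro parameter to a macro response and is
    assumed monotone increasing in the parameter."""
    p = p0
    for i in range(max_iter):
        y = simulate(p)
        err = (target - y) / target
        if abs(err) <= tol:
            return p, i            # converged after i corrections
        p *= 1 + 0.5 * err         # damped proportional correction
    return p, max_iter

# Hypothetical macro response, monotone in the micro parameter
p, iters = calibrate(lambda q: 2.0 * q ** 1.3, target=10.0, p0=1.0)
```

The damping factor (0.5 here) trades convergence speed against overshoot, which mirrors the role of the trend equations in steering each iteration's parameter update.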
Title: An iterative method to improve the calibration accuracy of flat-joint models: Catch-up penalty algorithm. Simulation Modelling Practice and Theory, Volume 134, Article 102942.
Pub Date: 2024-04-05 | DOI: 10.1016/j.simpat.2024.102931
Luis Veas-Castillo, Juan Ovando-Leon, Carolina Bonacic, Veronica Gil-Costa, Mauricio Marin
Natural disasters drastically impact society, causing emotional disorders as well as serious accidents that can lead to death. Such disasters also cause serious damage to computer and communications systems through the complete or partial destruction of infrastructure, causing the software applications running on that infrastructure to crash. Additionally, these applications must provide a stable service to a large number of users and support unpredictable workload peaks. In this work, we propose a methodology to predict the performance of software applications designed for emergency situations when a natural disaster strikes. The applications are deployed on a distributed platform formed of commodity hardware, typically available at universities, using container technology and container orchestration. We also present a specification language to formalize the definition of, and interaction between, the components, services, and computing resources used to deploy the applications. Our proposal predicts computing performance by modeling and simulating the components deployed on a distributed computing platform, combined with machine learning techniques. We evaluate the proposal under different scenarios and compare its results against actual implementations of two applications deployed on a distributed computing infrastructure.
Title: A methodology for performance estimation of bot-based applications for natural disasters. Simulation Modelling Practice and Theory, Volume 134, Article 102931. Results show that the proposal predicts application performance with an error between 2% and 7%.
Pub Date: 2024-04-02 | DOI: 10.1016/j.simpat.2024.102930
Xu Chen, Siyu Li, Wenzhang Yang, Yujia Chen, Hao Wang
Right-turning vehicle behavior at signalized intersections is not well understood, which complicates the interaction with pedestrians. Current micro-dynamic modeling research falls short of effectively simulating this complexity. Specifically, existing models fail to adequately capture the three states a right-turning vehicle may undergo: car-following, free right turn, and avoidance of conflicting pedestrians. Moreover, pedestrian behavior is typically influenced by conflicting vehicles, surrounding pedestrians, and traffic signals. To simulate these behaviors, a right-turning and yielding intelligent driver model (RTYIDM), a modified social force model (MSFM) accounting for green-light pressure, and a pedestrian-vehicle yielding decision model are established. Model calibration uses detailed behavioral data collected and extracted from field observations. Furthermore, a microsimulation platform with 3D visualization and playback features has been developed to facilitate testing and demonstration. The model is validated by comparison with actual trajectories in three representative pedestrian-crossing scenarios involving pedestrian-vehicle conflict, and the calibrated model's ability to predict pedestrian-interaction events and estimate vehicle yielding rates is also assessed.
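The social force component underlying the MSFM can be sketched in its classic (Helbing-style) form. The paper's green-light-pressure modification is not reproduced here, and all constants are illustrative assumptions:

```python
import math

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a bare-bones social force model: a driving
    term relaxes velocity toward the desired speed `v0` along the goal
    direction, and exponential repulsion (strength A, range B) pushes
    away from nearby pedestrians in `others`."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    ex, ey = dx / dist, dy / dist                 # unit goal direction
    fx = (v0 * ex - vel[0]) / tau                 # driving force
    fy = (v0 * ey - vel[1]) / tau
    for ox, oy in others:                         # pedestrian repulsion
        rx, ry = pos[0] - ox, pos[1] - oy
        d = math.hypot(rx, ry) or 1e-9
        mag = A * math.exp(-d / B)
        fx += mag * rx / d
        fy += mag * ry / d
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

A green-light-pressure term of the kind the MSFM adds would typically raise the desired speed `v0` or weaken the repulsion as the remaining green time shrinks; that extension is left out of this sketch.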
Title: Enhanced microsimulation framework for right-turning vehicle-pedestrian interactions at signalized intersection. Simulation Modelling Practice and Theory, Volume 134, Article 102930.
Pub Date : 2024-03-26DOI: 10.1016/j.simpat.2024.102928
Mohammed Mustafa , Salman Pervaiz , Ibrahim Deiab
Titanium alloys, including Ti6Al4V, are considered hard-to-cut materials due to their low thermal conductivity, low elastic modulus and high chemical reactivity. This leads to high cutting forces and high surface roughness. Thermally assisted machining is used to improve the machinability of Ti6Al4V. To improve its performance, this study investigates how the cutting force, cutting-zone temperatures, chip morphology, shear plane angle and strain rate are affected by the cutting speed and the heating element characteristics during thermally assisted machining of Ti6Al4V. A 2D numerical model simulating the orthogonal cutting process was created using ABAQUS/Explicit software. In this model, the Johnson-Cook constitutive model was used to describe the material behavior during the cutting process, and the Johnson-Cook damage model was used to simulate the chip separation mechanism. After verifying the model against results found in the literature, a number of simulations were run at different levels of four factors: cutting speed (40, 60, 80, 100, 120 and 140 m/min), heat source temperature (200, 400 and 600 °C), heating source distance from the cutting tool (0.3, 0.6 and 0.9 mm) and heating source size/diameter (0.6, 0.8 and 1 mm). A Taguchi L18 orthogonal mixed-level design was used to plan the simulation runs in Minitab software. ANOVA was used to investigate the significance of the four factors, and the response table of means and the main effects of means are used to compare the four factors and rank them. Based on a 95% confidence interval (CI), the results show that cutting speed has a significant effect on cutting force, strain rate, chip compression ratio, cutting tool nose temperature, cutting tool and chip temperature in the secondary deformation zone, average chip thickness at peaks, average chip thickness at valleys and average pitch.
This conclusion is based on the P-values, which are well below 0.05, and the contribution, which reaches 99.01%. Similarly, based on P-values (< 0.05) and contributions (up to 12.16%), the heating source temperature has a significant effect on average chip thickness at valleys, chip compression ratio and strain rate. Cutting speed ranks first among the four factors affecting cutting force, cutting-zone temperatures, chip morphology, shear plane angle and strain rate. The effect of instantaneous heating applied directly before the cutting process is negligible compared to the effect of plastic deformation and the fracture mechanism in the cutting zone.
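The Johnson-Cook constitutive model named above combines strain hardening, strain-rate sensitivity, and thermal softening multiplicatively. A minimal sketch, assuming a commonly quoted (illustrative, not the paper's calibrated) parameter set for Ti6Al4V:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A=862.0, B=331.0, n=0.34, C=0.012, m=0.8,
                        eps_dot_ref=1.0, T_room=25.0, T_melt=1650.0):
    """Johnson-Cook flow stress (MPa).

    sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T*^m)

    A, B, n, C, m are illustrative Ti6Al4V values from the open
    literature, not the parameters calibrated in this study.
    """
    # Homologous temperature: 0 at room temperature, 1 at melting
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * strain ** n) \
        * (1 + C * math.log(strain_rate / eps_dot_ref)) \
        * (1 - T_star ** m)
```

The thermal-softening factor (1 - T*^m) is what makes preheating the workpiece to 200-600 °C lower the flow stress, and hence the cutting force, in thermally assisted machining.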
{"title":"A novel finite element model for thermally induced machining of Ti6Al4V","authors":"Mohammed Mustafa , Salman Pervaiz , Ibrahim Deiab","doi":"10.1016/j.simpat.2024.102928","DOIUrl":"10.1016/j.simpat.2024.102928","url":null,"abstract":"<div><p>Titanium alloys, including Ti6Al4V, are considered hard to cut materials due to their low thermal conductivity, low elastic modules and high chemical reactivity. This leads to high cutting forces and high surface roughness. Thermal assisted machining is used to improve the machinability of Ti6Al4V. To improve the performance of thermal assisted machining, this study investigates how are the cutting force, cutting zones temperatures, chip morphology, shear plane angle and strain rate are affected by the cutting speed and the heating element characteristics during thermally assisted machining of Ti6Al4V. A 2D numerical model simulating orthogonal cutting process was created using ABAQUS/Explicit software. In this model, Johnson Cook constitutive model was used to describe the material behavior during cutting process. Also, Johnson Cook damage model was used to simulate chip separation mechanism. After the verification of the model by comparison with results found in the literature, a number of simulations were run at different levels of four factors: cutting speed (40, 60, 80, 100, 120 and 140 m/min), heat source temperature (200, 400 and 600 °C), heating source distance from the cutting tool (0.3, 0.6 and 0.9 mm) and heating source size/diameter (0.6, 0.8 and 1 mm). Taguchi L18 orthogonal mixed level design was used to plan for simulation runs using Minitab software. ANOVA analysis was used to investigate the significance of the four factors. The response table of means and the main effect of means are used to compare between the four factors and find their ranking. 
Based on 95% confidence Interval (CI), the results show that cutting speed has a significant effect on cutting force, strain rate, chip compression ratio, cutting tool nose temperature, cutting tool and chip temperature in the secondary deformation zone, average chip thickness at peaks and average chip thickness at valleys and average pitch. This conclusion is based on the P-values which are << 0.05 and the contribution which reaches 99.01%. Similarly, based on P-values (< 0.05) and contributions (up to 12.16%), the heating source temperature has a significant effect on average chip thickness at valleys, chip compression ratio and strain rate. The cutting speed has Rank 1 among the four factors affecting cutting force, cutting zones temperatures, chip morphology, shear plane angel and stain rate. The effect of instantaneous heating directly before cutting process is negligible compared to the effect of plastic deformation and fracture mechanism in the cutting zone.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"134 ","pages":"Article 102928"},"PeriodicalIF":4.2,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140405482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}