Pub Date: 2024-04-30 | DOI: 10.1016/j.simpat.2024.102951
Fengjiang Wang, Chuchu Rao, Xiaosheng Fang, Yeshen Lan
Clustering routing protocols currently suffer from problems such as single points of failure at cluster-head nodes, poor network dynamics, and uneven data transmission, all of which are critical to optimizing energy efficiency, network lifespan, and network topology control. This optimization problem is NP-hard and difficult for conventional algorithms to solve. This paper proposes a new multi-objective cluster routing protocol (CHEABC-QCRP) aimed at optimizing network energy consumption, system lifespan, and quality of service (QoS). The protocol is based on a new chaotic hybrid elite artificial bee colony algorithm (CHEABC), also proposed in this paper, which has strong search ability and greatly reduces convergence time. In addition, a new chaotic strategy is designed to effectively prevent falling into local optima and premature convergence. In simulation experiments against multiple routing protocols, extensive test results show that this protocol significantly reduces network energy consumption, greatly extends system lifespan, and effectively improves QoS in industrial wireless sensor networks (IWSNs).
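The specific chaotic strategy of CHEABC is not detailed in this abstract. Purely as a hedged illustration, the sketch below shows the standard logistic map often used in chaotic metaheuristics to diversify candidate solutions; the function names are hypothetical, not from the paper.

```python
def logistic_sequence(x0, n, r=4.0):
    """Generate n values of the logistic map x_{k+1} = r * x_k * (1 - x_k),
    a common chaotic sequence for diversifying metaheuristic candidates."""
    values, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        values.append(x)
    return values

def chaotic_candidate(lower, upper, chaos_value):
    """Map a chaotic value from [0, 1] into the search interval [lower, upper]."""
    return lower + chaos_value * (upper - lower)
```

In a chaotic ABC variant, values like these typically replace uniform random numbers when scout bees regenerate abandoned food sources, which helps the swarm escape local optima.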
Title: CHEABC-QCRP: A novel QoS-aware cluster routing protocol for industrial IoT
Pub Date: 2024-04-26 | DOI: 10.1016/j.simpat.2024.102952
Keerthan Kumar T.G., Shivangi Tomar, Sourav Kanti Addya, Anurag Satpathy, Shashidhar G. Koolagudi
The integration of Software-Defined Networking (SDN) into Network Virtualization (NV) significantly enhances network management, isolation, and troubleshooting capabilities. However, it brings forth the intricate challenge of allocating Substrate Network (SN) resources for various Virtual Network Requests (VNRs), a process known as Virtual Network Embedding (VNE). It encompasses solving two intractable sub-problems: embedding Virtual Machines (VMs) and embedding Virtual Links (VLs). While the research community has focused on formulating embedding strategies, there has been less emphasis on practical implementation at a laboratory scale, which is crucial for comprehensive design, development, testing, and validation policies for large-scale systems. However, conducting tests using commercial providers presents challenges due to the scale of the problem and associated costs. Moreover, current simulators lack accuracy in representing the complexities of communication patterns, resource allocation, and support for SDN-specific features. These limitations result in inefficient implementations and reduced adaptability, hindering seamless integration with commercial cloud providers. To address this gap, this work introduces EFraS (Emulated Framework for Dynamic VNE Strategies over SDN). The goal is to aid developers and researchers in iterating, testing, and evaluating VNE solutions seamlessly, leveraging a modular design and customized reconfigurability. EFraS offers various functionalities, including generating real-world SN topologies and VNRs. Additionally, it integrates with a diverse set of evaluation metrics to streamline the testing and validation process. EFraS leverages Mininet, Ryu controller, and OpenFlow switches to closely emulate real-time setups. Moreover, we integrate EFraS with various state-of-the-art VNE schemes, ensuring the effective validation of embedding algorithms.
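EFraS's own embedding pipeline is not reproduced here. As an assumed, minimal illustration of the VM-embedding sub-problem the abstract mentions, the sketch below greedily maps virtual machines onto substrate nodes by spare CPU; all names are hypothetical.

```python
def greedy_vm_embedding(substrate_cpu, vnr_cpu):
    """Map each virtual machine of one VNR to the substrate node with the
    most remaining CPU that can still host it (greedy, largest VM first).
    substrate_cpu: dict node -> available CPU
    vnr_cpu: dict vm -> requested CPU
    Returns a mapping vm -> node, or None if the VNR must be rejected."""
    remaining = dict(substrate_cpu)
    mapping = {}
    # Place the largest VMs first to reduce fragmentation.
    for vm, need in sorted(vnr_cpu.items(), key=lambda kv: -kv[1]):
        # Pick the substrate node with the most spare capacity.
        node = max(remaining, key=remaining.get)
        if remaining[node] < need:
            return None  # no node can host this VM: reject the request
        mapping[vm] = node
        remaining[node] -= need
    return mapping
```

A real VNE strategy would add the virtual-link embedding step over substrate paths; this sketch only covers the node mapping half of the problem.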
Title: EFraS: Emulated framework to develop and analyze dynamic Virtual Network Embedding strategies over SDN infrastructure
Pub Date: 2024-04-24 | DOI: 10.1016/j.simpat.2024.102950
Yacong Gao, Chenjing Zhou, Jian Rong, Xia Zhang, Yi Wang
Calibrating microscopic traffic simulation models is a prerequisite for simulation applications. This study proposes three novel methods to improve the accuracy and interpretability of the calibration model. The proposed approach involves selecting the calibration parameter, refining the model parameter system, and optimizing the calibration results. The first method expands the single-point mean into a multi-point distribution. The cumulative distribution curve of delay was selected as the calibration parameter. The second method divides the parameter system into global and local parameters. Global parameters were calibrated using NGSIM measured data, and local parameters were calibrated through intelligent algorithms. The third method proposes a methodology of parameter clustering recursion based on the genetic algorithm results, with information entropy selected as the analysis index. To evaluate the effectiveness of the proposed optimization methods, this study used NGSIM trajectory data as a case study. Eight simulation schemes based on the three optimization methods were designed, and simulation experiments were conducted using the VISSIM platform. The results show that the accuracy of the multi-point distribution calibration and parameter value optimization method is significantly higher than the default method. Additionally, the optimization method with calibration of both global and local parameters was more consistent with actual driving characteristics. This study provides a theoretical foundation for improving the practical application of traffic simulation technology, which has significant implications for transportation planning and management.
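The abstract selects the cumulative distribution curve of delay as the calibration parameter. One plausible fitness for the intelligent-algorithm step (an assumption, not the paper's exact formula) is the mean gap between the observed and simulated delay CDFs:

```python
import bisect

def empirical_cdf(samples, points):
    """Empirical CDF: fraction of samples <= each evaluation point."""
    ordered = sorted(samples)
    n = len(ordered)
    return [bisect.bisect_right(ordered, p) / n for p in points]

def cdf_error(observed, simulated, points):
    """Mean absolute gap between two delay CDFs at the given points --
    a candidate GA fitness for multi-point distribution calibration."""
    f_obs = empirical_cdf(observed, points)
    f_sim = empirical_cdf(simulated, points)
    return sum(abs(a - b) for a, b in zip(f_obs, f_sim)) / len(points)
```

Comparing whole distributions this way, rather than a single-point mean, is what makes the multi-point calibration stricter: two delay samples with equal means but different spreads yield a nonzero error.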
Title: Enhancing parameter calibration for micro-simulation models: Investigating improvement methods
Pub Date: 2024-04-23 | DOI: 10.1016/j.simpat.2024.102949
Martin Ďuriška, Hana Neradilová, Gabriel Fedorko, Vieroslav Molnár, Nikoleta Mikušová
A Non-Fungible Token (NFT) is a digital asset that serves as proof of ownership and originality in the digital world. It is generally a unique data unit created from a digital file. However, it cannot be just any digital file: it is typically an audio, video, image, or photo file, and this restriction is the main limitation. There are many other digital files for which a connection with NFT and blockchain technology would make sense; such files include, among other things, various simulation models. As simulation models are increasingly used daily to manage many types of logistics processes, the questions of how to prevent unauthorised copying of a simulation model and how to protect the copyright of its authors are coming to the fore. NFT and blockchain represent a robust technology whose possible uses are gradually expanding, and simulation models could be one area of application. The paper presents research results that enable the implementation of NFT and blockchain technology for simulation models. The research confirmed the possibility of creating an NFT through the decentralised public blockchain XRP Ledger (XRPL) and the marketplace xrp.cafe, which can be used to verify the ownership and originality of a simulation model.
Title: Use of Non-Fungible Tokens for proof of ownership and originality of simulation model in logistics
Pub Date: 2024-04-20 | DOI: 10.1016/j.simpat.2024.102948
Lorenzo Tiacci, Andrea Rossi
The job shop scheduling problem, which involves the routing and sequencing of jobs in a job shop context, is a relevant subject in industrial engineering. Approaches based on Deep Reinforcement Learning (DRL) are very promising for dealing with the variability of real working conditions due to dynamic events such as the arrival of new jobs and machine failures. Discrete Event Simulation (DES) is essential for training and testing DRL approaches, which are based on the interaction of an intelligent agent and the production system. Nonetheless, there are numerous papers in the literature in which DRL techniques, developed to solve the Dynamic Flexible Job Shop Problem (DFJSP), have been implemented and evaluated in the absence of a simulation environment. In the paper, the limitations of these techniques are highlighted, and a numerical experiment that demonstrates their ineffectiveness is presented. Furthermore, in order to provide the scientific community with a simulation tool designed to be used in conjunction with DRL techniques, an agent-based discrete event simulator is also presented.
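The authors' simulator is agent-based and far richer than can be shown here. As a hedged, minimal illustration of why DES matters for DRL dispatching, the toy single-machine loop below asks a policy for a decision at every machine-free event; names and data layout are assumptions, not the paper's design.

```python
import heapq

def simulate(jobs, policy):
    """Tiny single-machine discrete-event loop: at every 'machine free'
    event, the policy picks the next job among those already released.
    jobs: list of (release_time, processing_time). Returns the makespan."""
    events = [(r, i) for i, (r, _) in enumerate(jobs)]  # arrival events
    heapq.heapify(events)
    queue, now, done = [], 0.0, 0
    while done < len(jobs):
        # Release every job that has arrived by 'now'.
        while events and events[0][0] <= now:
            _, i = heapq.heappop(events)
            queue.append(i)
        if not queue:            # machine idle: jump to the next arrival
            now = events[0][0]
            continue
        i = policy(queue, jobs)  # the (DRL) agent's dispatching decision
        queue.remove(i)
        now += jobs[i][1]        # process the chosen job
        done += 1
    return now
```

A trained DRL agent would replace the policy callable; with a shortest-processing-time rule, the same loop reproduces a classic dispatching heuristic, which is exactly the interaction pattern a DES environment must provide during training.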
Title: A discrete event simulator to implement deep reinforcement learning for the dynamic flexible job shop scheduling problem
Pub Date: 2024-04-16 | DOI: 10.1016/j.simpat.2024.102941
Surendra Singh
The issue of task scheduling for multi-core processors in Fog networks, with a focus on security and energy efficiency, is of great importance in real-time systems. Currently, scheduling algorithms designed for cluster computing environments utilize dynamic voltage scaling (DVS) to decrease CPU power consumption, albeit at the expense of performance. This problem becomes more pronounced when a real-time task requires robust security, resulting in heavily overloaded nodes (CPUs or computing systems) in a cluster computing environment. To address these challenges, a solution called “Energy efficient Security Driven Scheduling of Real-Time Tasks using DVS-enabled Fog Networks (ESDS)” is proposed. The primary goal of ESDS is to dynamically adjust CPU voltages or frequencies based on the workload conditions of nodes in Fog networks, thereby achieving optimal trade-offs between security, scheduling, and energy consumption for real-time tasks. By dynamically reducing voltage or frequency levels, ESDS conserves energy while still meeting deadlines for both running and new tasks, especially during periods of high system workload. Comprehensive experiments compare the ESDS algorithm with established baseline algorithms, including MEG, MELV, MEHV, and AEES, and affirm the originality and effectiveness of ESDS.
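ESDS's exact energy model is not given in this abstract. The sketch below uses the textbook CMOS relation that DVS schedulers commonly build on (dynamic power proportional to f·V², execution time = cycles/f), purely as an assumed illustration:

```python
def dvs_energy(cycles, freq, volt, k=1.0):
    """Dynamic energy of one task under DVS: E = P * t with P = k * f * V^2
    and t = cycles / f, so E = k * cycles * V^2 (textbook CMOS model).
    Returns (energy, execution_time)."""
    time = cycles / freq
    power = k * freq * volt ** 2
    return power * time, time

def meets_deadline(cycles, freq, deadline):
    """A scaled-down frequency is admissible only if the task still finishes."""
    return cycles / freq <= deadline
```

In this simplified model, lowering the frequency alone leaves dynamic energy unchanged; the savings come from the lower supply voltage that a lower frequency permits, subject to the deadline check.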
Title: Energy efficient Security Driven Scheduling for Real-Time Tasks through DVS-enabled Fog Networks
Pub Date: 2024-04-16 | DOI: 10.1016/j.simpat.2024.102947
Suayb S. Arslan, James Peng, Turguy Goker
High performance computing data is surging fast into the exabyte-scale world, where tape libraries are the main platform for long-term durable data storage besides high-cost DNA. Tape libraries are extremely hard to model, but accurate modeling is critical for system administrators to obtain valid performance estimates for their designs. This research introduces a discrete-event tape simulation platform that realistically models tape library behavior in a networked cloud environment by incorporating real-world phenomena and effects. The platform addresses several challenges, including precise estimation of data access latency, rates of robot exchange, data collocation, deduplication/compression ratio, and attainment of durability goals through replication or erasure coding. Using the proposed simulator, one can compare a single enterprise configuration with multiple commodity library configurations, making it a valuable tool for system administrators and reliability engineers as they develop practical, dependable performance estimates for durable, cost-efficient cold data storage architecture designs.
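TALICS3 itself models far more than can be shown here. As an assumed, minimal illustration of one quantity such a simulator estimates, the sketch below computes mean access latency when mount requests queue behind a single robot arm; the timings are invented placeholders.

```python
def mean_access_latency(request_times, mount_s=90.0, robot_s=10.0):
    """Serve tape mount requests one at a time through a single robot arm:
    each request waits for the robot to free up, then pays the cartridge
    exchange time plus the mount time. Returns mean latency in seconds."""
    robot_free = 0.0
    total = 0.0
    for t in sorted(request_times):
        start = max(t, robot_free)          # queue behind the robot arm
        finish = start + robot_s + mount_s  # exchange + mount
        robot_free = finish
        total += finish - t                 # wait + service
    return total / len(request_times)
```

Even this toy model shows why robot exchange rate matters: two simultaneous requests already double the second request's latency, an effect that compounds in a loaded library.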
Title: TALICS3: Tape library cloud storage system simulator
Pub Date: 2024-04-15 | DOI: 10.1016/j.simpat.2024.102945
Wei Xie, Xiongfeng Peng, Yanru Liu, Junhai Zeng, Lili Li, Toshio Eisaka
With the rapid development of intelligent manufacturing, the application of automated guided vehicles (AGVs) in intelligent warehousing systems has become increasingly common. Efficiently planning conflict-free paths for multiple AGVs while minimizing the total task completion time is crucial to the performance of such systems. Unlike recent approaches in which the conflict avoidance strategy and the path planning algorithm are executed independently or separately, this paper proposes an improved conflict-free A* algorithm that integrates conflict avoidance into the initial path planning process. Based on the heuristic A* algorithm, instruction time consumption is used as the key evaluation indicator of the cost function, and turning consumption is added to the future path cost evaluation. Moreover, the expansion mode of child nodes is optimized: a five-element search set containing “zero movement” is proposed to implement a proactive pause-wait strategy. Prediction rules are then designed to add constraints to three types of instructions based on the timeline map, guiding the heuristic planning to search for conflict-free child nodes. Extensive simulations show that coordination planning based on the improved conflict-free A* algorithm not only effectively achieves advanced conflict avoidance at the algorithmic level, but also exhibits lower computational complexity and higher task completion efficiency than other coordination planning methods.
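The paper's exact cost values are not given in this abstract. A hedged sketch of the five-element node expansion, with turning consumption and the pause-wait "zero movement" action, might look like this (the specific costs are assumptions):

```python
# Five-element action set: four grid moves plus "zero movement" (wait).
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]

def expand(node, move_cost=1.0, turn_cost=0.5, wait_cost=1.0):
    """Yield (successor, cost) pairs for an A* state (x, y, heading):
    straight moves cost move_cost, a direction change adds turn_cost,
    and zero movement keeps the pose while paying one time step."""
    x, y, heading = node
    for action in ACTIONS:
        if action == (0, 0):
            # Proactive pause-wait: stay put to let a conflict clear.
            yield (x, y, heading), wait_cost
        else:
            cost = move_cost + (turn_cost if action != heading else 0.0)
            yield (x + action[0], y + action[1], action), cost
```

Folding the heading into the state is what lets the cost function charge for turns, and the explicit wait action is what allows the planner to resolve conflicts in time rather than only in space.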
Title: Conflict-free coordination planning for multiple automated guided vehicles in an intelligent warehousing system
Pub Date: 2024-04-12 | DOI: 10.1016/j.simpat.2024.102946
Sugan J, Isaac Sajan R
In the realm of e-commerce, the growing complexity of dynamic workloads and resource management poses a substantial challenge for platforms aiming to optimize user experience and operational efficiency. To address this issue, the PredictOptiCloud framework is introduced, offering a solution that combines sophisticated methodologies with comprehensive performance analysis. The framework takes a domain-specific approach that extracts and processes historical workload data using Domain-specific Hierarchical Attention BiLSTM networks, enabling PredictOptiCloud to effectively predict and manage both stable and dynamic workloads. Furthermore, it employs Spider Wolf Optimization (SWO) for load balancing and offloading decisions, optimizing resource allocation and enhancing user experience. The performance analysis of PredictOptiCloud involves a multifaceted evaluation, with key metrics including response time, throughput, resource utilization rate, cost-efficiency, conversion rate, rate of successful task offloading, precision, accuracy, task volume, and churn rate. By meticulously assessing these metrics, PredictOptiCloud demonstrates its strength not only in predicting and managing workloads but also in optimizing user satisfaction, operational efficiency, and cost-effectiveness, positioning itself as a valuable asset for e-commerce platforms striving for excellence in an ever-evolving landscape.
{"title":"PredictOptiCloud: A hybrid framework for predictive optimization in hybrid workload cloud task scheduling","authors":"Sugan J , Isaac Sajan R","doi":"10.1016/j.simpat.2024.102946","DOIUrl":"10.1016/j.simpat.2024.102946","url":null,"abstract":"<div><p>In the realm of e-commerce, the growing complexity of dynamic workloads and resource management poses a substantial challenge for platforms aiming to optimize user experiences and operational efficiency. To address this issue, the PredictOptiCloud framework is introduced, offering a solution that combines sophisticated methodologies with comprehensive performance analysis. The framework encompasses a domain-specific approach that extracts and processes historical workload data, utilizing Domain-specific Hierarchical Attention Bi LSTM networks. This enables PredictOptiCloud to effectively predict and manage both stable and dynamic workloads. Furthermore, it employs the Spider Wolf Optimization (SWO) for load balancing and offloading decisions, optimizing resource allocation and enhancing user experiences. The performance analysis of PredictOptiCloud involves a multifaceted evaluation, with key metrics including response time, throughput, resource utilization rate, cost-efficiency, conversion rate, rate of successful task offloading, precision, accuracy, task volume, and churn rate. 
By meticulously assessing these metrics, PredictOptiCloud demonstrates its prowess in not only predicting and managing workloads but also in optimizing user satisfaction, operational efficiency, and cost-effectiveness, ultimately positioning itself as an invaluable asset for e-commerce platforms striving for excellence in an ever-evolving landscape.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":null,"pages":null},"PeriodicalIF":4.2,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140763708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
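The abstract does not spell out how SWO reaches its load-balancing decisions, so as an illustrative stand-in, a greedy least-loaded assignment shows the kind of placement decision such an optimizer produces. All names and the cost model below are hypothetical:

```python
def balance_tasks(tasks, servers):
    """Greedy least-loaded placement: each task (largest cost first)
    goes to the server with the smallest accumulated load. A metaheuristic
    such as SWO would search over assignments like this one."""
    loads = {s: 0.0 for s in servers}
    assignment = {}
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        target = min(loads, key=loads.get)  # currently least-loaded server
        assignment[task] = target
        loads[target] += cost
    return assignment, loads
```

Sorting tasks by descending cost before placing them (the classic longest-processing-time heuristic) keeps the final loads closer to balanced than arrival-order placement would.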
Pub Date : 2024-04-11DOI: 10.1016/j.simpat.2024.102943
Haozhou Ma , Peng Zhang , Yingwei Dong , Xuewen Wang , Rui Xia , Bo Li
The complexity of the underground environment in coal mines often leads to varying load conditions during the operation of the scraper conveyor, which can shorten the lifespan of its components and cause unnecessary energy consumption. A test platform for the scraper conveyor was constructed based on similarity theory to measure torque, speed, chain tension, and scraper acceleration during transportation. A DEM-MBD model of the scraper conveyor was developed and validated through transport tests and similarity theory to analyze the rigid-discrete coupling effect under different chain speed-load conditions. The results revealed a stratification phenomenon and a Brazil nut effect in the movement of the coal. The average velocity of the upper and lower coal layers gradually increased during transportation, while the difference between them gradually decreased. As the load increased, the stacking density and height of coal between scrapers also increased, leading to a higher force exerted on the scraper and chain. As the chain speed increased, the stacking density and height of coal between scrapers decreased, along with a decrease in the force applied to the scraper and chain. The formation of three-body wear requires a specific positional condition: when the scraper (chain), coal, and deck plate (chute liner) form a particle stagnation state, severe wear occurs on these parts.
{"title":"Study on the rigid-discrete coupling effect of scraper conveyor under different chain speed-load conditions","authors":"Haozhou Ma , Peng Zhang , Yingwei Dong , Xuewen Wang , Rui Xia , Bo Li","doi":"10.1016/j.simpat.2024.102943","DOIUrl":"https://doi.org/10.1016/j.simpat.2024.102943","url":null,"abstract":"<div><p>The complexity of the underground environment in coal mines often leads to varying load conditions during the operation of the scraper conveyor, which can affect the lifespan of its components and result in unnecessary energy consumption. A test platform for the scraper conveyor was constructed based on the similarity theory to measure torque, speed, chain tension, and scraper acceleration during transportation. A DEM-MBD model of the scraper conveyor was developed and validated through transport tests and similarity theories to analyze the rigid-discrete coupling effect under different chain speed-load conditions. The results revealed a stratification phenomenon and a Brazilian fruit effect in the movement of coal. The average velocity of the upper and lower coal layers gradually increased during the transportation, while the difference between them gradually decreased. As the load increased, the stacking density and height of coal between scrapers also increased, leading to a higher force exerted on the scraper and chain. As the chain speed increased, the stacking density and height of coal between scrapers decreased, along with a decrease in the force applied to the scraper and chain. The formation of three-body wear necessitates a specific positional condition. When the scraper (chain)- coal-deck plate (chute liner) forms a particle stagnation state, severe wear occurs on the parts. 
This study provides a foundation for analyzing the transport mechanism of scraper conveyor from the particle perspective, offers a simulation reference for analyzing the mechanical and tribological characteristics of the line pan and scraper chain, and serves as a guideline for the future development of transportation state monitoring and the optimization and enhancement of components under different working conditions.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":null,"pages":null},"PeriodicalIF":4.2,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140559173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
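The layer-averaged velocity comparison mentioned in the abstract above can be sketched as a toy post-processing step over DEM particle output. The data layout (height, velocity) pairs and the split threshold are assumptions, not the authors' pipeline:

```python
def layer_velocities(particles, z_split):
    """Split particles into upper and lower layers by height z and
    average the transport-direction velocity of each layer, mimicking
    the upper/lower coal-layer comparison from the DEM results."""
    upper = [v for z, v in particles if z >= z_split]
    lower = [v for z, v in particles if z < z_split]
    mean = lambda vs: sum(vs) / len(vs) if vs else 0.0
    return mean(upper), mean(lower)
```

Repeating this per timestep would reproduce the reported trend, i.e. the gap between the two layer averages narrowing as transport proceeds.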