A Hybrid Simulation Platform for quality-aware evaluation of complex events in an IoT environment
Pub Date: 2024-03-04 | DOI: 10.1016/j.simpat.2024.102919
Dimitris Gkoulis, Cleopatra Bardaki, Mara Nikolaidou, George Kousiouris, Anargyros Tsadimas
Complex Event Processing (CEP) is a successful method for transforming simple IoT events created by sensors into meaningful complex business events. To enhance availability, an event fabrication mechanism is integrated within the CEP model, generating synthetic events to offset missing data and resulting in a quality-aware CEP model. In this model, generated complex events are characterized by quality properties, namely completeness and timeliness. To empirically assess the quality of complex events through experimentation, we have developed a hybrid simulation platform. The platform’s dual nature stems from its distinctive approach of simulating sensor behaviors while concurrently running the quality-aware CEP IoT platform. Users can conduct experiments that closely mimic actual operational scenarios and have, in real time, full visibility and control over all involved aspects, including composite transformations, quality assessment, event fabrication and its effectiveness, and aggregated reports. A representative experiment in an IoT-enabled greenhouse with missing events is presented to demonstrate the usefulness of the platform. The contribution of the hybrid simulation platform is twofold: it provides (a) quality assessment of complex events, using two established quality properties for IoT environments with specific computation formulas, and (b) a comprehensive testbed covering all aspects of a typical IoT setup for realistic experimentation. Together, these elements provide significant cost–benefit advantages by enabling researchers and practitioners to pre-optimize operational efficiency and decision-making in IoT systems.
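The abstract does not reproduce the paper's computation formulas for the two quality properties, so the following is only a minimal sketch of how completeness and timeliness of a complex event might be scored. The `ComplexEvent` fields, the linear timeliness decay, and counting fabricated events against completeness are illustrative assumptions, not the authors' definitions.

```python
from dataclasses import dataclass

@dataclass
class ComplexEvent:
    expected_inputs: int    # simple events the CEP rule needs per window
    received_inputs: int    # events actually delivered by sensors
    fabricated_inputs: int  # synthetic events injected to fill the gap so the rule can still fire
    age_seconds: float      # time since the oldest contributing event
    window_seconds: float   # evaluation window of the CEP rule

def completeness(ev: ComplexEvent) -> float:
    # Fraction of required inputs that were real (fabricated ones excluded).
    return ev.received_inputs / ev.expected_inputs

def timeliness(ev: ComplexEvent) -> float:
    # 1.0 for a fresh event, decaying linearly to 0.0 at the window edge.
    return max(0.0, 1.0 - ev.age_seconds / ev.window_seconds)

ev = ComplexEvent(expected_inputs=10, received_inputs=8,
                  fabricated_inputs=2, age_seconds=12.0, window_seconds=60.0)
print(completeness(ev), timeliness(ev))  # 0.8 0.8
```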
{"title":"A Hybrid Simulation Platform for quality-aware evaluation of complex events in an IoT environment","authors":"Dimitris Gkoulis, Cleopatra Bardaki, Mara Nikolaidou, George Kousiouris, Anargyros Tsadimas","doi":"10.1016/j.simpat.2024.102919","DOIUrl":"10.1016/j.simpat.2024.102919","url":null,"abstract":"<div><p>Complex Event Processing (CEP) is a successful method to transform simple IoT events created by sensors into meaningful complex business events. To enhance availability, an event fabrication mechanism is integrated within the CEP model, generating synthetic events to offset missing data, resulting in a quality-aware CEP model. In this model, generated complex events are characterized by quality properties, namely completeness and timeliness. To empirically assess the quality of complex events through experimentation, we have developed a hybrid simulation platform. The platform’s dual nature stems from its distinctive approach of simulating sensor behaviors while concurrently running the quality-aware CEP IoT platform. Users can conduct experiments that closely mimic actual operational scenarios and have, in real-time, full visibility and control over all involved aspects, including composite transformations, quality assessment, event fabrication and its effectiveness, and aggregated reports. A representative experiment in an IoT-enabled greenhouse with missing events is presented to demonstrate the usefulness of the platform. The contribution of the hybrid simulation platform is twofold: provide (a) quality assessment of complex events, using two established quality properties for IoT environments with specific computation formulas and (b) a comprehensive testbed covering all aspects of a typical IoT setup for realistic experimentation. Together, these elements provide significant cost–benefit advantages by enabling researchers and practitioners to pre-optimize operational efficiency and decision-making in IoT systems.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102919"},"PeriodicalIF":4.2,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140071288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many-objective joint optimization of computation offloading and service caching in mobile edge computing
Pub Date: 2024-03-02 | DOI: 10.1016/j.simpat.2024.102917
Zhihua Cui , Xiangyu Shi , Zhixia Zhang , Wensheng Zhang , Jinjun Chen
The computation offloading problem in mobile edge computing (MEC) has received a lot of attention, but service caching is also a research topic that cannot be ignored in MEC. Due to the limited resources available on the Edge Server (ES), a sound computation offloading and service caching policy must be formulated to maximize system offload efficiency. In this paper, a many-objective joint optimization computation offloading and service caching model (MaJOCOSC) is designed. The model takes into account the limited computing and storage resources of the ES, the delay and energy consumption constraints of different types of tasks, and the multiple processing modes of user tasks, and sets delay, energy consumption, task hit service rate, service cache balancing, and load balancing as the five optimization objectives of MaJOCOSC. Meanwhile, a non-dominated sorting genetic algorithm (NSGAIII-ASF&WD), based on the achievement scalar function (ASF) and a k-nearest-neighbor weighted distance mating selection strategy, is proposed to solve the model more effectively. The ASF ensures that a given strategy performs well on each objective value, and the k-nearest-neighbor weighted distance provides the user with a diversity of strategies. Simulation results show that, compared with other many-objective evolutionary algorithms, NSGAIII-ASF&WD obtains better objective values when solving the model, yielding a suitable computation offloading and service caching strategy.
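The abstract names the ASF and the k-nearest-neighbor weighted distance without giving their formulas. The sketch below uses one common form of the achievement scalarizing function and a plain mean distance to the k nearest neighbors in objective space as stand-ins; both are assumptions, not the paper's exact operators.

```python
import numpy as np

def asf(F, z, w):
    # One common achievement scalarizing function:
    # max_i (f_i - z_i) / w_i for each solution (row of F); lower is better.
    return np.max((F - z) / w, axis=1)

def knn_weighted_distance(F, k=3):
    # Average distance of each solution to its k nearest neighbors in
    # objective space; larger values mark more isolated candidates,
    # which a diversity-preserving mating selection would favor.
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    return np.sort(D, axis=1)[:, :k].mean(axis=1)

F = np.random.rand(20, 5)           # 20 candidates, 5 objectives
z = F.min(axis=0)                   # estimate of the ideal point
w = np.full(5, 1 / 5)               # equal objective weights
convergence = asf(F, z, w)          # selection pressure toward the front
diversity = knn_weighted_distance(F)
```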
{"title":"Many-objective joint optimization of computation offloading and service caching in mobile edge computing","authors":"Zhihua Cui , Xiangyu Shi , Zhixia Zhang , Wensheng Zhang , Jinjun Chen","doi":"10.1016/j.simpat.2024.102917","DOIUrl":"https://doi.org/10.1016/j.simpat.2024.102917","url":null,"abstract":"<div><p>The computation offloading problem in mobile edge computing (MEC) has received a lot of attention, but service caching is also a research topic that cannot be ignored in MEC. Due to the limited resources available on the Edge Server (ES), a wise computation offloading and service caching policy must be formulated in order to maximize system offload efficiency. In this paper, a many-objective joint optimization computation offloading and service caching model (MaJOCOSC) is designed. The model takes into account the limited computing and storage resources of ES, the delay and energy consumption constraints of different types of tasks, and multiple processing modes of user tasks, and sets delay, energy consumption, task hit service rate, service cache balancing, and load balancing as the five optimization objectives of MaJOCOSC. Meanwhile, a non-dominated sorting genetic algorithm (NSGAIII-ASF&WD) based on achievement scalar function (ASF) and the k-nearest neighbor weighted distance mating selection strategy is proposed for better solving the model. The ASF ensures that the given strategy performs well for each objective value, and the k-nearest neighbor weighted distance provides the user with a diversity of strategies. Simulation results show that NSGAIII-ASF&WD can obtain better objective values when solving the model compared with other many-objective evolutionary algorithms, and a suitable computation offloading and service caching strategy is obtained.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102917"},"PeriodicalIF":4.2,"publicationDate":"2024-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140031226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User-defined weight based multi objective task scheduling in cloud using whale optimization algorithm
Pub Date: 2024-02-29 | DOI: 10.1016/j.simpat.2024.102915
Swati Gupta, Ravi Shankar Singh
Cloud computing has revolutionized the IT landscape, providing scalable, on-demand computing resources. For efficiency in cloud environments, it is essential for modern organizations, whose objectives often include cost reduction, resource consumption, operational efficiency, and load balancing, to implement multi-objective solutions. Single-objective systems can fail to handle dynamic and diverse workloads. This study introduces the Multi-Objective Whale Optimization-Based Scheduler (WOA-Scheduler) for efficient task scheduling in cloud computing environments. Leveraging the Whale Optimization Algorithm (WOA), the scheduler optimizes multiple objectives simultaneously, including cost, time, and load balancing. A key feature of the WOA-Scheduler is its flexibility in accommodating user-defined weights for different objectives, allowing organizations to prioritize optimization goals based on their specific requirements. Comparative analysis across various cloud environments demonstrates the superiority of the WOA-Scheduler over traditional single-objective approaches. By achieving a better balance between cost, time, and resource utilization, the scheduler enhances overall performance. Moreover, its multi-objective optimization capabilities enable dynamic adjustment of task assignments in response to changing workload conditions, ensuring efficient resource utilization and workload distribution. Overall, the WOA-Scheduler offers a customizable and adaptable solution for addressing the complexities of modern cloud services, ultimately improving performance and efficiency.
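The abstract states that user-defined weights steer the cost/time/load trade-off but does not give the aggregation. Below is a minimal sketch of one plausible weighted fitness over makespan, cost, and load imbalance; the function name, the use of standard deviation as the imbalance term, and the omission of normalization are all assumptions rather than the paper's formulation.

```python
import numpy as np

def fitness(assignment, exec_time, exec_cost, weights):
    """Lower is better. assignment[i] = VM index chosen for task i;
    exec_time/exec_cost are (n_tasks, n_vms) matrices; weights is the
    user-defined (w_time, w_cost, w_load) triple."""
    tasks = np.arange(len(assignment))
    per_task_time = exec_time[tasks, assignment]
    vm_load = np.bincount(assignment, weights=per_task_time,
                          minlength=exec_time.shape[1])
    makespan = vm_load.max()                         # time objective
    total_cost = exec_cost[tasks, assignment].sum()  # cost objective
    imbalance = vm_load.std()                        # load-balancing objective
    w_time, w_cost, w_load = weights
    return w_time * makespan + w_cost * total_cost + w_load * imbalance

rng = np.random.default_rng(0)
T = rng.uniform(1, 10, (50, 4))   # 50 tasks, 4 VMs
C = rng.uniform(1, 5, (50, 4))
a = rng.integers(0, 4, 50)        # one candidate schedule
print(fitness(a, T, C, weights=(0.5, 0.3, 0.2)))
```

The WOA search loop would then perturb candidate schedules and keep those with lower fitness, with the weight triple exposed to the user.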
{"title":"User-defined weight based multi objective task scheduling in cloud using whale optimization algorithm","authors":"Swati Gupta, Ravi Shankar Singh","doi":"10.1016/j.simpat.2024.102915","DOIUrl":"https://doi.org/10.1016/j.simpat.2024.102915","url":null,"abstract":"<div><p>Cloud computing has revolutionized the IT landscape, providing scalable, on-demand computing resource. For efficiency in cloud environments, it is essential for modern organizations, where objectives often include cost reduction, resource consumption, operational efficiency and load balancing etc, to implement multi objective solutions. Single-objective systems can fail in handling dynamic and diverse workloads. This study introduces the Multi-Objective Whale Optimization-Based Scheduler (WOA-Scheduler) for efficient task scheduling in cloud computing environments. Leveraging the Whale Optimization Algorithm (WOA), the scheduler optimizes multiple objectives simultaneously, including cost, time, and load balancing. A key feature of the WOA-Scheduler is its flexibility in accommodating user-defined weights for different objectives, allowing organizations to prioritize optimization goals based on their specific requirements. Comparative analysis across various cloud environments demonstrates the superiority of the WOA-Scheduler over traditional single-objective approaches. By achieving a better balance between cost, time, and resource utilization, the scheduler enhances overall performance. Moreover, its multi-objective optimization capabilities enable dynamic adjustment of task assignments in response to changing workload conditions, ensuring efficient resource utilization and workload distribution. Overall, the WOA-Scheduler offers a customizable and adaptable solution for addressing the complexities of modern cloud services, ultimately improving performance and efficiency.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102915"},"PeriodicalIF":4.2,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140014179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modular deep learning-based network intrusion detection architecture for real-world cyber-attack simulation
Pub Date: 2024-02-29 | DOI: 10.1016/j.simpat.2024.102916
Vladimir Ciric , Marija Milosevic , Danijel Sokolovic , Ivan Milentijevic
In an increasingly digitalized world, cybersecurity has emerged as a critical component of safeguarding sensitive information and infrastructure from malicious threats. Threat actors often keep pace with, or stay one step ahead of, the defense, making security teams increasingly reliant on artificial intelligence when trying to detect zero-day attacks. However, most of the cybersecurity solutions based on artificial intelligence that can be found in the literature are trained and tested on reference datasets that are at least five or more years old, which gives only a vague picture of their security performance. Moreover, they often tend to be designed as isolated, self-focused components. The aim of this paper is to design and implement a modular network intrusion detection architecture capable of simulating cyberattacks based on real-world scenarios while evaluating its defense capabilities. The architecture is designed as a full pipeline from real-time network data collection and transformation to threat-information presentation and visualization, with a pre-trained artificial intelligence module at its core. Well-known components like CICFlowMeter, Prometheus, and Grafana are used and modified to fit our data preparation and core modules to form the proposed architecture for real-world network traffic security monitoring. To simulate cyberattacks, the proposed architecture is situated within a virtual environment, with a Kali Linux-based penetration simulation agent on one side and a vulnerable agent on the other. The intrusion detection artificial intelligence module is trained on the CICIDS-2017 dataset, and it is demonstrated using the proposed architecture that, despite being trained on an outdated dataset, the trained module is still effective in detecting sophisticated modern attacks. Two case studies are given to illustrate how modular architectures and virtual environments can be valuable tools to assess the security properties of artificial intelligence-based solutions through simulation in real-world scenarios.
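As a rough illustration of the inference step in such a pipeline, the sketch below loads a pretrained classifier and scores CICFlowMeter-style flow records. The file names, the joblib/scikit-learn model format, and the column names are hypothetical; the paper's deep learning module and its exact integration with Prometheus/Grafana are not reproduced here.

```python
import joblib
import pandas as pd

# Hypothetical artifacts: a classifier trained offline on CICIDS-2017
# flow features, and a CSV of live flows exported by CICFlowMeter.
model = joblib.load("ids_model.joblib")
flows = pd.read_csv("live_flows.csv")

# Drop identifier columns if present; the remaining columns are assumed
# to match the feature set the model was trained on.
features = flows.drop(columns=["Flow ID", "Timestamp", "Label"],
                      errors="ignore")
flows["prediction"] = model.predict(features)

# Per-class counts; non-benign predictions would feed the alerting and
# visualization stages of the pipeline.
print(flows["prediction"].value_counts())
```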
{"title":"Modular deep learning-based network intrusion detection architecture for real-world cyber-attack simulation","authors":"Vladimir Ciric , Marija Milosevic , Danijel Sokolovic , Ivan Milentijevic","doi":"10.1016/j.simpat.2024.102916","DOIUrl":"10.1016/j.simpat.2024.102916","url":null,"abstract":"<div><p>In an increasingly digitalized world, cybersecurity has emerged as a critical component of safeguarding sensitive information and infrastructure from malicious threats. The threat actors are often in line or even one step ahead of the defense, causing the increasing reliance of security teams on artificial intelligence while trying to detect zero-day attacks. However, most of the cybersecurity solutions based on artificial intelligence that can be found in the literature are trained and tested on reference datasets that are at least five or more years old, which gives a vague insight into their security performances. Moreover, they often tend to be designed as isolated, self-focused components. The aim of this paper is to design and implement a modular network intrusion detection architecture capable of simulating cyberattacks based on real-world scenarios while evaluating its defense capabilities. The architecture is designed as a full pipeline from real-time network data collection and transformation to threat-information presentation and visualization, with a pre-trained artificial intelligence module at its core. Well-known components like CICFlowMeter, Prometheus, and Grafana are used and modified to fit our data preparation and core modules to form the proposed architecture for real-world network traffic security monitoring. For the sake of cyberattack simulation, the proposed architecture is situated within a virtual environment, surrounded by the Kali Linux-based penetration simulation agent on one side and a vulnerable agent on the other. The intrusion detection artificial intelligence module is trained on the CICIDS-2017 dataset, and it is demonstrated using the proposed architecture that, despite being trained on an outdated dataset, the trained module is still effective in detecting sophisticated modern attacks. Two case studies are given to illustrate how modular architectures and virtual environments can be valuable tools to assess the security properties of artificial intelligence-based solutions through simulation in real-world scenarios.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102916"},"PeriodicalIF":4.2,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140016946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling and experimental validation of twin lip balanced vane pump considering micromotions, contact mechanics, and lubricating interfaces
Pub Date: 2024-02-23 | DOI: 10.1016/j.simpat.2024.102914
Zubin Mistry , Andrea Vacca , Sri Krishna Uppaluri
This paper presents a model formulation for balanced twin lip vane pumps and an experimental activity to validate the model. The simulation model begins with a geometrical module that preprocesses the CAD drawings of a given unit. The model then performs a fluid dynamic analysis using a lumped-parameter formulation to solve for the pressures inside properly defined control volumes within the unit. The fluid dynamic model is solved simultaneously with a motion module that evaluates the planar motions of the vanes using Newton’s law of motion and with a lubricating interface solver based on the Reynolds equation. Contact dynamics formulations and elastohydrodynamic relations are applied at the vane locations in contact with the cam ring. The comparison with experimental results highlights a good match in volumetric and hydromechanical efficiencies. The measured outlet pressure ripple matches the simulated one for all tested speeds and pressures. The paper also shows a breakdown of the distribution of volumetric and power losses arising from various components of the machine. The proposed methodology is computationally inexpensive, so it can be used in future design and optimization studies aimed at improving the performance of such units.
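The lumped-parameter fluid dynamic formulation referenced above typically rests on a pressure build-up equation for each control volume. The paper's exact equations are not given in the abstract; the common textbook form of this ODE is

```latex
\frac{\mathrm{d}p_i}{\mathrm{d}t}
  = \frac{\beta}{V_i(\theta)}
    \left( \sum_j Q_{ij} - \frac{\mathrm{d}V_i}{\mathrm{d}\theta}\,\omega \right)
```

where \(p_i\) and \(V_i\) are the pressure and instantaneous volume of control volume \(i\), \(\beta\) is the fluid bulk modulus, \(Q_{ij}\) are the net flows exchanged with neighboring volumes, \(\theta\) is the shaft angle, and \(\omega\) the shaft speed. Solving one such equation per control volume, coupled with the vane motion and Reynolds-equation solvers, yields the pressure traces compared against the measured outlet ripple.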
{"title":"Modeling and experimental validation of twin lip balanced vane pump considering micromotions, contact mechanics, and lubricating interfaces","authors":"Zubin Mistry , Andrea Vacca , Sri Krishna Uppaluri","doi":"10.1016/j.simpat.2024.102914","DOIUrl":"https://doi.org/10.1016/j.simpat.2024.102914","url":null,"abstract":"<div><p>This paper presents a model formulation for balanced twin lip vane pumps and an experimental activity to validate the model. The simulation model begins with a geometrical module that preprocesses the CAD drawings of a given unit. The model then performs a fluid dynamic analysis using a lumped-parameter formulation to solve for the pressures inside properly defined control volumes within the unit. The fluid dynamic model is solved simultaneously with a motion module that evaluates the planar motions of the vanes using Newton’s law of motion and with a lubricating interface solver based on the Reynolds equation. Contact dynamics formulations and elastohydrodynamic relations are applied at the vane locations in contact with the cam ring. The comparison with experimental results highlights a good match in volumetric and hydromechanical efficiencies. The measured outlet pressure ripple matches the simulated one for all tested speeds and pressures. The paper also shows a breakdown of the distribution of volumetric and power losses arising from various components of the machine. The proposed methodology is computationally inexpensive, so it can be used in future design and optimization studies aimed at improving the performance of such units.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102914"},"PeriodicalIF":4.2,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139985373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient derivation of optimal signal schedules for multimodal intersections
Pub Date: 2024-02-20 | DOI: 10.1016/j.simpat.2024.102912
Nicola Bertocci, Laura Carnevali, Leonardo Scommegna, Enrico Vicario
Tramways decrease time, cost, and environmental impact of urban transport, while requiring multimodal intersections where trams arriving with nominal periodic timetables may have right of way over road vehicles. Quantitative evaluation of stochastic models enables early exploration and online adaptation of design choices, identifying operational parameters that mitigate impact on road transport performance.
We present an efficient analytical approach for offline scheduling of traffic signals at multimodal intersections among road traffic flows and tram lines with right of way, minimizing the maximum expected percentage of queued vehicles of each flow with respect to sequence and duration of phases. To this end, we compute the expected queue size over time of each vehicle flow through a compositional approach, decoupling analyses of tram and road traffic. On the one hand, we define microscopic models of tram traffic, capturing periodic tram departures, bounded delays, and travel times with general (i.e., non-exponential) distribution with bounded support, and open to representing arrival and travel processes estimated from operational data. On the other hand, we define macroscopic models of road transport flows as finite-capacity vacation queues, with general vacation times determined by the transient probability that the intersection is available for vehicles, efficiently evaluating the exact expected queue size over time. We show that the distribution of the expected queue size of each flow at multiples of the hyper-period, resulting from the temporization of nominal tram arrivals and vehicle traffic signals, reaches a steady state within a few hyper-periods. Therefore, transient analysis starting from this steady-state distribution and lasting for the hyper-period duration turns out to be sufficient to characterize road transport behavior over time intervals of arbitrary duration.
We implemented the proposed approach in the novel OMNIBUS Java library and compared it against Simulation of Urban MObility (SUMO). Experimental results on case studies of realistic complexity with time-varying parameters show the approach's effectiveness at identifying optimal traffic signal schedules, notably exploring in a few minutes hundreds of schedules that require tens of hours in SUMO.
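For intuition only: the hyper-period mentioned above is the least common multiple of the tram timetable period and the signal cycle, and the expected queue of a road flow can be traced across it. The sketch below pairs `math.lcm` with a crude deterministic fluid approximation of the queue; the paper's actual method is the transient analysis of finite-capacity vacation queues, not this simplification, and all rates here are illustrative.

```python
from math import lcm

def hyperperiod(tram_period_s: int, cycle_s: int) -> int:
    # Smallest horizon after which tram arrivals and signal phases realign.
    return lcm(tram_period_s, cycle_s)

def fluid_queue(arrival_rate, service_rate, available, horizon, dt=1.0, q0=0.0):
    # Fluid approximation: vehicles accumulate continuously and are served
    # only while the intersection is available to them (available(t) -> True).
    q, t, trace = q0, 0.0, []
    while t < horizon:
        q += arrival_rate * dt
        if available(t):
            q = max(0.0, q - service_rate * dt)
        trace.append(q)
        t += dt
    return trace

H = hyperperiod(300, 90)  # e.g. trams every 5 min, 90 s signal cycle -> 900 s
trace = fluid_queue(0.2, 0.6, lambda t: (t % 90) < 45, H)
print(f"hyper-period {H} s, peak queue ~{max(trace):.1f} vehicles")
```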
{"title":"Efficient derivation of optimal signal schedules for multimodal intersections","authors":"Nicola Bertocci, Laura Carnevali, Leonardo Scommegna, Enrico Vicario","doi":"10.1016/j.simpat.2024.102912","DOIUrl":"10.1016/j.simpat.2024.102912","url":null,"abstract":"<div><p>Tramways decrease time, cost, and environmental impact of urban transport, while requiring multimodal intersections where trams arriving with nominal periodic timetables may have right of way over road vehicles. Quantitative evaluation of stochastic models enables early exploration and online adaptation of design choices, identifying operational parameters that mitigate impact on road transport performance.</p><p>We present an efficient analytical approach for offline scheduling of traffic signals at multimodal intersections among road traffic flows and tram lines with right of way, minimizing the maximum expected percentage of queued vehicles of each flow with respect to sequence and duration of phases. To this end, we compute the expected queue size over time of each vehicle flow through a compositional approach, decoupling analyses of tram and road traffic. On the one hand, we define microscopic models of tram traffic, capturing periodic tram departures, bounded delays, and travel times with general (i.e., non-Exponential) distribution with bounded support, open to represent arrival and travel processes estimated from operational data. On the other hand, we define macroscopic models of road transport flows as finite-capacity vacation queues, with general vacation times determined by the transient probability that the intersection is available for vehicles, efficiently evaluating the exact expected queue size over time. We show that the distribution of the expected queue size of each flow at multiples of the hyperperiod, resulting from temporization of nominal tram arrivals and vehicle traffic signals, reaches a steady state within few hyper-periods. Therefore, transient analysis starting from this steady-state distribution and lasting for the hyper-period duration turns out to be sufficient to characterize road transport behavior over time intervals of arbitrary duration.</p><p>We implemented the proposed approach in the novel OMNIBUS Java library, and we compared against Simulation of Urban MObility (SUMO). Experimental results on case studies of real complexity with time-varying parameters show the approach effectiveness at identifying optimal traffic signal schedules, notably exploring in few minutes hundreds of schedules requiring tens of hours in SUMO.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102912"},"PeriodicalIF":4.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1569190X24000261/pdfft?md5=192754ce41fc68c4420f5a2ae4093a81&pid=1-s2.0-S1569190X24000261-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139926477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why it does not work? Metaheuristic task allocation approaches in Fog-enabled Internet of Drones
Pub Date: 2024-02-20 | DOI: 10.1016/j.simpat.2024.102913
Saeed Javanmardi , Georgia Sakellari , Mohammad Shojafar , Antonio Caruso
Several scenarios that use Internet of Drones (IoD) networks require a Fog paradigm, where Fog devices provide time-sensitive functionality such as task allocation, scheduling, and resource optimization. The problem of efficient task allocation/scheduling is critical for optimizing Fog-enabled Internet of Drones performance. In recent years, many articles have employed metaheuristic approaches for task scheduling/allocation in Fog-enabled IoT-based scenarios, focusing on network usage and delay but neglecting execution time. While promising in academia, metaheuristics have many limitations in real-time environments due to their high execution time, resource-intensive nature, increased time complexity, and inherent uncertainty in achieving optimal solutions, as supported by empirical studies, case studies, and benchmarking data. We propose a task allocation method named F-DTA that is used as the fitness function of two metaheuristic approaches: Particle Swarm Optimization (PSO) and the Krill Herd Algorithm (KHA). We compare our proposed method by simulation using the iFogSim2 simulator, keeping all settings the same for a fair evaluation and focusing only on execution time. The results confirm its superior execution-time performance compared to the metaheuristics.
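For context, a compact PSO of the kind such a fitness function plugs into is sketched below. Since the abstract does not define F-DTA, a simple makespan over task-to-device assignments stands in as a placeholder fitness, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_devices, n_particles, iters = 30, 5, 20, 100
T = rng.uniform(1.0, 5.0, size=(n_tasks, n_devices))  # task-device exec times

def fitness(pos):
    # Decode a continuous position into one device index per task, then
    # score by makespan (placeholder for F-DTA, whose exact form is not
    # given in the abstract).
    a = np.clip(pos.astype(int), 0, n_devices - 1)
    load = np.bincount(a, weights=T[np.arange(n_tasks), a], minlength=n_devices)
    return load.max()

# Standard PSO update: velocity blends inertia, cognitive, and social terms.
x = rng.uniform(0, n_devices, size=(n_particles, n_tasks))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
g = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0, n_devices - 1e-9)
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[pbest_f.argmin()].copy()
print("best makespan:", pbest_f.min())
```

The inner loop's repeated fitness evaluations are exactly the execution-time cost the paper argues makes metaheuristics impractical for real-time Fog/IoD settings.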
{"title":"Why it does not work? Metaheuristic task allocation approaches in Fog-enabled Internet of Drones","authors":"Saeed Javanmardi , Georgia Sakellari , Mohammad Shojafar , Antonio Caruso","doi":"10.1016/j.simpat.2024.102913","DOIUrl":"10.1016/j.simpat.2024.102913","url":null,"abstract":"<div><p>Several scenarios that use the Internet of Drones (IoD) networks require a Fog paradigm, where the Fog devices, provide time-sensitive functionality such as task allocation, scheduling, and resource optimization. The problem of efficient task allocation/scheduling is critical for optimizing Fog-enabled Internet of Drones performance. In recent years, many articles have employed meta-heuristic approaches for task scheduling/allocation in Fog-enabled IoT-based scenarios, focusing on network usage and delay, but neglecting execution time. While promising in the academic area, metaheuristic have many limitations in real-time environments due to their high execution time, resource-intensive nature, increased time complexity, and inherent uncertainty in achieving optimal solutions, as supported by empirical studies, case studies, and benchmarking data. We propose a task allocation method named F-DTA that is used as the fitness function of two metaheuristic approaches: Particle Swarm Optimization (PSO) and The Krill Herd Algorithm (KHA). We compare our proposed method by simulation using the iFogSim2 simulator, keeping all the settings the same for a fair evaluation and only focus on the execution time. The results confirm its superior performance in execution time, compared to the metaheuristics.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102913"},"PeriodicalIF":4.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1569190X24000273/pdfft?md5=4a684cbf1f3922d096dcb3ab0bd3aefb&pid=1-s2.0-S1569190X24000273-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139926503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A service-oriented framework for large-scale documents processing and application via 3D models and feature extraction
Pub Date: 2024-02-09 | DOI: 10.1016/j.simpat.2024.102903
Qiang Chen, Yinong Chen, Cheng Zhan, Wu Chen, Zili Zhang, Sheng Wu
Educational big data analysis is facilitated by the significant amount of unstructured data found in education institutions. Python has various toolkits for both structured and unstructured data processing. However, its ability to process large-scale data is limited. On the other hand, Spark is a big data processing framework, but it lacks the toolkits needed for processing unstructured rich text documents, 3D models, and images. In this study, we develop a generic framework that integrates Python toolkits and Spark based on service-oriented architecture. The framework automatically extends serial algorithms written in Python into distributed algorithms to accomplish parallel processing tasks seamlessly. First, we focus on non-intrusive deployment to Spark servers and on running Python code in the Spark environment to process rich text documents. Second, we propose a compression-based schema to address the poor performance of small-sized files in HDFS. Finally, we design a generic model that can process different types of poly-structured data such as 3D models and images. We publish the services used in the system over HTTPS so that they can be shared to construct different systems. The framework is evaluated through simulation experiments using large-scale rich text documents, 3D models, and images. According to the results, the framework is 49 times faster than standalone Python-docx in simulations extracting 232 GB of docx files on eight physical nodes with 128 cores. The speedup reaches about 89 times after the compression schema is applied. In addition, simulations of 3D model descriptor extraction achieve a speedup of about 116 times. In the large-scale image HOG feature extraction task of up to 256.7 GB (6,861,024 images), a speedup of up to 110 times is achieved.
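A minimal sketch of the core idea, distributing an unchanged serial python-docx routine with PySpark: the driver parallelizes a list of file paths and each worker runs the serial extractor on its share. The paths and partition count are hypothetical, and the sketch glosses over HDFS access and the paper's compression schema.

```python
from pyspark.sql import SparkSession

def extract_text(path):
    # Serial python-docx routine, unchanged; Spark only distributes it.
    from docx import Document  # imported on the worker
    return path, "\n".join(p.text for p in Document(path).paragraphs)

spark = SparkSession.builder.appName("docx-extract").getOrCreate()
paths = ["docs/a.docx", "docs/b.docx"]  # hypothetical; in practice, a file listing
results = (spark.sparkContext
           .parallelize(paths, numSlices=8)  # spread paths across workers
           .map(extract_text)
           .collect())
```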
{"title":"A service-oriented framework for large-scale documents processing and application via 3D models and feature extraction","authors":"Qiang Chen , Yinong Chen , Cheng Zhan , Wu Chen , Zili Zhang , Sheng Wu","doi":"10.1016/j.simpat.2024.102903","DOIUrl":"10.1016/j.simpat.2024.102903","url":null,"abstract":"<div><p>Educational big data analysis is facilitated by the significant amount of unstructured data found in education institutions. Python has various toolkits for both structured and unstructured data processing. However, its ability for processing large-scale data is limited. On the other hand, Spark is a big data processing framework, but it does not have the needed toolkits for processing unstructured rich text documents, 3D model and image processing. In this study, we develop a generic framework that integrates Python toolkits and Spark based on service-oriented architecture. The framework automatically extends the serial algorithm written in Python to distributed algorithm to accomplish parallel processing tasks seamlessly. First, our focus is on achieving non-intrusive deployment to Spark servers and how to run Python codes in Spark environment to process rich text documents. Second, we propose a compression-based schema to address the poor performance of small sized files in HDFS. Finally, we design a generic model that can process different types of poly-structured data such as 3D models and images. We published the services used in the system for sharing them at https level for constructing different systems. It is evaluated through simulation experiments using large-scale rich text documents, 3D models and images. According to the results, the speedup is 49 times faster than the standalone Python-docx in the simulations of extracting 232 GB docx files when eight physical nodes with 128 cores are used. It reaches about 89 times after further compression schema is applied. In addition, simulations for 3D model descriptors' extraction show that the simulation achieves a speedup of about 116 times. In the large-scale image's HOG features extraction simulation task of up to 256.7 GB (6,861,024 images), a speedup of up to 110 times is achieved.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102903"},"PeriodicalIF":4.2,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139892574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial on simulation and modeling using digital twins in mechanical design and in advanced manufacturing technology
Pub Date: 2024-02-08 | DOI: 10.1016/j.simpat.2024.102904
Pantelis G. Nikolakopoulos , Angelos P. Markopoulos
{"title":"Editorial on simulation and modeling using digital twins in mechanical design and in advanced manufacturing technology","authors":"Pantelis G. Nikolakopoulos , Angelos P. Markopoulos","doi":"10.1016/j.simpat.2024.102904","DOIUrl":"10.1016/j.simpat.2024.102904","url":null,"abstract":"","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102904"},"PeriodicalIF":4.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139814549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assignment-simulation model for forklifts in a distribution center with aisle constraints
Pub Date: 2024-02-06 | DOI: 10.1016/j.simpat.2024.102902
Mauricio Becerra-Fernandez , Olga R. Romero , Johanna Trujillo-Diaz , Milton M. Herrera
This study proposes a simulation model for allocating counterbalanced forklifts in a logistics distribution center (LDC) with aisle constraints. Modeling the case study for a consumer goods firm, the performance measures of the logistics operation were calculated and experimental scenarios were proposed to support decision-making regarding the number of forklifts and their productivity. The relevance of this research stems from the gap in the existing literature on improving forklift assignment in massive storage systems with restrictions. The simulation scenarios contribute toward standardizing logistics operations with similar characteristics, starting from the layout stage of an LDC. The designed simulation model demonstrates that the simulated allocation incorporates technical and human resources in warehouse operations. Utilizing discrete-event simulation (DES) as a framework, this study assesses various scenarios in an LDC with forklift restrictions. The hypothesis of the problem was analyzed, and the simulation model was used to characterize system behavior under different scenarios and to guide decision-making processes impacting operational costs and client service levels. This research employs DES to address performance indicators and operational costs, serving as a methodological guide for resource allocation in logistics operations at distribution centers.
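A minimal discrete-event sketch of the kind of scenario described, written with SimPy rather than the authors' tooling (the abstract does not name theirs): pallet jobs arrive at random, queue for a limited pool of forklifts, and waiting times are logged so that scenarios with different fleet sizes can be compared. All rates and the shift length are illustrative.

```python
import random
import simpy

def pallet_job(env, forklifts, wait_log):
    arrive = env.now
    with forklifts.request() as req:       # queue for a free forklift
        yield req
        wait_log.append(env.now - arrive)  # time spent waiting in the aisle
        yield env.timeout(random.expovariate(1 / 4.0))  # ~4 min per move

def arrivals(env, forklifts, wait_log):
    while True:
        yield env.timeout(random.expovariate(1 / 3.0))  # a job every ~3 min
        env.process(pallet_job(env, forklifts, wait_log))

random.seed(1)
env = simpy.Environment()
forklifts = simpy.Resource(env, capacity=2)  # scenario under test: 2 forklifts
wait_log = []
env.process(arrivals(env, forklifts, wait_log))
env.run(until=8 * 60)  # one 8-hour shift, in minutes
print(f"jobs: {len(wait_log)}, mean wait: {sum(wait_log)/len(wait_log):.1f} min")
```

Rerunning with `capacity=3` or different arrival rates mirrors the scenario comparison the study performs for fleet sizing.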
{"title":"Assignment-simulation model for forklifts in a distribution center with aisle constraints","authors":"Mauricio Becerra-Fernandez , Olga R. Romero , Johanna Trujillo-Diaz , Milton M. Herrera","doi":"10.1016/j.simpat.2024.102902","DOIUrl":"10.1016/j.simpat.2024.102902","url":null,"abstract":"<div><p>This study proposes a simulation model for allocating counterbalanced forklifts in a logistics distribution center (LDC) with aisle constraints. Modeling the case study for a consumer goods firm, the performance measures of the logistics operation were calculated and certain experimental scenarios were purposed for decision-making regarding the number of forklifts and their productivity. The relevance of this research is validated by the gap in existing literature on enhancing forklift assignments in massive storage systems with restrictions. The simulation scenarios contribute toward standardizing logistics operations with similar characteristics, starting from the layout stage of an LDC. The designed simulation model demonstrates that the simulated allocation incorporates technical and human resources in warehouse operations. Utilizing discrete-event simulation (DES) as a framework, this study assesses various scenarios in an LDC with restrictions on the forklift. The hypothesis of the problem was analyzed, and the simulation model was used to characterize the system behavior under different scenarios and guide the decision-making processes impacting operational costs and client service levels. This research employs DES to address performance indicators and operational costs, serving as a methodological guide for resource allocation in logistics operations at distribution centers.</p></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"133 ","pages":"Article 102902"},"PeriodicalIF":4.2,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1569190X24000169/pdfft?md5=99d00dd4dbcf204d4bf0e03f7784de16&pid=1-s2.0-S1569190X24000169-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139885677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}