
Latest publications in the Journal of Grid Computing

Assessing the Complexity of Cloud Pricing Policies: A Comparative Market Analysis
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-12 | DOI: 10.1007/s10723-024-09780-4
Vasiliki Liagkou, George Fragiadakis, Evangelia Filiopoulou, Christos Michalakelis, Anargyros Tsadimas, Mara Nikolaidou

Cloud computing has gained popularity at a breakneck pace over the last few years. It has revolutionized the way businesses operate by providing flexible, scalable infrastructure for their computing needs. Cloud providers offer a range of services under a variety of pricing schemes. These schemes are based on functional factors such as CPU, RAM, and storage, combined with different payment options (such as pay-per-use and subscription-based) and non-functional aspects such as scalability and availability. Because cloud pricing can be complicated, it is critical for businesses to thoroughly assess and compare pricing policies alongside technical requirements when designing an investment strategy. This paper evaluates current pricing strategies for IaaS, CaaS, and PaaS cloud services, focusing on the three leading cloud providers: Amazon, Microsoft, and Google. To compare pricing policies across services and providers, a hedonic price index is constructed for each service type from data collected in 2022, making a comparative analysis between them feasible. The results reveal that providers follow the same pricing pattern for IaaS and CaaS, with CPU being the main driver of cloud pricing schemes, whereas PaaS pricing varies across providers.
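As an illustration of the hedonic approach described above, the sketch below fits a log-log hedonic price equation to a handful of made-up VM offerings (the figures are invented for the example, not the paper's 2022 dataset): the regression coefficients act as implicit prices of CPU, RAM, and storage.

```python
import numpy as np

# Hypothetical VM offerings: (vCPUs, RAM GB, storage GB, monthly price $).
# Invented figures for illustration -- not the paper's 2022 dataset.
offers = np.array([
    [2.0,   4.0, 100.0,  37.0],
    [4.0,   8.0, 100.0,  66.0],
    [8.0,  16.0, 200.0, 128.0],
    [16.0, 32.0, 800.0, 270.0],
    [4.0,  16.0, 200.0,  85.0],
    [8.0,  32.0, 400.0, 160.0],
])

# Log-log hedonic equation:
#   log(price) = b0 + b1*log(CPU) + b2*log(RAM) + b3*log(storage)
X = np.column_stack([np.ones(len(offers)), np.log(offers[:, :3])])
y = np.log(offers[:, 3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each coefficient is the price elasticity with respect to that factor;
# comparing them shows which characteristic drives the price index.
print(dict(zip(["const", "cpu", "ram", "storage"], beta.round(3))))
```

Comparing such indices across providers and service types is what makes the paper's cross-market analysis feasible.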

Citations: 0
A Quasi-Oppositional Learning-based Fox Optimizer for QoS-aware Web Service Composition in Mobile Edge Computing
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-31 | DOI: 10.1007/s10723-024-09779-x
Ramin Habibzadeh Sharif, Mohammad Masdari, Ali Ghaffari, Farhad Soleimanian Gharehchopogh

Web service-based edge computing networks are now ubiquitous, and their user bases are growing dramatically. Network users request various services with specific Quality-of-Service (QoS) requirements. QoS-aware Web Service Composition (WSC) methods assign available services to users' tasks and significantly affect user satisfaction. Various methods have been proposed to solve the QoS-aware WSC problem; however, it remains an active research area because the dimensions of these networks, the number of their users, and the variety of offered services keep growing rapidly. This study presents an enhanced Fox Optimizer (FOX)-based framework, EQOLFOX, to solve QoS-aware web service composition problems in edge computing environments. Quasi-Oppositional Learning is employed in EQOLFOX to diminish the zero-orientation bias of the FOX algorithm, and a reinitialization strategy is included to enhance exploration. A new phase with two new movement strategies improves search ability, a multi-best strategy helps the population escape local optima, and a greedy selection approach boosts the convergence rate and exploitation capability. EQOLFOX is applied to ten real-life and synthetic web-service-based edge computing environments, each with four different task counts, to evaluate its proficiency. The obtained results are compared numerically and visually with the DO, FOX, JS, MVO, RSA, SCA, SMA, and TSA algorithms. The experimental results indicate the effectiveness of the contributions and the competitiveness of EQOLFOX.
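A minimal sketch of the quasi-oppositional idea mentioned above, under the usual definition from opposition-based learning (not necessarily EQOLFOX's exact operator): for a candidate x in [lo, hi], the opposite point is lo + hi - x, and the quasi-opposite is drawn uniformly between the interval centre and that opposite; keeping the fitter of each candidate/quasi-opposite pair diversifies the population.

```python
import random

def quasi_opposite(x, lo, hi, rng=random):
    """Return a quasi-opposite point of x within [lo, hi].

    The opposite of x is lo + hi - x; the quasi-opposite is drawn
    uniformly between the interval centre and that opposite point,
    biasing the search away from x without losing randomness.
    """
    centre = (lo + hi) / 2.0
    opposite = lo + hi - x
    a, b = sorted((centre, opposite))
    return rng.uniform(a, b)

def qol_step(pop, lo, hi, fitness, rng=random):
    """For each candidate, also evaluate its quasi-opposite and keep
    the fitter of the two (minimisation)."""
    out = []
    for x in pop:
        q = [quasi_opposite(xi, l, h, rng) for xi, l, h in zip(x, lo, hi)]
        out.append(min(x, q, key=fitness))
    return out

# Toy usage: minimise the sphere function on [-5, 5]^2.
random.seed(0)
sphere = lambda v: sum(t * t for t in v)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(4)]
better = qol_step(pop, [-5, -5], [5, 5], sphere)
assert all(sphere(b) <= sphere(p) for b, p in zip(better, pop))
```

By construction the surviving population is never worse than the original, which is why such steps are popular for initialisation and periodic restarts.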

Citations: 0
WIDESim: A Toolkit for Simulating Resource Management Techniques Of Scientific Workflows in Distributed Environments with Graph Topology
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-13 | DOI: 10.1007/s10723-024-09778-y
Mohammad Amin Rayej, Hajar Siar, Ahmadreza Hamzei, Mohammad Sadegh Majidi Yazdi, Parsa Mohammadian, Mohammad Izadi

Modeling IoT applications in distributed computing systems as workflows enables automating their execution, and the literature covers many types of workflow-based applications. Executing IoT applications through device-to-device (D2D) communications in distributed computing systems, especially in edge paradigms, requires direct communication between devices in a network with a graph topology. This paper introduces WIDESim, a toolkit for simulating resource management of scientific workflows with different structures in distributed environments with graph topology. The simulator supports dynamic resource management and scheduling. We validated WIDESim's behavior against standard simulators and evaluated its performance in real-world distributed computing scenarios. The results indicate that WIDESim's performance is close to that of existing standard simulators while offering additional capabilities, and that its extended features perform satisfactorily.
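Workflow execution order on such a DAG can be sketched with Kahn's topological sort, the standard way a workflow engine or simulator releases a task once all of its predecessors have completed. The task names below are hypothetical, and this is not WIDESim's API, just the underlying idea.

```python
from collections import deque

# A toy workflow DAG: task -> list of successor tasks (edges follow data flow).
workflow = {
    "ingest": ["filter", "align"],
    "filter": ["merge"],
    "align":  ["merge"],
    "merge":  ["report"],
    "report": [],
}

def topo_order(dag):
    """Kahn's algorithm: repeatedly release tasks whose predecessors have
    all completed -- the order in which an engine may dispatch them."""
    indeg = {t: 0 for t in dag}
    for succs in dag.values():
        for s in succs:
            indeg[s] += 1
    ready = deque(t for t, k in indeg.items() if k == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in dag[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    if len(order) != len(dag):
        raise ValueError("cycle detected: not a DAG")
    return order

print(topo_order(workflow))  # 'ingest' is dispatched first, 'report' last
```

A simulator layers timing, placement, and network topology on top of this dependency-release loop.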

Citations: 0
CMK: Enhancing Resource Usage Monitoring across Diverse Bioinformatics Workflow Management Systems
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1007/s10723-024-09777-z
Robert Nica, Stefan Götz, Germán Moltó

The increasing use of multiple Workflow Management Systems (WMS) employing various workflow languages and shared workflow repositories enhances the open-source bioinformatics ecosystem. Efficient resource utilization in these systems is crucial for keeping costs low and improving processing times, especially for large-scale bioinformatics workflows running in cloud environments. Recognizing this, our study introduces a novel reference architecture, Cloud Monitoring Kit (CMK), for a multi-platform monitoring system. Our solution is designed to generate uniform, aggregated metrics from containerized workflow tasks scheduled by different WMS. Central to the proposed solution is the use of task labeling methods, which enable convenient grouping and aggregation of metrics independent of the WMS employed. This approach builds upon existing technology, providing the additional benefits of modularity and the capacity to integrate seamlessly with other data processing or collection systems. We have developed and released an open-source implementation of our system, which we evaluated on Amazon Web Services (AWS) using a transcriptomics data analysis workflow executed on two scientific WMS. The findings indicate that CMK provides valuable insights into resource utilization. In doing so, it paves the way for more efficient resource management in containerized scientific workflows running in public cloud environments and provides a foundation for optimizing task configurations, reducing costs, and enhancing scheduling decisions. Overall, our solution addresses the immediate needs of bioinformatics workflows and offers a scalable, adaptable framework for future advancements in cloud-based scientific computing.
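The label-based aggregation idea can be sketched as follows: each per-container metric sample carries a task label, and samples are grouped by that label regardless of which WMS launched the container. The field names and aggregation rules (total CPU seconds, peak memory) are illustrative assumptions, not CMK's actual schema.

```python
from collections import defaultdict

# Hypothetical metric samples from containerized tasks; the "label" field is
# the WMS-independent task label that makes cross-platform grouping possible.
samples = [
    {"label": "align", "cpu_s": 12.0, "mem_mb": 800},
    {"label": "align", "cpu_s": 14.5, "mem_mb": 760},
    {"label": "index", "cpu_s": 3.2,  "mem_mb": 300},
]

def aggregate(samples):
    """Aggregate metrics per task label: total CPU seconds and peak memory."""
    agg = defaultdict(lambda: {"cpu_s": 0.0, "mem_mb_peak": 0})
    for s in samples:
        a = agg[s["label"]]
        a["cpu_s"] += s["cpu_s"]
        a["mem_mb_peak"] = max(a["mem_mb_peak"], s["mem_mb"])
    return dict(agg)

print(aggregate(samples))
```

Because grouping keys off the label rather than any WMS-specific identifier, the same aggregation works for tasks launched by any workflow engine.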

Citations: 0
Resource Utilization Based on Hybrid WOA-LOA Optimization with Credit Based Resource Aware Load Balancing and Scheduling Algorithm for Cloud Computing
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-25 | DOI: 10.1007/s10723-024-09776-0
Abhikriti Narwal

In a cloud computing environment, tasks are divided among virtual machines (VMs) with different start times, durations, and execution periods. Distributing these loads among the VMs is therefore crucial: to maximize resource utilization and enhance system performance, load balancing must keep all VMs balanced. The proposed framework introduces a credit-based resource-aware load balancing scheduling algorithm (HO-CB-RALB-SA) built on a hybrid of the Walrus Optimization Algorithm (WOA) and the Lyrebird Optimization Algorithm (LOA) for cloud computing, developed by jointly performing load balancing and task scheduling. This article extends credit-based load-balancing ideas by integrating a resource-aware strategy with the scheduling algorithm. A resource-aware load balancing algorithm keeps the system load balanced by evaluating the load and processing capacity of every VM. The method operates in two main stages, with scheduling dependent on each VM's processing power: supply-and-demand criteria determine which VM carries the least load, so that jobs can be mapped to it or redistributed from overloaded to underloaded VMs. For efficient resource management and equitable task distribution among VMs, the load balancing method uses a resource-aware optimization algorithm. The credit-based scheduling algorithm then weights the tasks and applies intelligent resource mapping that considers the computational capacity and demand of each resource. The FILL and SPILL functions in the resource-aware load balancer use the hybrid optimization algorithm to facilitate this mapping: user tasks are queued according to task length by the FILL and SPILL scheduler, which operates with the assistance of the PEFT approach.
The optimal threshold values for each VM are selected by evaluating tasks against a fitness function that minimizes makespan and cost, using the hybrid Walrus Optimization Algorithm (WOA) and Lyrebird Optimization Algorithm (LOA). The application was simulated with the CloudSim tool, measuring the QoS parameters Turn Around Time (TAT), resource utilization, Average Response Time (ART), Makespan Time (MST), Total Execution Time (TET), Total Processing Cost (TPC), and Total Processing Time (TPT) for 400, 800, 1200, 1600, and 2000 cloudlets. The performance parameters of the proposed HO-CB-RALB-SA and the existing models were evaluated and compared. For the proposed HO-CB-RALB-SA model with 2000 cloudlets, the following values were found: MST of 526.023 ms, TPT of 12741.79 ms, TPC of $33422.87, TET of 23770.45 ms, ART of 172.32 ms, network utilization of 9593 MB, energy consumption of 28.1, throughput of 79.9 Mbps, TAT of 5 ms, total waiting time of 18.6 ms, and resource utilization of 17.5%. Based on several performance parameters, the simulation results demonstrate that the HO-CB-RALB-SA strategy outperforms the two existing models at utilizing resources efficiently in a cloud environment.
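A hedged sketch of the credit/least-loaded idea described above: tasks earn more credit the shorter they are, are scheduled in credit order, and each is mapped to the currently least-loaded VM. The credit formula and names are illustrative assumptions, not the paper's exact FILL/SPILL procedure.

```python
# Illustrative credit-based, load-aware task mapping (not the paper's exact
# algorithm): shorter tasks earn higher credit and are dispatched first,
# each to whichever VM currently carries the least load.

def credit(task_len, max_len):
    """Shorter tasks earn higher credit, so they are scheduled earlier."""
    return 1.0 - task_len / (max_len + 1.0)

def assign(tasks, n_vms):
    """Greedy credit-ordered mapping: returns per-VM load and a task->VM map."""
    loads = [0.0] * n_vms
    mapping = {}
    max_len = max(tasks.values())
    for name in sorted(tasks, key=lambda t: -credit(tasks[t], max_len)):
        vm = min(range(n_vms), key=loads.__getitem__)  # least-loaded VM
        mapping[name] = vm
        loads[vm] += tasks[name]
    return loads, mapping

# Toy cloudlet lengths (in instructions), echoing the evaluation sizes above.
tasks = {"t1": 400, "t2": 800, "t3": 1200, "t4": 1600, "t5": 2000}
loads, mapping = assign(tasks, 2)
print(loads, mapping)
```

In the full algorithm, an optimizer such as the hybrid WOA-LOA would tune thresholds and trigger redistribution from overloaded to underloaded VMs rather than relying on this one-shot greedy pass.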

Citations: 0
Energy-Constrained DAG Scheduling on Edge and Cloud Servers with Overlapped Communication and Computation
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-02 | DOI: 10.1007/s10723-024-09775-1
Keqin Li

Mobile edge computing (MEC) has been widely applied to numerous areas of human life and modern society. Many such applications can be represented as directed acyclic graphs (DAGs). Device-edge-cloud fusion provides a new kind of heterogeneous, distributed, and collaborative computing environment to support various MEC applications. DAG scheduling is a procedure employed to effectively and efficiently manage and monitor the execution of tasks that have precedence constraints among them. In this paper, we investigate the NP-hard problems of DAG scheduling and energy-constrained DAG scheduling on mobile devices, edge servers, and cloud servers by designing and evaluating new heuristic algorithms. Our contributions to DAG scheduling can be summarized as follows. First, our heuristic algorithms guarantee that all task dependencies are correctly followed by keeping track of the number of remaining predecessors that are still not completed. Second, our heuristic algorithms ensure that all wireless transmissions between a mobile device and edge/cloud servers are performed one after another. Third, our heuristic algorithms allow an edge/cloud server to start executing a task as soon as the transmission of the task has finished. Fourth, we derive a lower bound for the optimal makespan so that the solutions of our heuristic algorithms can be compared with optimal solutions. Our contributions to energy-constrained DAG scheduling can be summarized as follows. First, our heuristic algorithms ensure that the overall computation energy consumption and communication energy consumption do not exceed the given energy constraint. Second, our algorithms adopt an iterative and progressive procedure to determine appropriate computation and wireless communication speeds while generating a DAG schedule that satisfies the energy constraint.
Third, we derive a lower bound for the optimal makespan and evaluate the performance of our heuristic algorithms in such a way that their heuristic solutions are compared with optimal solutions. To the author’s knowledge, this is the first paper that considers DAG scheduling and energy-constrained DAG scheduling on edge and cloud servers with sequential wireless communications and overlapped communication and computation to minimize makespan.
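A lower bound of the kind mentioned above is commonly taken as the maximum of two quantities: the critical-path length with every task running at the fastest available speed (precedence cannot be parallelized away) and the total work divided by the total processing capacity (even perfect load balancing cannot beat it). The sketch below computes both for a toy DAG; it illustrates the general style of bound, not the paper's exact derivation.

```python
def longest_path(time, succ):
    """Length of the longest (critical) path in a DAG, memoised."""
    memo = {}
    def f(t):
        if t not in memo:
            memo[t] = time[t] + max((f(s) for s in succ[t]), default=0.0)
        return memo[t]
    return max(f(t) for t in time)

# Toy DAG: task work in giga-cycles; two servers with speeds in GHz.
work = {"a": 4.0, "b": 2.0, "c": 6.0, "d": 2.0}
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
speeds = [2.0, 1.0]

fastest = max(speeds)
exec_time = {t: w / fastest for t, w in work.items()}  # best-case per task

lb_path = longest_path(exec_time, succ)     # critical-path bound
lb_work = sum(work.values()) / sum(speeds)  # total-capacity bound
lower_bound = max(lb_path, lb_work)
print(lower_bound)  # 6.0: the path a -> c -> d dominates here
```

Comparing a heuristic schedule's makespan against such a bound gives a guaranteed bound on how far it can be from optimal, even when the optimum itself is intractable to compute.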

{"title":"Energy-Constrained DAG Scheduling on Edge and Cloud Servers with Overlapped Communication and Computation","authors":"Keqin Li","doi":"10.1007/s10723-024-09775-1","DOIUrl":"https://doi.org/10.1007/s10723-024-09775-1","url":null,"abstract":"<p>Mobile edge computing (MEC) has been widely applied to numerous areas and aspects of human life and modern society. Many such applications can be represented as directed acyclic graphs (DAG). Device-edge-cloud fusion provides a new kind of heterogeneous, distributed, and collaborative computing environment to support various MEC applications. DAG scheduling is a procedure employed to effectively and efficiently manage and monitor the execution of tasks that have precedence constraints on each other. In this paper, we investigate the NP-hard problems of DAG scheduling and energy-constrained DAG scheduling on mobile devices, edge servers, and cloud servers by designing and evaluating new heuristic algorithms. Our contributions to DAG scheduling can be summarized as follows. First, our heuristic algorithms guarantee that all task dependencies are correctly followed by keeping track of the number of remaining predecessors that are still not completed. Second, our heuristic algorithms ensure that all wireless transmissions between a mobile device and edge/cloud servers are performed one after another. Third, our heuristic algorithms allow an edge/cloud server to start the execution of a task as soon as the transmission of the task is finished. Fourth, we derive a lower bound for the optimal makespan such that the solutions of our heuristic algorithms can be compared with optimal solutions. Our contributions to energy-constrained DAG scheduling can be summarized as follows. First, our heuristic algorithms ensure that the overall computation energy consumption and communication energy consumption does not exceed the given energy constraint. 
Second, our algorithms adopt an iterative and progressive procedure to determine appropriate computation speed and wireless communication speeds while generating a DAG schedule and satisfying the energy constraint. Third, we derive a lower bound for the optimal makespan and evaluate the performance of our heuristic algorithms in such a way that their heuristic solutions are compared with optimal solutions. To the author’s knowledge, this is the first paper that considers DAG scheduling and energy-constrained DAG scheduling on edge and cloud servers with sequential wireless communications and overlapped communication and computation to minimize makespan.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141513872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
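The predecessor-counting device described in the abstract above is the core of classic list scheduling. Below is a minimal sketch: the toy DAG, uniform two-server model, and execution times are invented for illustration, and this is not the paper's algorithm, which additionally models wireless transmission and energy.

```python
from collections import deque

def list_schedule(tasks, succ, exec_time, n_servers):
    """Schedule a DAG: a task becomes ready only when its remaining-
    predecessor count drops to zero; ready tasks go to the earliest-free server."""
    # count remaining (uncompleted) predecessors per task
    pred_left = {t: 0 for t in tasks}
    for u, vs in succ.items():
        for v in vs:
            pred_left[v] += 1
    ready = deque(t for t in tasks if pred_left[t] == 0)
    server_free = [0.0] * n_servers          # next free time per server
    finish = {}
    while ready:
        t = ready.popleft()
        s = min(range(n_servers), key=lambda i: server_free[i])
        # a task may start only after its server is free and all predecessors finish
        start = max(server_free[s],
                    max((finish[p] for p in tasks if t in succ.get(p, ())),
                        default=0.0))
        finish[t] = start + exec_time[t]
        server_free[s] = finish[t]
        for v in succ.get(t, ()):            # release successors
            pred_left[v] -= 1
            if pred_left[v] == 0:
                ready.append(v)
    return max(finish.values())              # makespan

# toy DAG: a -> b, a -> c, b -> d, c -> d
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
times = {"a": 1.0, "b": 2.0, "c": 2.0, "d": 1.0}
makespan = list_schedule(["a", "b", "c", "d"], succ, times, n_servers=2)
```

With two servers, `b` and `c` run in parallel after `a`, giving a makespan of 4.0 on this toy instance.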
Citations: 0
Resource Allocation Using Deep Deterministic Policy Gradient-Based Federated Learning for Multi-Access Edge Computing 利用基于梯度联合学习的深度确定性策略为多接入边缘计算分配资源
IF 5.5 2区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-06-27 DOI: 10.1007/s10723-024-09774-2
Zheyu Zhou, Qi Wang, Jizhou Li, Ziyuan Li

The study focuses on utilizing the computational resources present in vehicles to enhance the performance of multi-access edge computing (MEC) systems. While vehicles are typically equipped with computational services for vehicle-centric Internet of Vehicles (IoV) applications, their resources can also be leveraged to reduce the workload on edge servers and improve task processing speed in MEC scenarios. Previous research efforts have overlooked the potential resource utilization of passing vehicles, which can be a valuable addition to MEC systems alongside parked cars. This study introduces an assisted MEC scenario where a base station (BS) with an edge server serves various devices, parked cars, and vehicular traffic. A cooperative approach using the Deep Deterministic Policy Gradient (DDPG) based Federated Learning method is proposed to optimize resource allocation and job offloading. This method enables the transfer of device operations from devices to the BS or from the BS to vehicles based on specific requirements. The proposed system also considers the duration for which a vehicle can provide job offloading services within the range of the BS before leaving. The objective of the DDPG-FL method is to minimize the overall priority-weighted task computation time. Through simulation results and a comparison with three other schemes, the study demonstrates the superiority of the proposed method in seven different scenarios. The findings highlight the potential of incorporating vehicular resources in MEC systems, showcasing improved task processing efficiency and overall system performance.
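The federated side of DDPG-FL aggregates locally trained policy parameters into a global model. A generic federated-averaging step can be sketched as follows; the toy weight vectors and sample counts are invented, and the authors' actual aggregation rule and network architecture are not specified here.

```python
def fed_avg(local_weights, sizes):
    """Federated averaging: combine per-agent parameter vectors,
    weighting each agent by its local sample count."""
    total = sum(sizes)
    dim = len(local_weights[0])
    return [sum(w[j] * n for w, n in zip(local_weights, sizes)) / total
            for j in range(dim)]

# three agents with toy 2-D actor parameters and unequal data sizes
weights = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
sizes = [1, 1, 2]
global_w = fed_avg(weights, sizes)   # agent 3 contributes half the weight
```

The weighted average here is (1/4)·w1 + (1/4)·w2 + (1/2)·w3 = [0.75, 0.75]; in a full DDPG-FL loop this global vector would be broadcast back to the agents before the next round of local training.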

Citations: 0
Optimizing Resource Consumption and Reducing Power Usage in Data Centers, A Novel Mathematical VM Replacement Model and Efficient Algorithm 优化数据中心的资源消耗并降低能耗,一种新颖的虚拟机替换数学模型和高效算法
IF 5.5 2区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-06-20 DOI: 10.1007/s10723-024-09772-4
Reza Rabieyan, Ramin Yahyapour, Patrick Jahnke

This study addresses the issue of power consumption in virtualized cloud data centers by proposing a virtual machine (VM) replacement model and a corresponding algorithm. The model incorporates multi-objective functions, aiming to optimize VM selection based on weights and minimize resource utilization disparities across hosts. Constraints are incorporated to ensure that CPU utilization remains close to the average CPU usage while mitigating overutilization in memory and network bandwidth usage. The proposed algorithm offers a fast and efficient solution with minimal VM replacements. The experimental simulation results demonstrate significant reductions in power consumption compared with a benchmark model. The proposed model and algorithm have been implemented and operated within a real-world cloud infrastructure, emphasizing their practicality.
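The selection logic — balance CPU utilization across hosts while capping memory — can be illustrated with a greedy placement sketch. The host data, the 0.9 memory cap, and the max-minus-min spread metric are assumptions for the example, not the paper's exact multi-objective model.

```python
def pick_host(hosts, vm_cpu, vm_mem, mem_cap=0.9):
    """Pick a target host for a VM: reject hosts whose memory would exceed
    the cap, then minimize the spread (max - min) of CPU utilization
    across all hosts after the move."""
    best, best_spread = None, float("inf")
    for h in hosts:
        if hosts[h]["mem"] + vm_mem > mem_cap:
            continue                      # would over-utilize memory
        cpu_after = [hosts[x]["cpu"] + (vm_cpu if x == h else 0.0)
                     for x in hosts]
        spread = max(cpu_after) - min(cpu_after)
        if spread < best_spread:
            best, best_spread = h, spread
    return best

hosts = {
    "h1": {"cpu": 0.70, "mem": 0.60},
    "h2": {"cpu": 0.30, "mem": 0.85},   # memory nearly full -> rejected
    "h3": {"cpu": 0.40, "mem": 0.50},
}
target = pick_host(hosts, vm_cpu=0.20, vm_mem=0.10)
```

Here `h2` is excluded by the memory constraint, and `h3` wins because placing the VM there leaves CPU utilization more even (spread 0.40) than placing it on the already-busy `h1` (spread 0.60).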

Citations: 0
EQGSA-DPW: A Quantum-GSA Algorithm-Based Data Placement for Scientific Workflow in Cloud Computing Environment EQGSA-DPW:基于量子-GSA 算法的云计算环境中科学工作流的数据布局
IF 5.5 2区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-06-18 DOI: 10.1007/s10723-024-09771-5
Zaki Brahmi, Rihab Derouiche

The processing of scientific workflows (SW) in geo-distributed cloud computing depends heavily on how massive datasets are placed among tasks. However, data movement across storage services is a main concern in geo-distributed data centers, raising issues of cost and energy consumption for both storage services and the network infrastructure. Aiming to optimize data placement for SW, this paper proposes EQGSA-DPW, a novel algorithm leveraging quantum computing and swarm intelligence optimization to intelligently reduce cost and energy consumption when a SW is processed in a multi-cloud. EQGSA-DPW considers multiple objectives (e.g., transmission bandwidth, and the cost and energy consumption of both service and communication) and improves the GSA algorithm by using the log-sigmoid transfer function as the gravitational constant G and updating agent positions by quantum rotation angle amplitude for greater diversification. Moreover, to assist EQGSA-DPW in finding the optimum, an initial guess is proposed. The performance of the EQGSA-DPW algorithm is evaluated via extensive experiments, which show that the data placement method achieves significantly better performance in terms of cost, energy, and data transfer than competing algorithms. For instance, in terms of energy consumption, EQGSA-DPW on average achieves reductions of up to 25%, 14%, and 40% over the GSA, PSO, and ACO-DPDGW algorithms, respectively. As for storage services cost, EQGSA-DPW values are the lowest.
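A log-sigmoid gravitational "constant" gives GSA strong attraction early (exploration) and weak attraction late (exploitation). One plausible form is sketched below; the exact mapping and parameters used in the paper are not given here, so the scaling is an assumption.

```python
import math

def g_logsigmoid(t, t_max, g0=100.0, k=10.0):
    """Decaying gravitational 'constant' built from a log-sigmoid:
    G is near g0 early in the run and shrinks toward 0 as t -> t_max."""
    x = k * (2.0 * t / t_max - 1.0)   # map iteration index to [-k, k]
    return g0 / (1.0 + math.exp(x))   # logistic decay

g_start = g_logsigmoid(0, 100)    # ~g0: strong early attraction
g_mid = g_logsigmoid(50, 100)     # exactly g0/2 at the midpoint
g_end = g_logsigmoid(100, 100)    # ~0: weak late attraction
```

In a GSA iteration, this G(t) would scale the pairwise attractive forces between agents before velocities and positions are updated.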

Citations: 0
Towards Enhanced Energy Aware Resource Optimization for Edge Devices Through Multi-cluster Communication Systems 通过多集群通信系统实现增强型边缘设备能源意识资源优化
IF 5.5 2区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-06-06 DOI: 10.1007/s10723-024-09773-3
Saihong Li, Yingying Ma, Yusha Zhang, Yinghui Xie

In the realm of the Internet of Things (IoT), the significance of edge devices within multi-cluster communication systems is on the rise. As the quantity of clusters and devices associated with each cluster grows, challenges related to resource optimization emerge. To address these concerns and enhance resource utilization, it is imperative to devise efficient strategies for resource allocation to specific clusters. These strategies encompass the implementation of load-balancing algorithms, dynamic scheduling, and virtualization techniques that generate logical instances of resources within the clusters. Moreover, the implementation of data management techniques is essential to facilitate effective data sharing among clusters. These strategies collectively minimize resource waste, enabling the streamlined management of networking and data resources in a multi-cluster communication system. This paper introduces an energy-efficient resource allocation technique tailored for edge devices in such systems. The proposed strategy leverages a higher-level meta-cluster heuristic to construct an optimization model, aiming to enhance the resource utilization of individual edge nodes. Emphasizing energy consumption and resource optimization while meeting latency requirements, the model employs a graph-based node selection method to assign high-load nodes to optimal clusters. To ensure fairness, resource allocation collaborates with resource descriptions and Quality of Service (QoS) metrics to tailor resource distribution. Additionally, the proposed strategy dynamically updates its parameter settings to adapt to various scenarios. The simulations confirm the superiority of the proposed strategy in different aspects.
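The high-load-first assignment of nodes to clusters under a latency bound can be sketched greedily. The node loads, cluster capacities, and latency threshold below are invented for illustration; the paper's graph-based selection method is more elaborate.

```python
def assign_nodes(nodes, clusters, latency, max_latency=10.0):
    """Greedy assignment: visit nodes heaviest-first and place each on the
    feasible cluster (latency within bound, enough spare capacity) that
    currently has the most spare capacity."""
    placement = {}
    spare = dict(clusters)                      # cluster -> spare capacity
    for node, load in sorted(nodes.items(), key=lambda kv: -kv[1]):
        feasible = [c for c in spare
                    if latency[(node, c)] <= max_latency and spare[c] >= load]
        if not feasible:
            continue                            # leave node unassigned
        c = max(feasible, key=lambda c: spare[c])
        placement[node] = c
        spare[c] -= load
    return placement

nodes = {"n1": 5.0, "n2": 3.0, "n3": 4.0}
clusters = {"c1": 6.0, "c2": 8.0}
latency = {("n1", "c1"): 2.0, ("n1", "c2"): 12.0,  # n1 too far from c2
           ("n2", "c1"): 3.0, ("n2", "c2"): 4.0,
           ("n3", "c1"): 5.0, ("n3", "c2"): 6.0}
placement = assign_nodes(nodes, clusters, latency)
```

The heaviest node `n1` is forced onto `c1` by the latency bound, after which `c1` lacks capacity for the rest, so `n3` and `n2` both land on `c2`.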

Citations: 0