Assessing the Complexity of Cloud Pricing Policies: A Comparative Market Analysis
Pub Date: 2024-09-12 | DOI: 10.1007/s10723-024-09780-4
Vasiliki Liagkou, George Fragiadakis, Evangelia Filiopoulou, Christos Michalakelis, Anargyros Tsadimas, Mara Nikolaidou
Cloud computing has gained popularity at a breakneck pace over the last few years. It has revolutionized the way businesses operate by providing a flexible and scalable infrastructure for their computing needs. Cloud providers offer a range of services with a variety of pricing schemes. These schemes are based on functional factors such as CPU, RAM, and storage, combined with different payment options (e.g., pay-per-use and subscription-based) and non-functional aspects such as scalability and availability. While cloud pricing can be complicated, it is critical for businesses to thoroughly assess and compare pricing policies, along with technical requirements, in order to design a sound investment strategy. This paper evaluates current pricing strategies for IaaS, CaaS, and PaaS cloud services, focusing on the three leading cloud providers: Amazon, Microsoft, and Google. To compare pricing policies across services and providers, a hedonic price index is constructed for each service type based on data collected in 2022, making a comparative analysis between them feasible. The results reveal that providers follow the same pricing pattern for IaaS and CaaS, with CPU being the main driver of cloud pricing schemes, whereas PaaS pricing fluctuates among cloud providers.
{"title":"Assessing the Complexity of Cloud Pricing Policies: A Comparative Market Analysis","authors":"Vasiliki Liagkou, George Fragiadakis, Evangelia Filiopoulou, Christos Michalakelis, Anargyros Tsadimas, Mara Nikolaidou","doi":"10.1007/s10723-024-09780-4","DOIUrl":"https://doi.org/10.1007/s10723-024-09780-4","url":null,"abstract":"<p>Cloud computing has gained popularity at a breakneck pace over the last few years. It has revolutionized the way businesses operate by providing a flexible and scalable infrastructure for their computing needs. Cloud providers offer a range of services with a variety of pricing schemes. Cloud pricing schemes are based on functional factors like CPU, RAM, and storage, combined with different payment options, such as pay-per-use, subscription-based, and non-functional aspects, such as scalability and availability. While cloud pricing can be complicated, it is critical for businesses to thoroughly assess and compare pricing policies along with technical requirements to ensure they design an investment strategy. This paper evaluates current pricing strategies for IaaS, CaaS, and PaaS cloud services and also focuses on the three leading cloud providers, Amazon, Microsoft, and Google. To compare pricing policies between different services and providers, a hedonic price index is constructed for each service type based on data collected in 2022. Using the hedonic price index, a comparative analysis between them becomes feasible. The results revealed that providers follow the very same pricing pattern for IaaS and CaaS, with CPU being the main driver of cloud pricing schemes, whereas PaaS pricing fluctuates among cloud providers.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Quasi-Oppositional Learning-based Fox Optimizer for QoS-aware Web Service Composition in Mobile Edge Computing
Pub Date: 2024-08-31 | DOI: 10.1007/s10723-024-09779-x
Ramin Habibzadeh Sharif, Mohammad Masdari, Ali Ghaffari, Farhad Soleimanian Gharehchopogh
Web service-based edge computing networks are now ubiquitous, and their user bases are growing dramatically. Network users request various services with specific Quality-of-Service (QoS) requirements. QoS-aware Web Service Composition (WSC) methods assign available services to users' tasks and significantly affect user satisfaction. Various methods have been proposed to solve the QoS-aware WSC problem; however, the field remains an active research area because the dimensions of these networks, the number of their users, and the variety of provided services keep growing remarkably. Consequently, this study presents an enhanced Fox Optimizer (FOX)-based framework named EQOLFOX for solving QoS-aware web service composition problems in edge computing environments. Quasi-oppositional learning is utilized in EQOLFOX to diminish the zero-orientation bias of the FOX algorithm, and a reinitialization strategy is included to enhance EQOLFOX's exploration capability. Besides, a new phase with two new movement strategies is introduced to improve searching ability. A multi-best strategy is also employed to escape local optima and guide the population more effectively. Finally, a greedy selection approach is used to augment the convergence rate and exploitation capability. EQOLFOX is applied to ten real-life and artificial web-service-based edge computing environments, each with four different task counts, to evaluate its proficiency. The obtained results are compared with the DO, FOX, JS, MVO, RSA, SCA, SMA, and TSA algorithms, both numerically and visually. The experimental results indicate the effectiveness of the contributions and the competitiveness of EQOLFOX.
{"title":"A Quasi-Oppositional Learning-based Fox Optimizer for QoS-aware Web Service Composition in Mobile Edge Computing","authors":"Ramin Habibzadeh Sharif, Mohammad Masdari, Ali Ghaffari, Farhad Soleimanian Gharehchopogh","doi":"10.1007/s10723-024-09779-x","DOIUrl":"https://doi.org/10.1007/s10723-024-09779-x","url":null,"abstract":"<p>Currently, web service-based edge computing networks are across-the-board, and their users are increasing dramatically. The network users request various services with specific Quality-of-Service (QoS) values. The QoS-aware Web Service Composition (WSC) methods assign available services to users’ tasks and significantly affect their satisfaction. Various methods have been provided to solve the QoS-aware WSC problem; However, this field is still one of the popular research fields since the dimensions of these networks, the number of their users, and the variety of provided services are growing outstandingly. Consequently, this study presents an enhanced Fox Optimizer (FOX)-based framework named EQOLFOX to solve QoS-aware web service composition problems in edge computing environments. In this regard, the Quasi-Oppositional Learning is utilized in the EQOLFOX to diminish the zero-orientation nature of the FOX algorithm. Likewise, a reinitialization strategy is included to enhance EQOLFOX's exploration capability. Besides, a new phase with two new movement strategies is introduced to improve searching abilities. Also, a multi-best strategy is recruited to depart local optimums and lead the population more optimally. Eventually, the greedy selection approach is employed to augment the convergence rate and exploitation capability. The EQOLFOX is applied to ten real-life and artificial web-service-based edge computing environments, each with four different task counts to evaluate its proficiency. The obtained results are compared with the DO, FOX, JS, MVO, RSA, SCA, SMA, and TSA algorithms numerically and visually. The experimental results indicated the contributions' effectiveness and the EQOLFOX's competency.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WIDESim: A Toolkit for Simulating Resource Management Techniques of Scientific Workflows in Distributed Environments with Graph Topology
Pub Date: 2024-08-13 | DOI: 10.1007/s10723-024-09778-y
Mohammad Amin Rayej, Hajar Siar, Ahmadreza Hamzei, Mohammad Sadegh Majidi Yazdi, Parsa Mohammadian, Mohammad Izadi
Modeling IoT applications in distributed computing systems as workflows enables automating their execution. Different types of workflow-based applications exist in the literature. Executing IoT applications with device-to-device (D2D) communications in distributed computing systems, especially in edge paradigms, requires direct communication between devices in a network with a graph topology. This paper introduces WIDESim, a toolkit for simulating resource management of scientific workflows with different structures in distributed environments with graph topology. The proposed simulator enables dynamic resource management and scheduling. We have validated the performance of WIDESim against standard simulators and also evaluated it in real-world distributed computing scenarios. The results indicate that WIDESim's performance is close to that of existing standard simulators, in addition to its improvements. The findings also demonstrate the satisfactory performance of the extended features incorporated within WIDESim.
{"title":"WIDESim: A Toolkit for Simulating Resource Management Techniques Of Scientific Workflows in Distributed Environments with Graph Topology","authors":"Mohammad Amin Rayej, Hajar Siar, Ahmadreza Hamzei, Mohammad Sadegh Majidi Yazdi, Parsa Mohammadian, Mohammad Izadi","doi":"10.1007/s10723-024-09778-y","DOIUrl":"https://doi.org/10.1007/s10723-024-09778-y","url":null,"abstract":"<p>Modeling IoT applications in distributed computing systems as workflows enables automating their procedure. There are different types of workflow-based applications in the literature. Executing IoT applications using device-to-device (D2D) communications in distributed computing systems especially edge paradigms requiring direct communication between devices in a network with a graph topology. This paper introduces a toolkit for simulating resource management of scientific workflows with different structures in distributed environments with graph topology called WIDESim. The proposed simulator enables dynamic resource management and scheduling. We have validated the performance of WIDESim in comparison to standard simulators, also evaluated its performance in real-world scenarios of distributed computing. The results indicate that WIDESim’s performance is close to existing standard simulators besides its improvements. Additionally, the findings demonstrate the satisfactory performance of the extended features incorporated within WIDESim.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CMK: Enhancing Resource Usage Monitoring across Diverse Bioinformatics Workflow Management Systems
Pub Date: 2024-08-01 | DOI: 10.1007/s10723-024-09777-z
Robert Nica, Stefan Götz, Germán Moltó
The increasing use of multiple Workflow Management Systems (WMS) employing various workflow languages and shared workflow repositories enhances the open-source bioinformatics ecosystem. Efficient resource utilization in these systems is crucial for keeping costs low and improving processing times, especially for large-scale bioinformatics workflows running in cloud environments. Recognizing this, our study introduces Cloud Monitoring Kit (CMK), a novel reference architecture for a multi-platform monitoring system. Our solution is designed to generate uniform, aggregated metrics from containerized workflow tasks scheduled by different WMS. Central to the proposed solution is the use of task labeling methods, which enable convenient grouping and aggregation of metrics independent of the WMS employed. This approach builds upon existing technology, providing the additional benefits of modularity and the capacity to integrate seamlessly with other data processing or collection systems. We have developed and released an open-source implementation of our system, which we evaluated on Amazon Web Services (AWS) using a transcriptomics data analysis workflow executed on two scientific WMS. The findings of this study indicate that CMK provides valuable insights into resource utilization. In doing so, it paves the way for more efficient management of resources in containerized scientific workflows running in public cloud environments, and it provides a foundation for optimizing task configurations, reducing costs, and enhancing scheduling decisions. Overall, our solution addresses the immediate needs of bioinformatics workflows and offers a scalable and adaptable framework for future advancements in cloud-based scientific computing.
{"title":"CMK: Enhancing Resource Usage Monitoring across Diverse Bioinformatics Workflow Management Systems","authors":"Robert Nica, Stefan Götz, Germán Moltó","doi":"10.1007/s10723-024-09777-z","DOIUrl":"https://doi.org/10.1007/s10723-024-09777-z","url":null,"abstract":"<p>The increasing use of multiple Workflow Management Systems (WMS) employing various workflow languages and shared workflow repositories enhances the open-source bioinformatics ecosystem. Efficient resource utilization in these systems is crucial for keeping costs low and improving processing times, especially for large-scale bioinformatics workflows running in cloud environments. Recognizing this, our study introduces a novel reference architecture, Cloud Monitoring Kit (CMK), for a multi-platform monitoring system. Our solution is designed to generate uniform, aggregated metrics from containerized workflow tasks scheduled by different WMS. Central to the proposed solution is the use of task labeling methods, which enable convenient grouping and aggregating of metrics independent of the WMS employed. This approach builds upon existing technology, providing additional benefits of modularity and capacity to seamlessly integrate with other data processing or collection systems. We have developed and released an open-source implementation of our system, which we evaluated on Amazon Web Services (AWS) using a transcriptomics data analysis workflow executed on two scientific WMS. The findings of this study indicate that CMK provides valuable insights into resource utilization. In doing so, it paves the way for more efficient management of resources in containerized scientific workflows running in public cloud environments, and it provides a foundation for optimizing task configurations, reducing costs, and enhancing scheduling decisions. Overall, our solution addresses the immediate needs of bioinformatics workflows and offers a scalable and adaptable framework for future advancements in cloud-based scientific computing.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141880788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource Utilization Based on Hybrid WOA-LOA Optimization with Credit Based Resource Aware Load Balancing and Scheduling Algorithm for Cloud Computing
Pub Date: 2024-07-25 | DOI: 10.1007/s10723-024-09776-0
Abhikriti Narwal
In a cloud computing environment, tasks are divided among virtual machines (VMs) with different start times, durations, and execution periods. Distributing these loads among the VMs is therefore crucial: to maximize resource utilization and enhance system performance, load balancing must be implemented in a way that ensures balance across all VMs. In the proposed framework, a credit-based resource-aware load balancing scheduling algorithm (HO-CB-RALB-SA) is created using a hybrid Walrus Optimization Algorithm (WOA) and Lyrebird Optimization Algorithm (LOA) for cloud computing. The proposed model jointly performs load balancing and task scheduling. This article improves credit-based load-balancing ideas by integrating a resource-aware strategy with a scheduling algorithm. It maintains a balanced system load by evaluating the load and processing capacity of every VM through a resource-aware load balancing algorithm. The method operates in two main stages: scheduling according to each VM's processing power, and employing supply-and-demand criteria to determine which VM has the least load, in order to map jobs or redistribute them from overloaded to underloaded VMs. For efficient resource management and equitable task distribution among VMs, the load balancing method uses a resource-aware optimization algorithm. The credit-based scheduling algorithm then weights the tasks and applies intelligent resource mapping that considers the computational capacity and demand of each resource. The FILL and SPILL functions of the resource-aware load balancer use the hybrid optimization algorithm to facilitate this mapping. User tasks are scheduled in a queue, ordered by task length, using the FILL and SPILL scheduler algorithm, which operates with the assistance of the PEFT approach. The optimal threshold values for each VM are selected by evaluating tasks against a fitness function that minimizes makespan and cost, using the hybrid WOA and LOA. The application has been simulated with the CloudSim tool, and the QoS parameters, including Turn Around Time (TAT), resource utilization, Average Response Time (ART), Makespan Time (MST), Total Execution Time (TET), Total Processing Cost (TPC), and Total Processing Time (TPT), have been determined for 400, 800, 1200, 1600, and 2000 cloudlets. The performance parameters of the proposed HO-CB-RALB-SA and existing models are evaluated and compared. For the proposed HO-CB-RALB-SA model with 2000 cloudlets, the following values were obtained: an MST of 526.023 ms, a TPT of 12741.79 ms, a TPC of 33422.87$, a TET of 23770.45 ms, an ART of 172.32 ms, network utilization of 9593 MB, energy consumption of 28.1, throughput of 79.9 Mbps, a TAT of 5 ms, a total waiting time of 18.6 ms, and resource utilization of 17.5%. Based on s
{"title":"Resource Utilization Based on Hybrid WOA-LOA Optimization with Credit Based Resource Aware Load Balancing and Scheduling Algorithm for Cloud Computing","authors":"Abhikriti Narwal","doi":"10.1007/s10723-024-09776-0","DOIUrl":"https://doi.org/10.1007/s10723-024-09776-0","url":null,"abstract":"<p>In a cloud computing environment, tasks are divided among virtual machines (VMs) with different start times, duration and execution periods. Thus, distributing these loads among the virtual machines is crucial, in order to maximize resource utilization and enhance system performance, load balancing must be implemented that ensures balance across all virtual machines (VMs). In the proposed framework, a credit-based resource-aware load balancing scheduling algorithm (HO-CB-RALB-SA) was created using a hybrid Walrus Optimization Algorithm (WOA) and Lyrebird Optimization Algorithm (LOA) for cloud computing. The proposed model is developed by jointly performing both load balancing and task scheduling. This article improves the credit-based load-balancing ideas by integrating a resource-aware strategy and scheduling algorithm. It maintains a balanced system load by evaluating the load as well as processing capacity of every VM through the use of a resource-aware load balancing algorithm. This method functions primarily on two stages which include scheduling dependent on the VM’s processing power. By employing supply and demand criteria to determine which VM has the least amount of load to map jobs or redistribute jobs from overloaded to underloaded VM. For efficient resource management and equitable task distribution among VM, the load balancing method makes use of a resource-aware optimization algorithm. After that, the credit-based scheduling algorithm weights the tasks and applies intelligent resource mapping that considers the computational capacity and demand of each resource. The FILL and SPILL functions in Resource Aware and Load utilize the hybrid Optimization Algorithm to facilitate this mapping. The user tasks are scheduled in a queued based on the length of the task using the FILL and SPILL scheduler algorithm. This algorithm functions with the assistance of the PEFT approach. The optimal threshold values for each VM are selected by evaluating the task based on the fitness function of minimising makespan and cost function using the hybrid Walrus Optimization Algorithm (WOA) and Lyrebird Optimization Algorithm (LOA).The application has been simulated and the QOS parameter, which includes Turn Around Time (TAT), resource utilization, Average Response Time (ART), Makespan Time (MST), Total Execution Time (TET), Total Processing Cost (TPC), and Total Processing Time (TPT) for the 400, 800, 1200, 1600, and 2000 cloudlets, has been determined by utilizing the cloudsim tool. The performance parameters for the proposed HO-CB-RALB-SA and the existing models are evaluated and compared. For the proposed HO-CB-RALB-SA model with 2000 cloudlets, the following parameter values are found: 526.023 ms of MST, 12741.79 ms of TPT, 33422.87$ of TPC, 23770.45 ms of TET, 172.32 ms of ART, 9593 MB of network utilization, 28.1 of energy consumption, 79.9 Mbps of throughput, 5 ms of TAT, 18.6 ms for total waiting time and 17.5% of resource utilization. 
Based on s","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
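As a rough illustration of the FILL/SPILL mechanism described above, the sketch below queues tasks by length, FILLs each onto the least-loaded VM, and SPILLs work off VMs that exceed a load threshold. The credit weighting, the PEFT ranking, and the WOA/LOA threshold search are all abstracted into a fixed threshold, and every number is illustrative.

```python
# Sketch of FILL/SPILL load balancing over VMs. MIPS ratings, task lengths
# (in million instructions), and the load threshold are assumptions.
vms = {"vm1": {"mips": 1000, "load": 0.0},
       "vm2": {"mips": 2000, "load": 0.0},
       "vm3": {"mips": 1500, "load": 0.0}}
tasks = [800, 2400, 1200, 600, 3000]    # task lengths (MI)
THRESHOLD = 3.0                         # assumed per-VM load ceiling (s)

def fill(tasks, vms):
    schedule = {v: [] for v in vms}
    for length in sorted(tasks):                       # queue by task length
        vm = min(vms, key=lambda v: vms[v]["load"])    # least-loaded VM
        vms[vm]["load"] += length / vms[vm]["mips"]
        schedule[vm].append(length)
    return schedule

def spill(schedule, vms):
    for vm in list(schedule):
        while vms[vm]["load"] > THRESHOLD and schedule[vm]:
            length = schedule[vm][-1]                  # most recent task
            target = min((v for v in vms if v != vm),
                         key=lambda v: vms[v]["load"])
            if vms[target]["load"] + length / vms[target]["mips"] > THRESHOLD:
                break                                  # nowhere to shed; stop
            schedule[vm].pop()
            vms[vm]["load"] -= length / vms[vm]["mips"]
            vms[target]["load"] += length / vms[target]["mips"]
            schedule[target].append(length)
    return schedule

print(spill(fill(tasks, vms), vms))
```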
Energy-Constrained DAG Scheduling on Edge and Cloud Servers with Overlapped Communication and Computation
Pub Date: 2024-07-02 | DOI: 10.1007/s10723-024-09775-1
Keqin Li
Mobile edge computing (MEC) has been widely applied to numerous areas and aspects of human life and modern society. Many such applications can be represented as directed acyclic graphs (DAG). Device-edge-cloud fusion provides a new kind of heterogeneous, distributed, and collaborative computing environment to support various MEC applications. DAG scheduling is a procedure employed to effectively and efficiently manage and monitor the execution of tasks that have precedence constraints on each other. In this paper, we investigate the NP-hard problems of DAG scheduling and energy-constrained DAG scheduling on mobile devices, edge servers, and cloud servers by designing and evaluating new heuristic algorithms. Our contributions to DAG scheduling can be summarized as follows. First, our heuristic algorithms guarantee that all task dependencies are correctly followed by keeping track of the number of remaining predecessors that are still not completed. Second, our heuristic algorithms ensure that all wireless transmissions between a mobile device and edge/cloud servers are performed one after another. Third, our heuristic algorithms allow an edge/cloud server to start the execution of a task as soon as the transmission of the task is finished. Fourth, we derive a lower bound for the optimal makespan such that the solutions of our heuristic algorithms can be compared with optimal solutions. Our contributions to energy-constrained DAG scheduling can be summarized as follows. First, our heuristic algorithms ensure that the overall computation energy consumption and communication energy consumption does not exceed the given energy constraint. Second, our algorithms adopt an iterative and progressive procedure to determine appropriate computation speed and wireless communication speeds while generating a DAG schedule and satisfying the energy constraint. Third, we derive a lower bound for the optimal makespan and evaluate the performance of our heuristic algorithms in such a way that their heuristic solutions are compared with optimal solutions. To the author’s knowledge, this is the first paper that considers DAG scheduling and energy-constrained DAG scheduling on edge and cloud servers with sequential wireless communications and overlapped communication and computation to minimize makespan.
{"title":"Energy-Constrained DAG Scheduling on Edge and Cloud Servers with Overlapped Communication and Computation","authors":"Keqin Li","doi":"10.1007/s10723-024-09775-1","DOIUrl":"https://doi.org/10.1007/s10723-024-09775-1","url":null,"abstract":"<p>Mobile edge computing (MEC) has been widely applied to numerous areas and aspects of human life and modern society. Many such applications can be represented as directed acyclic graphs (DAG). Device-edge-cloud fusion provides a new kind of heterogeneous, distributed, and collaborative computing environment to support various MEC applications. DAG scheduling is a procedure employed to effectively and efficiently manage and monitor the execution of tasks that have precedence constraints on each other. In this paper, we investigate the NP-hard problems of DAG scheduling and energy-constrained DAG scheduling on mobile devices, edge servers, and cloud servers by designing and evaluating new heuristic algorithms. Our contributions to DAG scheduling can be summarized as follows. First, our heuristic algorithms guarantee that all task dependencies are correctly followed by keeping track of the number of remaining predecessors that are still not completed. Second, our heuristic algorithms ensure that all wireless transmissions between a mobile device and edge/cloud servers are performed one after another. Third, our heuristic algorithms allow an edge/cloud server to start the execution of a task as soon as the transmission of the task is finished. Fourth, we derive a lower bound for the optimal makespan such that the solutions of our heuristic algorithms can be compared with optimal solutions. Our contributions to energy-constrained DAG scheduling can be summarized as follows. First, our heuristic algorithms ensure that the overall computation energy consumption and communication energy consumption does not exceed the given energy constraint. Second, our algorithms adopt an iterative and progressive procedure to determine appropriate computation speed and wireless communication speeds while generating a DAG schedule and satisfying the energy constraint. Third, we derive a lower bound for the optimal makespan and evaluate the performance of our heuristic algorithms in such a way that their heuristic solutions are compared with optimal solutions. To the author’s knowledge, this is the first paper that considers DAG scheduling and energy-constrained DAG scheduling on edge and cloud servers with sequential wireless communications and overlapped communication and computation to minimize makespan.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141513872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource Allocation Using Deep Deterministic Policy Gradient-Based Federated Learning for Multi-Access Edge Computing
Pub Date: 2024-06-27 | DOI: 10.1007/s10723-024-09774-2
Zheyu Zhou, Qi Wang, Jizhou Li, Ziyuan Li
The study focuses on utilizing the computational resources present in vehicles to enhance the performance of multi-access edge computing (MEC) systems. While vehicles are typically equipped with computational services for vehicle-centric Internet of Vehicles (IoV) applications, their resources can also be leveraged to reduce the workload on edge servers and improve task processing speed in MEC scenarios. Previous research efforts have overlooked the potential resources of passing vehicles, which can be a valuable addition to MEC systems alongside parked cars. This study introduces an assisted MEC scenario in which a base station (BS) with an edge server serves various devices, parked cars, and vehicular traffic. A cooperative approach using Deep Deterministic Policy Gradient (DDPG)-based Federated Learning is proposed to optimize resource allocation and job offloading. This method enables the transfer of operations from devices to the BS, or from the BS to vehicles, based on specific requirements. The proposed system also considers how long a vehicle can provide job offloading services within the range of the BS before leaving. The objective of the DDPG-FL method is to minimize the overall priority-weighted task computation time. Through simulation results and a comparison with three other schemes, the study demonstrates the superiority of the proposed method in seven different scenarios. The findings highlight the potential of incorporating vehicular resources into MEC systems, showcasing improved task processing efficiency and overall system performance.
{"title":"Resource Allocation Using Deep Deterministic Policy Gradient-Based Federated Learning for Multi-Access Edge Computing","authors":"Zheyu Zhou, Qi Wang, Jizhou Li, Ziyuan Li","doi":"10.1007/s10723-024-09774-2","DOIUrl":"https://doi.org/10.1007/s10723-024-09774-2","url":null,"abstract":"<p>The study focuses on utilizing the computational resources present in vehicles to enhance the performance of multi-access edge computing (MEC) systems. While vehicles are typically equipped with computational services for vehicle-centric Internet of Vehicles (IoV) applications, their resources can also be leveraged to reduce the workload on edge servers and improve task processing speed in MEC scenarios. Previous research efforts have overlooked the potential resource utilization of passing vehicles, which can be a valuable addition to MEC systems alongside parked cars. This study introduces an assisted MEC scenario where a base station (BS) with an edge server serves various devices, parked cars, and vehicular traffic. A cooperative approach using the Deep Deterministic Policy Gradient (DDPG) based Federated Learning method is proposed to optimize resource allocation and job offloading. This method enables the transfer of device operations from devices to the BS or from the BS to vehicles based on specific requirements. The proposed system also considers the duration for which a vehicle can provide job offloading services within the range of the BS before leaving. The objective of the DDPG-FL method is to minimize the overall priority-weighted task computation time. Through simulation results and a comparison with three other schemes, the study demonstrates the superiority of their proposed method in seven different scenarios. The findings highlight the potential of incorporating vehicular resources in MEC systems, showcasing improved task processing efficiency and overall system performance.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing Resource Consumption and Reducing Power Usage in Data Centers, A Novel Mathematical VM Replacement Model and Efficient Algorithm
Pub Date: 2024-06-20 | DOI: 10.1007/s10723-024-09772-4
Reza Rabieyan, Ramin Yahyapour, Patrick Jahnke
This study addresses the issue of power consumption in virtualized cloud data centers by proposing a virtual machine (VM) replacement model and a corresponding algorithm. The model incorporates multi-objective functions, aiming to optimize VM selection based on weights and to minimize resource utilization disparities across hosts. Constraints ensure that CPU utilization remains close to the average CPU usage while mitigating overutilization of memory and network bandwidth. The proposed algorithm offers a fast and efficient solution with minimal VM replacements. Experimental simulation results demonstrate significant reductions in power consumption compared with a benchmark model. The proposed model and algorithm have been implemented and operated within a real-world cloud infrastructure, emphasizing their practicality.
{"title":"Optimizing Resource Consumption and Reducing Power Usage in Data Centers, A Novel Mathematical VM Replacement Model and Efficient Algorithm","authors":"Reza Rabieyan, Ramin Yahyapour, Patrick Jahnke","doi":"10.1007/s10723-024-09772-4","DOIUrl":"https://doi.org/10.1007/s10723-024-09772-4","url":null,"abstract":"<p>This study addresses the issue of power consumption in virtualized cloud data centers by proposing a virtual machine (VM) replacement model and a corresponding algorithm. The model incorporates multi-objective functions, aiming to optimize VM selection based on weights and minimize resource utilization disparities across hosts. Constraints are incorporated to ensure that CPU utilization remains close to the average CPU usage while mitigating overutilization in memory and network bandwidth usage. The proposed algorithm offers a fast and efficient solution with minimal VM replacements. The experimental simulation results demonstrate significant reductions in power consumption compared with a benchmark model. The proposed model and algorithm have been implemented and operated within a real-world cloud infrastructure, emphasizing their practicality.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EQGSA-DPW: A Quantum-GSA Algorithm-Based Data Placement for Scientific Workflow in Cloud Computing Environment
Pub Date: 2024-06-18 | DOI: 10.1007/s10723-024-09771-5
Zaki Brahmi, Rihab Derouiche
The processing of scientific workflows (SW) in geo-distributed cloud computing makes the placement of massive data between tasks critically important. However, data movement across storage services is a main concern in geo-distributed data centers, raising issues related to the cost and energy consumption of both storage services and network infrastructure. Aiming to optimize data placement for SW, this paper proposes EQGSA-DPW, a novel algorithm leveraging quantum computing and swarm intelligence to reduce costs and energy consumption when a SW is processed in a multi-cloud setting. EQGSA-DPW considers multiple objectives (e.g., transmission bandwidth and the cost and energy consumption of both services and communication) and improves the GSA algorithm by using the log-sigmoid transfer function as the gravitational constant G and by updating agent positions with a quantum rotation angle amplitude for greater diversification. Moreover, an initial guess is proposed to assist EQGSA-DPW in finding the optimum. The performance of the EQGSA-DPW algorithm is evaluated via extensive experiments, which show that our data placement method achieves significantly better performance in terms of cost, energy, and data transfer than competing algorithms. For instance, in terms of energy consumption, EQGSA-DPW achieves average reductions of up to 25%, 14%, and 40% over the GSA, PSO, and ACO-DPDGW algorithms, respectively. As for storage service cost, EQGSA-DPW attains the lowest values.
{"title":"EQGSA-DPW: A Quantum-GSA Algorithm-Based Data Placement for Scientific Workflow in Cloud Computing Environment","authors":"Zaki Brahmi, Rihab Derouiche","doi":"10.1007/s10723-024-09771-5","DOIUrl":"https://doi.org/10.1007/s10723-024-09771-5","url":null,"abstract":"<p>The processing of scientific workflow (SW) in geo-distributed cloud computing holds significant importance in the placement of massive data between various tasks. However, data movement across storage services is a main concern in the geo-distributed data centers, which entails issues related to the cost and energy consumption of both storage services and network infrastructure. Aiming to optimize data placement for SW, this paper proposes EQGSA-DPW a novel algorithm leveraging quantum computing and swarm intelligence optimization to intelligently reduce costs and energy consumption when a SW is processed in multi-cloud. EQGSA-DPW considers multiple objectives (e.g., transmission bandwidth, cost and energy consumption of both service and communication) and improves the GSA algorithm by using the log-sigmoid transfer function as a gravitational constant <i>G</i> and updating agent position by quantum rotation angle amplitude for more diversification. Moreover, to assist EQGSA-DPW in finding the optima, an initial guess is proposed. The performance of our EQGSA-DPW algorithm is evaluated via extensive experiments, which show that our data placement method achieves significantly better performance in terms of cost, energy, and data transfer than competing algorithms. For instance, in terms of energy consumption, EQGSA-DPW can on average achieve up to <span>(25%)</span>, <span>(14%)</span>, and <span>(40%)</span> reduction over that of GSA, PSO, and ACO-DPDGW algorithms, respectively. As for the storage services cost, EQGSA-DPW values are the lowest.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Enhanced Energy Aware Resource Optimization for Edge Devices Through Multi-cluster Communication Systems
Pub Date: 2024-06-06 | DOI: 10.1007/s10723-024-09773-3
Saihong Li, Yingying Ma, Yusha Zhang, Yinghui Xie
In the realm of the Internet of Things (IoT), the significance of edge devices within multi-cluster communication systems is on the rise. As the number of clusters and of devices associated with each cluster grows, challenges related to resource optimization emerge. To address these concerns and enhance resource utilization, it is imperative to devise efficient strategies for allocating resources to specific clusters. These strategies encompass load-balancing algorithms, dynamic scheduling, and virtualization techniques that generate logical instances of resources within the clusters. Moreover, data management techniques are essential to facilitate effective data sharing among clusters. Together, these strategies minimize resource waste, enabling streamlined management of networking and data resources in a multi-cluster communication system. This paper introduces an energy-efficient resource allocation technique tailored for edge devices in such systems. The proposed strategy leverages a higher-level meta-cluster heuristic to construct an optimization model, aiming to enhance the resource utilization of individual edge nodes. Emphasizing energy consumption and resource optimization while meeting latency requirements, the model employs a graph-based node selection method to assign high-load nodes to optimal clusters. To ensure fairness, resource allocation draws on resource descriptions and Quality of Service (QoS) metrics to tailor resource distribution. Additionally, the proposed strategy dynamically updates its parameter settings to adapt to various scenarios. The simulations confirm the advantages of the proposed strategy across several aspects.
{"title":"Towards Enhanced Energy Aware Resource Optimization for Edge Devices Through Multi-cluster Communication Systems","authors":"Saihong Li, Yingying Ma, Yusha Zhang, Yinghui Xie","doi":"10.1007/s10723-024-09773-3","DOIUrl":"https://doi.org/10.1007/s10723-024-09773-3","url":null,"abstract":"<p>In the realm of the Internet of Things (IoT), the significance of edge devices within multi-cluster communication systems is on the rise. As the quantity of clusters and devices associated with each cluster grows, challenges related to resource optimization emerge. To address these concerns and enhance resource utilization, it is imperative to devise efficient strategies for resource allocation to specific clusters. These strategies encompass the implementation of load-balancing algorithms, dynamic scheduling, and virtualization techniques that generate logical instances of resources within the clusters. Moreover, the implementation of data management techniques is essential to facilitate effective data sharing among clusters. These strategies collectively minimize resource waste, enabling the streamlined management of networking and data resources in a multi-cluster communication system. This paper introduces an energy-efficient resource allocation technique tailored for edge devices in such systems. The proposed strategy leverages a higher-level meta-cluster heuristic to construct an optimization model, aiming to enhance the resource utilization of individual edge nodes. Emphasizing energy consumption and resource optimization while meeting latency requirements, the model employs a graph-based node selection method to assign high-load nodes to optimal clusters. To ensure fairness, resource allocation collaborates with resource descriptions and Quality of Service (QoS) metrics to tailor resource distribution. Additionally, the proposed strategy dynamically updates its parameter settings to adapt to various scenarios. The simulations confirm the superiority of the proposed strategy in different aspects.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}