
Latest Publications in the Journal of Grid Computing

Multi-Agent Systems for Collaborative Inference Based on Deep Policy Q-Inference Network
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-29 | DOI: 10.1007/s10723-024-09750-w
Shangshang Wang, Yuqin Jing, Kezhu Wang, Xue Wang

This study tackles the problem of increasing efficiency and scalability in deep neural network (DNN) systems by employing collaborative inference, an approach that is gaining popularity because of its ability to maximize computational resources. It involves splitting a pre-trained DNN model into two parts and running them separately on user equipment (UE) and edge servers. This approach is advantageous because it results in faster and more energy-efficient inference, as computation can be offloaded to edge servers rather than relying solely on UEs. However, a significant challenge of collaborative inference is the dynamic coupling of DNN layers, which makes it difficult to separate and run the layers independently. To address this challenge, we propose a novel approach to optimize collaborative inference in a multi-agent scenario where a single edge server coordinates the inference of multiple UEs. Our method uses an autoencoder-based technique to reduce the size of intermediate features and constructs the offloading task around the overhead of the Deep Policy Inference Q-Network (DPIQN). To optimize collaborative inference, we employ the Deep Recurrent Policy Inference Q-Network (DRPIQN) technique, which allows for a hybrid action space. Test results demonstrate that this approach can reduce inference latency by up to 56% and energy usage by up to 72% across various networks. Overall, the proposed approach provides an efficient and effective method for implementing collaborative inference in multi-agent scenarios, with significant implications for the development of DNN systems.
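The autoencoder-based compression of intermediate features lends itself to a short illustration. The following is a minimal PyTorch sketch of the general idea, with hypothetical layer shapes and channel counts; it is not the authors' implementation:

```python
# Sketch: shrink the intermediate feature map sent from the UE-side
# partition of a split DNN to the edge-side partition.
import torch
import torch.nn as nn

class FeatureCompressor(nn.Module):
    """Encoder runs on the UE; decoder runs on the edge server."""
    def __init__(self, channels=64, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1),  # channel reduction
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(bottleneck, channels, kernel_size=1),  # restore channels
            nn.ReLU(),
        )

    def forward(self, feat):
        z = self.encoder(feat)   # small tensor -> cheaper uplink
        return self.decoder(z)   # reconstructed on the edge side

# UE side: run the head of the DNN, compress, transmit z.
# Edge side: decompress, then run the tail of the DNN.
head_out = torch.randn(1, 64, 28, 28)   # hypothetical intermediate feature
ae = FeatureCompressor()
recovered = ae(head_out)
assert recovered.shape == head_out.shape
```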

Citations: 0
Dueling Double Deep Q Network Strategy in MEC for Smart Internet of Vehicles Edge Computing Networks
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-29 | DOI: 10.1007/s10723-024-09752-8
Haotian Pang, Zhanwei Wang

Advances in communication systems allow nearby devices to serve as network resources when they are idle. One such technology is mobile edge computing (MEC), which provides extensive communication services at the network edge. In this research, we explore a multiuser smart Internet of Vehicles (IoV) network with MEC assistance, where an edge server can help complete computation-intensive jobs from vehicular users. Many existing works on MEC networks concentrate primarily on minimising system latency to ensure quality of service (QoS) for users by designing offloading strategies, but they fail to account for the server's retail prices and, consequently, the users' budgetary constraints. To solve this problem, we present a Dueling Double Deep Q Network (D3QN) with an Optimal Stopping Theory (OST) strategy that helps solve the multi-task joint edge problem and minimises offloading problems in MEC-based IoV networks. The multi-task offloading model aims to increase the likelihood of offloading to the ideal servers by utilising the OST characteristics. Finally, simulations show that the proposed methods perform better than traditional ones. The findings demonstrate that the suggested offloading techniques can be successfully applied in mobile nodes and significantly cut the anticipated time required to process workloads.
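The dueling and double-Q components named in the title are standard constructions and can be sketched briefly. Below is a minimal PyTorch illustration of a dueling Q-network head plus a double-DQN target computation, with assumed state and action dimensions; the OST stopping rule and the IoV environment are omitted:

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, r, s_next, gamma=0.99):
    """Double DQN: the online net picks the argmax action,
    the target net evaluates it. r has shape (batch,)."""
    with torch.no_grad():
        best = online(s_next).argmax(dim=1, keepdim=True)
        return r + gamma * target(s_next).gather(1, best).squeeze(1)
```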

Citations: 0
Work Scheduling in Cloud Network Based on Deep Q-LSTM Models for Efficient Resource Utilization
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-28 | DOI: 10.1007/s10723-024-09746-6
Yanli Xing

Edge computing has emerged as an innovative paradigm, bringing cloud service resources closer to mobile consumers at the network's edge. This proximity enables efficient processing of computationally demanding and time-sensitive tasks. However, the dynamic nature of the edge network, characterized by a high density of devices, diverse mobile usage patterns, a wide range of applications, and sporadic traffic, often leads to uneven resource distribution. This imbalance hampers system efficiency and contributes to task failures. To overcome these challenges, we propose a novel approach known as the DRL-LSTM approach, which combines Deep Reinforcement Learning (DRL) with a Long Short-Term Memory (LSTM) architecture. The primary objective of the DRL-LSTM approach is to optimize workload planning in edge computing environments. Leveraging the capabilities of DRL, this approach effectively handles complex and multidimensional workload-planning problems. By incorporating an LSTM as a recurrent neural network, it captures and models temporal dependencies in sequential data, enabling efficient workload management, reduced service time, and enhanced task-completion rates. Additionally, the DRL-LSTM approach integrates Deep Q-Network (DQN) algorithms to address the complexity and high dimensionality of workload-scheduling problems. Through simulations, we demonstrate that the DRL-LSTM approach outperforms alternative approaches in terms of service time, virtual machine (VM) utilization, and the rate of failed tasks. The integration of DRL and LSTM enables the approach to effectively tackle the challenges associated with workload planning in edge computing, leading to improved system performance. Combining the power of Deep Reinforcement Learning, the Long Short-Term Memory architecture, and Deep Q-Network algorithms facilitates efficient resource allocation, reduces service time, and increases task-completion rates, holding significant potential for enhancing the overall performance and effectiveness of edge computing systems.
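A recurrent Q-network of the kind the DRL-LSTM approach describes, where an LSTM summarizes the recent workload history before Q-values are emitted, can be sketched as follows in PyTorch. The observation dimension, window length, and action set are placeholders, not values from the paper:

```python
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    """LSTM encodes the recent workload sequence; a linear head emits Q-values."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, hc=None):
        out, hc = self.lstm(obs_seq, hc)    # out: (batch, T, hidden)
        return self.q_head(out[:, -1]), hc  # Q-values from the last step

q = RecurrentQNet(obs_dim=10, n_actions=4)
window = torch.randn(1, 8, 10)   # 8 most recent workload observations
q_values, state = q(window)
action = q_values.argmax(dim=1)  # pick the VM/placement with the best Q
```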

Citations: 0
Dynamic Multi-Resource Fair Allocation with Elastic Demands
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-27 | DOI: 10.1007/s10723-024-09754-6
Hao Guo, Weidong Li

In this paper, we study dynamic multi-resource maximin share fair allocation based on the elastic demands of users in a cloud computing system. In this problem, users do not stay in the computing system at all times, and resources are assigned to users only while they remain in the system. To further improve resource utilization, the model in this paper allows users to dynamically select the method of processing tasks based on the resources allocated in each time slot. For this problem, we propose a mechanism called maximin share fairness with elastic demands (MMS-ED) for cloud computing systems. We prove theoretically that the allocation returned by the mechanism is a Lorenz-dominating allocation, that it satisfies cumulative maximin share fairness, and that the mechanism satisfies Pareto efficiency, proportionality, and strategy-proofness. Within a specific setting, MMS-ED performs better and also satisfies another desirable property, weighted envy-freeness. In addition, we designed an algorithm to realize this mechanism, conducted simulation experiments with Alibaba cluster traces, and analyzed the impact from the perspectives of elastic demand and cumulative fairness. The experimental results show that the MMS-ED mechanism performs better than three other similar mechanisms in terms of resource utilization and user utility; moreover, the introduction of elastic demand and cumulative fairness can effectively improve resource utilization.
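For intuition, single-resource maximin (progressive-filling) allocation, the building block beneath maximin-share fairness, can be sketched in a few lines of Python. This is an illustrative simplification; the paper's MMS-ED mechanism additionally handles multiple resources, elastic demands, and time slots:

```python
def maximin_fair_share(capacity, demands):
    """Progressive filling: repeatedly give every unsatisfied user an equal
    share of what's left; users whose demand is met drop out."""
    alloc = {u: 0.0 for u in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for u in list(active):
            grant = min(share, demands[u] - alloc[u])
            alloc[u] += grant
            remaining -= grant
            if alloc[u] >= demands[u] - 1e-12:
                active.discard(u)
    return alloc

print(maximin_fair_share(10, {"a": 2, "b": 5, "c": 9}))
# a's small demand is met in full; b and c split the rest evenly:
# {'a': 2.0, 'b': 4.0, 'c': 4.0}
```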

Citations: 0
Joint Task Offloading Based on Distributed Deep Reinforcement Learning-Based Genetic Optimization Algorithm for Internet of Vehicles
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-26 | DOI: 10.1007/s10723-024-09741-x
Hulin Jin, Yong-Guk Kim, Zhiran Jin, Chunyang Fan, Yonglong Xu

The growing number of individual vehicles and intelligent transportation systems has accelerated the development of Internet of Vehicles (IoV) technologies. The Internet of Vehicles refers to a highly interactive network containing data on the locations, speeds, routes, and other aspects of vehicles. Task offloading was introduced to address the issue that current task scheduling models and tactics are largely simplistic and do not consider a reasonable distribution of tasks, which results in a poor offloading completion rate. This work tackles the joint task offloading problem with a Distributed Deep Reinforcement Learning (DDRL)-based Genetic Optimization Algorithm (GOA). A system utility optimization model is first formulated by separating the interaction and computation models. DDRL-GOA resolves the problem to produce the best task-offloading method. The research increases job completion rates by modifying the complexity design and providing universal best-case assurances using DDRL-GOA. Finally, empirical research is performed to validate the proposed technique in scenario development. We also formulate joint task offloading, load distribution, and resource allocation as integer problems to lower system costs. In addition to high convergence efficiency, the experimental results show that the proposed approach has a substantially lower system cost compared with current methods.
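The genetic-optimization side of DDRL-GOA can be illustrated with a bare-bones GA over binary offloading decisions. The costs below are hypothetical scalars standing in for the paper's latency/energy model, and the DRL component is omitted:

```python
import random

def fitness(chromosome, local_cost, edge_cost):
    """Lower total cost is better; a gene of 1 means 'offload task i'."""
    return -sum(edge_cost[i] if g else local_cost[i]
                for i, g in enumerate(chromosome))

def genetic_offloading(local_cost, edge_cost, pop=30, gens=50, p_mut=0.1):
    n = len(local_cost)
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, local_cost, edge_cost),
                        reverse=True)
        survivors = population[: pop // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=lambda c: fitness(c, local_cost, edge_cost))

best = genetic_offloading([5, 1, 8, 2], [2, 3, 3, 4])
print(best)  # tends toward [1, 0, 1, 0]: offload tasks 0 and 2, keep 1 and 3 local
```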

Citations: 0
Decentralized AI-Based Task Distribution on Blockchain for Cloud Industrial Internet of Things
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-24 | DOI: 10.1007/s10723-024-09751-9
Amir Javadpour, Arun Kumar Sangaiah, Weizhe Zhang, Ankit Vidyarthi, HamidReza Ahmadi

This study presents an environmentally friendly task-distribution mechanism designed explicitly for blockchain Proof of Authority (POA) consensus. This approach facilitates the selection of virtual machines for tasks such as data processing, transaction verification, and adding new blocks to the blockchain. Given the current lack of effective methods for integrating the POA blockchain into the Cloud Industrial Internet of Things (CIIoT), owing to their inefficiency and low throughput, we propose a novel algorithm that employs the Dynamic Voltage and Frequency Scaling (DVFS) technique, replacing the periodic transaction authentication process among validator candidates. Managing computer power consumption is a critical concern, especially within the Internet of Things ecosystem, where device power is constrained and transaction scalability is crucial. Virtual machines must validate transactions (tasks) within specific time frames and deadlines. The DVFS technique efficiently reduces power consumption by intelligently scheduling and allocating tasks to virtual machines. Furthermore, we leverage artificial intelligence and neural networks to match tasks with suitable virtual machines. The simulation results demonstrate that our proposed approach harnesses migration and DVFS strategies to optimize virtual machine utilization, resulting in decreased energy and power consumption compared with non-DVFS methods. This achievement marks a significant stride towards seamlessly integrating blockchain and IoT and establishing an ecologically sustainable network. Our approach offers additional benefits, including decentralization, enhanced data quality, and heightened security. We analyze simulation runtime and energy consumption in a comprehensive evaluation against existing techniques such as WPEG, IRMBBC, and BEMEC. The findings underscore the efficiency of our technique (LBDVFSb) across both criteria.
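The core DVFS decision, choosing the slowest processor frequency that still meets a validation deadline (dynamic energy grows superlinearly with frequency, so slower is cheaper), can be sketched as follows. The cycle count, deadline, and frequency levels are made-up numbers:

```python
def pick_dvfs_frequency(cycles, deadline_s, freq_levels_hz):
    """Choose the lowest frequency that still meets the deadline.
    Dynamic energy scales roughly with f^2 per cycle, so slower is cheaper."""
    for f in sorted(freq_levels_hz):
        if cycles / f <= deadline_s:
            return f
    return None  # infeasible: no level meets the deadline

levels = [0.5e9, 1.0e9, 1.5e9, 2.0e9]
f = pick_dvfs_frequency(cycles=8e8, deadline_s=1.0, freq_levels_hz=levels)
print(f)  # 1.0 GHz: 0.8e9 cycles / 1.0e9 Hz = 0.8 s <= 1.0 s deadline
```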

Citations: 0
A Probabilistic Deadline-aware Application Offloading in a Multi-Queueing Fog System: A Max Entropy Framework
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-22 | DOI: 10.1007/s10723-024-09753-7

Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in the healthcare, education, and automation domains, challenges persist, particularly in addressing the impact of multi-hop public networks on data upload time, which affects response time, failure rates, and security. Existing scheduling algorithms, designed around multiple parameters such as deadline, priority, arrival rate, and arrival pattern, can minimize execution time for high-priority applications. However, the difficulty lies in simultaneously minimizing overall application execution time while mitigating resource-depletion issues for low-priority applications. This paper introduces a cloud-fog-based computing architecture to tackle fog-node resource starvation, incorporating the concepts of joint probability, loss probability, and maximum entropy. The proposed model utilizes a probabilistic application-scheduling algorithm that considers priority and deadline and employs expected loss probability for task offloading. Additionally, a second algorithm addresses resource starvation by optimizing the task sequence for minimal response time and improved quality of service in a multi-queueing fog system. The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality-of-service improvement and a 99.75-267.68 msec reduction in response time through efficient resource allocation.
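A deadline-aware, loss-probability-driven offloading decision of the kind described here can be illustrated with a toy model. This sketch assumes exponentially distributed service times, an assumption of the illustration rather than necessarily the paper's model:

```python
import math

def miss_probability(mean_service_s, deadline_s):
    """P(completion time > deadline) under an exponential service-time
    assumption: P = exp(-deadline / mean)."""
    return math.exp(-deadline_s / mean_service_s)

def offload_decision(local_mean, fog_mean, upload_s, deadline_s):
    p_local = miss_probability(local_mean, deadline_s)
    # Offloading spends upload_s of the deadline budget before service starts.
    p_fog = 1.0 if upload_s >= deadline_s else \
        miss_probability(fog_mean, deadline_s - upload_s)
    return ("offload", p_fog) if p_fog < p_local else ("local", p_local)

print(offload_decision(local_mean=2.0, fog_mean=0.5, upload_s=0.3,
                       deadline_s=1.0))
# -> ('offload', ~0.247): the fog node is fast enough to absorb the upload cost
```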

Citations: 0
Employing RNN and Petri Nets to Secure Edge Computing Threats in Smart Cities
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-22 | DOI: 10.1007/s10723-023-09733-3

The Industrial Internet of Things (IIoT) revolution has led to the development of a system that enhances communication among a city's assets. This system relies on wireless connections to numerous resource-limited gadgets deployed throughout the urban landscape. However, this technology has exposed these networks to various harmful assaults, cyberattacks, and potential hacker threats, jeopardizing the security of wireless information transmission. In particular, unprotected IIoT networks act as vulnerable backdoor entry points for attacks. To address these challenges, this project proposes a comprehensive security structure that combines Extreme Learning Machine-based Replicator Neural Networks (ELM-RNN) with Deep Reinforcement Learning-based Deep Q-Networks (DRL-DQN) to safeguard against edge computing risks in intelligent cities. The proposed system starts by introducing a distributed authorization mechanism that employs an established trust paradigm to effectively regulate data flows within the network. Furthermore, a novel framework called Secure Trust-Aware Philosopher Privacy and Authentication (STAPPA), modeled using a Petri net, mitigates network privacy breaches and enhances data protection. The system employs the Garson algorithm alongside the ELM-based RNN to optimize network performance and strengthen anomaly-detection capabilities. This enables efficient determination of the shortest routes, accurate anomaly detection, and effective search optimization within the network environment. Through extensive simulation, the proposed security framework demonstrates remarkable detection and accuracy rates by leveraging the power of reinforcement learning.
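A replicator neural network, the building block behind ELM-RNN, flags anomalies by reconstruction error: it is trained to reproduce normal traffic records, so records it reconstructs poorly are suspicious. A minimal PyTorch sketch with hypothetical feature dimensions and threshold follows; the ELM training scheme and the Petri-net layer are omitted:

```python
import torch
import torch.nn as nn

class ReplicatorNet(nn.Module):
    """Trained to reproduce normal traffic records; a large reconstruction
    error flags a record as anomalous."""
    def __init__(self, n_features=20, hidden=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.Tanh(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x):
        return self.net(x)

def anomaly_scores(model, records):
    with torch.no_grad():
        recon = model(records)
    return ((records - recon) ** 2).mean(dim=1)  # per-record MSE

model = ReplicatorNet()
batch = torch.randn(5, 20)                  # hypothetical traffic features
flags = anomaly_scores(model, batch) > 0.5  # threshold is an assumption
```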

Citations: 0
Edge Computing Empowered Smart Healthcare: Monitoring and Diagnosis with Deep Learning Methods
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-21 | DOI: 10.1007/s10723-023-09726-2

Nowadays, data syncing before switchover and migration are two of the most pressing issues confronting cloud-based architecture. The requirement for a centrally managed IoT-based infrastructure limits scalability because of the security problems of cloud computing. The fundamental factor is that health systems, such as health monitoring, demand computational operations on large amounts of data, which makes these systems sensitive to device latency. Fog computing is a novel approach to increasing the effectiveness of cloud computing by placing the necessary resources close to end users. Existing fog computing approaches still have several drawbacks, including the tendency to optimize either reaction time or result correctness in isolation; managing both at once compromises system compatibility. Focusing on deep learning algorithms and automated monitoring, FETCH is a proposed framework that connects with edge computing devices. It provides a constructive framework for real-life healthcare systems, such as those treating heart disease and other conditions. The suggested fog-enabled cloud computing system uses FogBus and exhibits benefits in terms of power consumption, communication bandwidth, oscillation, delay, execution duration, and correctness.

Citations: 0
Dynamic Resource Management in MEC Powered by Edge Intelligence for Smart City Internet of Things
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-13 | DOI: 10.1007/s10723-024-09749-3
Xucheng Wan

The Internet of Things (IoT) has become an infrastructure that makes accurate and efficient smart cities possible. The Industry 4.0 era of intelligent production has made mobile edge computing (MEC) essential. In a smart city, computationally demanding tasks can be delegated from the MEC server to central cloud servers for processing. This paper develops an integrated optimization framework for task offloading and dynamic resource allocation to reduce the power usage of all Internet of Things (IoT) devices subject to delay limits and resource limitations. A Federated Learning FL-DDPG algorithm based on the Deep Deterministic Policy Gradient (DDPG) architecture is suggested for dynamic resource management in MEC networks. This research addresses the optimization of CPU frequencies, transmit power, and IoT-device offloading decisions for a multi-MEC-server, multi-IoT cellular network. A weighted average of the processing load on the central MEC server (PMS), the system's overall energy use, and the task-dropping expense is formulated as the optimization objective. Lyapunov optimization theory is used to formulate a stochastic optimization strategy that reduces the energy use of IoT devices in MEC networks and optimizes bandwidth assignment and transmit-power allocation. Additionally, the modeling studies demonstrate that, compared with other benchmark approaches, the suggested algorithm efficiently enhances system performance while consuming less energy.
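The DDPG backbone behind FL-DDPG, a deterministic actor over continuous controls (CPU frequency, transmit power) plus a Q-critic, can be sketched as below in PyTorch. The dimensions, the sigmoid action scaling, and the soft-update rate are illustrative assumptions; the federated aggregation and the Lyapunov queue terms are omitted:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a device/network state to normalized continuous actions."""
    def __init__(self, s_dim, a_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, a_dim), nn.Sigmoid())
    def forward(self, s):
        # actions in [0, 1], e.g. normalized CPU frequency and transmit power
        return self.net(s)

class Critic(nn.Module):
    """Q(s, a) for the continuous action chosen by the actor."""
    def __init__(self, s_dim, a_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=1))

def soft_update(target, online, tau=0.005):
    """Polyak averaging keeps the target networks slowly tracking."""
    for tp, op in zip(target.parameters(), online.parameters()):
        tp.data.mul_(1 - tau).add_(tau * op.data)

actor, critic = Actor(6, 2), Critic(6, 2)   # hypothetical dimensions
action = actor(torch.randn(1, 6))           # one decision step
```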

Citations: 0