Pub Date: 2024-02-27 | DOI: 10.1007/s10723-024-09754-6
Hao Guo, Weidong Li
In this paper, we study dynamic multi-resource maximin share fair allocation based on the elastic demands of users in a cloud computing system. In this problem, users do not stay in the computing system all the time and are assigned resources only while they remain in the system. To further improve resource utilization, the model allows users to dynamically select how their tasks are processed based on the resources allocated in each time slot. For this problem, we propose a mechanism called maximin share fairness with elastic demands (MMS-ED). We prove theoretically that the allocation returned by the mechanism is Lorenz-dominating, that it satisfies cumulative maximin share fairness, and that the mechanism is Pareto-efficient, proportional, and strategy-proof. In a specific setting, MMS-ED performs even better and also satisfies another desirable property, weighted envy-freeness. In addition, we design an algorithm that realizes the mechanism, conduct simulation experiments with Alibaba cluster traces, and analyze the impact of elastic demand and cumulative fairness from three perspectives. The experimental results show that MMS-ED outperforms three similar mechanisms in terms of resource utilization and user utility; moreover, introducing elastic demand and cumulative fairness can effectively improve resource utilization.
{"title":"Dynamic Multi-Resource Fair Allocation with Elastic Demands","authors":"Hao Guo, Weidong Li","doi":"10.1007/s10723-024-09754-6","DOIUrl":"https://doi.org/10.1007/s10723-024-09754-6","url":null,"abstract":"<p>In this paper, we study dynamic multi-resource maximin share fair allocation based on the elastic demands of users in a cloud computing system. In this problem, users do not stay in the computing system all the time. Users are assigned resources only if they stay in the system. To further improve the utilization of resources, the model in this paper allows users to dynamically select the method of processing tasks based on the resources allocated to each time slot. For this problem, we propose a mechanism called maximin share fairness with elastic demands (MMS-ED) in a cloud computing system. We prove theoretically that the allocation returned by the mechanism is a Lorenz-dominating allocation, that the allocation satisfies the cumulative maximin share fairness, and that the mechanism is Pareto efficiency, proportionality, and strategy-proofness. Within a specific setting, MMS-ED performs better, and it also satisfies another desirable property weighted envy-freeness. In addition, we designed an algorithm to realize this mechanism, conducted simulation experiments with Alibaba cluster traces, and we analyzed the impact from three perspectives of elastic demand and cumulative fairness. The experimental results show that the MMS-ED mechanism performs better than do the other three similar mechanisms in terms of resource utilization and user utility; moreover, the introduction of elastic demand and cumulative fairness can effectively improve resource utilization.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140004158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-26 | DOI: 10.1007/s10723-024-09741-x
Hulin Jin, Yong-Guk Kim, Zhiran Jin, Chunyang Fan, Yonglong Xu
The growing number of individual vehicles and intelligent transportation systems has accelerated the development of Internet of Vehicles (IoV) technologies. The IoV is a highly interactive network that carries data on vehicle locations, speeds, routes, and other attributes. Task offloading is introduced to address the fact that current task scheduling models and strategies are largely simplistic and do not consider a reasonable distribution of tasks, which results in poor offloading completion rates. This work tackles the joint task offloading problem with a Distributed Deep Reinforcement Learning (DDRL)-based Genetic Optimization Algorithm (GOA). A system utility optimization model is first established by separating the interaction and computation models. DDRL-GOA then solves this model to produce the best task offloading policy, increasing job completion rates by reworking the complexity design and providing global best-case guarantees. We also formulate joint task offloading, load distribution, and resource allocation as integer problems to lower system cost. Finally, empirical studies validate the proposed technique in realistic scenarios. The experimental results show that, in addition to high convergence efficiency, the proposed approach achieves a substantially lower system cost than current methods.
{"title":"Joint Task Offloading Based on Distributed Deep Reinforcement Learning-Based Genetic Optimization Algorithm for Internet of Vehicles","authors":"Hulin Jin, Yong-Guk Kim, Zhiran Jin, Chunyang Fan, Yonglong Xu","doi":"10.1007/s10723-024-09741-x","DOIUrl":"https://doi.org/10.1007/s10723-024-09741-x","url":null,"abstract":"<p>The growing number of individual vehicles and intelligent transportation systems have accelerated the development of Internet of Vehicles (IoV) technologies. The Internet of Vehicles (IoV) refers to a highly interactive network containing data regarding places, speeds, routes, and other aspects of vehicles. Task offloading was implemented to solve the issue that the current task scheduling models and tactics are primarily simplistic and do not consider the acceptable distribution of tasks, which results in a poor unloading completion rate. This work evaluates the Joint Task Offloading problem by Distributed Deep Reinforcement Learning (DDRL)-Based Genetic Optimization Algorithm (GOA). A system’s utility optimisation model is initially accomplished objectively using divisions between interaction and computation models. DDRL-GOA resolves the issue to produce the best task offloading method. The research increased job completion rates by modifying the complexity design and universal best-case scenario assurances using DDRL-GOA. Finally, empirical research is performed to validate the proposed technique in scenario development. We also construct joint task offloading, load distribution, and resource allocation to lower system costs as integer concerns. In addition to having a high convergence efficiency, the experimental results show that the proposed approach has a substantially lower system cost when compared to current methods.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139969509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-24 | DOI: 10.1007/s10723-024-09751-9
Amir Javadpour, Arun Kumar Sangaiah, Weizhe Zhang, Ankit Vidyarthi, HamidReza Ahmadi
This study presents an environmentally friendly mechanism for task distribution designed explicitly for blockchain Proof of Authority (POA) consensus. This approach facilitates the selection of virtual machines for tasks such as data processing, transaction verification, and adding new blocks to the blockchain. Given the current lack of effective methods for integrating POA blockchain into the Cloud Industrial Internet of Things (CIIoT) due to their inefficiency and low throughput, we propose a novel algorithm that employs the Dynamic Voltage and Frequency Scaling (DVFS) technique, replacing the periodic transaction authentication process among validator candidates. Managing computing power consumption becomes a critical concern, especially within the Internet of Things ecosystem, where device power is constrained and transaction scalability is crucial. Virtual machines must validate transactions (tasks) within specific time frames and deadlines. The DVFS technique efficiently reduces power consumption by intelligently scheduling and allocating tasks to virtual machines. Furthermore, we leverage artificial intelligence and neural networks to match tasks with suitable virtual machines. The simulation results demonstrate that our proposed approach harnesses migration and DVFS strategies to optimize virtual machine utilization, resulting in decreased energy and power consumption compared to non-DVFS methods. This achievement marks a significant stride towards seamlessly integrating blockchain and IoT, establishing an ecologically sustainable network. Our approach offers additional benefits, including decentralization, enhanced data quality, and heightened security. We analyze simulation runtime and energy consumption in a comprehensive evaluation against existing techniques such as WPEG, IRMBBC, and BEMEC. The findings underscore the efficiency of our technique (LBDVFSb) across both criteria.
Title: Decentralized AI-Based Task Distribution on Blockchain for Cloud Industrial Internet of Things
Pub Date: 2024-02-22 | DOI: 10.1007/s10723-024-09753-7
Abstract
Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in healthcare, education, and automation, challenges persist, particularly the impact of multi-hop public networks on data upload time, which affects response time, failure rates, and security. Existing scheduling algorithms, designed around parameters such as deadline, priority, arrival rate, and arrival pattern, can minimize execution time for high-priority applications. The difficulty, however, lies in simultaneously minimizing overall application execution time while mitigating resource depletion for low-priority applications. This paper introduces a cloud-fog-based computing architecture that tackles fog node resource starvation by incorporating joint probability, loss probability, and maximum entropy concepts. The proposed model uses a probabilistic application scheduling algorithm that considers priority and deadline and employs the expected loss probability for task offloading decisions. A second algorithm addresses resource starvation, optimizing the task sequence for minimal response time and improved quality of service in a multi-queueing fog system. The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality-of-service improvement and a 99.75-267.68 ms reduction in response time through efficient resource allocation.
{"title":"A Probabilistic Deadline-aware Application Offloading in a Multi-Queueing Fog System: A Max Entropy Framework","authors":"","doi":"10.1007/s10723-024-09753-7","DOIUrl":"https://doi.org/10.1007/s10723-024-09753-7","url":null,"abstract":"<h3>Abstract</h3> <p>Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in healthcare, education, and automation domains, challenges persist, particularly in addressing the impact of multi-hopping public networks on data upload time, affecting response time, failure rates, and security. Existing scheduling algorithms, designed for multiple parameters like deadline, priority, rate of arrival, and arrival pattern, can minimize execution time for high-priority applications. However, the difficulty lies in simultaneously minimizing overall application execution time while mitigating resource depletion issues for low-priority applications. This paper introduces a cloud-fog-based computing architecture to tackle fog node resource starvation, incorporating joint probability, loss probability, and maximum entropy concepts. The proposed model utilizes a probabilistic application scheduling algorithm, considering priority and deadline and employing expected loss probability for task offloading. Additionally, a second algorithm focuses on resource starvation, optimizing task sequence for minimal response time and improved quality of service in a multi-Queueing fog system. The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality of service improvement and a 99.75-267.68 msec reduction in response time through efficient resource allocation.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-22 | DOI: 10.1007/s10723-023-09733-3
Abstract
The Industrial Internet of Things (IIoT) revolution has led to the development of systems that enhance communication among a city's assets. These systems rely on wireless connections to numerous resource-limited devices deployed throughout the urban landscape. However, this technology exposes the networks to various harmful assaults, cyberattacks, and potential hacker threats, jeopardizing the security of wireless information transmission. In particular, unprotected IIoT networks act as vulnerable backdoor entry points for attacks. To address these challenges, this project proposes a comprehensive security structure that combines Extreme Learning Machine-based Replicator Neural Networks (ELM-RNN) with Deep Reinforcement Learning-based Deep Q-Networks (DRL-DQN) to safeguard against edge computing risks in smart cities. The proposed system first introduces a distributed authorization mechanism that employs an established trust paradigm to regulate data flows within the network. Furthermore, a novel framework called Secure Trust-Aware Philosopher Privacy and Authentication (STAPPA), modeled using Petri nets, mitigates network privacy breaches and enhances data protection. The system employs the Garson algorithm alongside the ELM-based RNN to optimize network performance and strengthen anomaly detection, enabling efficient shortest-route determination, accurate anomaly detection, and effective search optimization within the network environment. Extensive simulations show that the proposed security framework achieves remarkable detection and accuracy rates by leveraging the power of reinforcement learning.
{"title":"Employing RNN and Petri Nets to Secure Edge Computing Threats in Smart Cities","authors":"","doi":"10.1007/s10723-023-09733-3","DOIUrl":"https://doi.org/10.1007/s10723-023-09733-3","url":null,"abstract":"<h3>Abstract</h3> <p>The Industrial Internet of Things (IIoT) revolution has led to the development a potential system that enhances communication among a city's assets. This system relies on wireless connections to numerous limited gadgets deployed throughout the urban landscape. However, technology has exposed these networks to various harmful assaults, cyberattacks, and potential hacker threats, jeopardizing the security of wireless information transmission. Specifically, unprotected IIoT networks act as vulnerable backdoor entry points for potential attacks. To address these challenges, this project proposes a comprehensive security structure that combines Extreme Learning Machines based Replicator Neural Networks (ELM-RNN) with Deep Reinforcement Learning based Deep Q-Networks (DRL-DQN) to safeguard against edge computing risks in intelligent cities. The proposed system starts by introducing a distributed authorization mechanism that employs an established trust paradigm to effectively regulate data flows within the network. Furthermore, a novel framework called Secure Trust-Aware Philosopher Privacy and Authentication (STAPPA), modeled using Petri Net, mitigates network privacy breaches and enhances data protection. The system employs the Garson algorithm alongside the ELM-based RNN to optimize network performance and strengthen anomaly detection capabilities. This enables efficient determination of the shortest routes, accurate anomaly detection, and effective search optimization within the network environment. Through extensive simulation, the proposed security framework demonstrates remarkable detection and accuracy rates by leveraging the power of reinforcement learning.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-21 | DOI: 10.1007/s10723-023-09726-2
Abstract
Data syncing before switchover and migration are two of the most pressing issues confronting cloud-based architectures. The requirement for a centrally managed IoT-based infrastructure limits scalability because of the security problems of cloud computing. The fundamental factor is that health systems, such as health monitoring, demand computation over large amounts of data, which makes these systems sensitive to device latency. Fog computing is a novel approach that increases the effectiveness of cloud computing by making the necessary resources available close to end users. Existing fog computing approaches still have several drawbacks: they tend to focus on either reaction time or result correctness, and managing both at once compromises system compatibility. FETCH, the proposed framework, focuses on deep learning algorithms and automated monitoring and connects with edge computing devices. It provides a constructive framework for real-life healthcare systems, such as those treating heart disease and other conditions. The suggested fog-enabled cloud computing system uses FogBus and exhibits benefits in terms of power consumption, communication bandwidth, oscillation, delay, execution duration, and correctness.
{"title":"Edge Computing Empowered Smart Healthcare: Monitoring and Diagnosis with Deep Learning Methods","authors":"","doi":"10.1007/s10723-023-09726-2","DOIUrl":"https://doi.org/10.1007/s10723-023-09726-2","url":null,"abstract":"<h3>Abstract</h3> <p>Nowadays, data syncing before switchover and migration are two of the most pressing issues confronting cloud-based architecture. The requirement for a centrally managed IoT-based infrastructure has limited scalability due to security problems with cloud computing. The fundamental factor is that health systems, such as health monitoring, etc., demand computational operations on large amounts of data, which leads to the sensitivity of device latency emerging during these systems. Fog computing is a novel approach to increasing the effectiveness of cloud computing by allowing the use of necessary resources and close to end users. Existing fog computing approaches still have several drawbacks, including the tendency to either overestimate reaction time or consider result correctness, but managing both at once compromises system compatibility. To focus on deep learning algorithms and automated monitoring, FETCH is a proposed framework that connects with edge computing devices. It provides a constructive framework for real-life healthcare systems, such as those treating heart disease and other conditions. The suggested fog-enabled cloud computing system uses FogBus, which exhibits benefits in terms of power consumption, communication bandwidth, oscillation, delay, execution duration, and correctness.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139918699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-13 | DOI: 10.1007/s10723-024-09749-3
Xucheng Wan
The Internet of Things (IoT) has become an infrastructure that makes accurate and efficient smart cities possible, and the Industry 4.0 era of intelligent production has made mobile edge computing (MEC) essential. In a smart city, computationally demanding tasks can be delegated from the MEC server to central cloud servers for processing. This paper develops an integrated optimization framework for task offloading and dynamic resource allocation that reduces the power usage of all IoT devices subject to delay limits and resource constraints. A federated learning algorithm based on the Deep Deterministic Policy Gradient architecture (FL-DDPG) is proposed for dynamic resource management in MEC networks. The research addresses the joint optimization of CPU frequencies, transmit power, and offloading decisions of IoT devices in cellular networks with multiple MEC servers and many IoT devices. The optimization objective is a weighted average of the processing load on the central MEC server (PMS), the system's overall energy use, and the task-dropping cost. Lyapunov optimization theory is used to formulate a stochastic optimization strategy that reduces the energy use of IoT devices in MEC networks while assigning bandwidth and distributing transmit power. The modeling studies demonstrate that, compared with benchmark approaches, the proposed algorithm efficiently enhances system performance while consuming less energy.
{"title":"Dynamic Resource Management in MEC Powered by Edge Intelligence for Smart City Internet of Things","authors":"Xucheng Wan","doi":"10.1007/s10723-024-09749-3","DOIUrl":"https://doi.org/10.1007/s10723-024-09749-3","url":null,"abstract":"<p>The Internet of Things (IoT) has become an infrastructure that makes smart cities possible. is both accurate and efficient. The intelligent production industry 4.0 period has made mobile edge computing (MEC) essential. Computationally demanding tasks can be delegated from the MEC server to the central cloud servers for processing in a smart city. This paper develops the integrated optimization framework for offloading tasks and dynamic resource allocation to reduce the power usage of all Internet of Things (IoT) gadgets subjected to delay limits and resource limitations. A Federated Learning FL-DDPG algorithm based on the Deep Deterministic Policy Gradient (DDPG) architecture is suggested for dynamic resource management in MEC networks. This research addresses the optimization issues for the CPU frequencies, transmit power, and IoT device offloading decisions for a multi-mobile edge computing (MEC) server and multi-IoT cellular networks. A weighted average of the processing load on the central MEC server (PMS), the system’s overall energy use, and the task-dropping expense is calculated as an optimization issue. The Lyapunov optimization theory formulates a random optimization strategy to reduce the energy use of IoT devices in MEC networks and reduce bandwidth assignment and transmitting power distribution. Additionally, the modeling studies demonstrate that, compared to other benchmark approaches, the suggested algorithm efficiently enhances system performance while consuming less energy.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-12 | DOI: 10.1007/s10723-024-09744-8
Sheng Chai, Jimmy Huang
Conventional detection techniques for intelligent devices rely primarily on deep learning algorithms, which, despite their high precision, are hindered by significant computing power and energy requirements. This work proposes a novel solution to these constraints using mobile edge computing (MEC). We present the Dependent Task-Offloading technique (DTOS), a deep reinforcement learning-based technique for optimizing task offloading to numerous heterogeneous edge servers in intelligent prosthesis applications. By expressing the task offloading problem as a Markov decision process, DTOS addresses the dual challenge of lowering network service latency and power utilization, employing a weighted-sum optimization method to find the best policy. The technique uses parallel deep neural networks (DNNs), which not only generate candidate offloading decisions but also cache the most successful options for later iterations. Furthermore, DTOS updates the DNN parameters using prioritized experience replay, which improves learning by focusing on valuable experiences. Applying DTOS in a real-world MEC scenario, where a deep learning-based movement intent detection algorithm is deployed on intelligent prostheses, demonstrates its applicability and effectiveness. The experimental results show that DTOS consistently makes optimal decisions in work offloading and planning, demonstrating its potential to significantly improve the operational efficiency of intelligent prostheses. The study thus combines deep reinforcement learning with MEC, advancing intelligent prostheses through optimal task offloading and reduced resource usage.
{"title":"Dependent Task Scheduling Using Parallel Deep Neural Networks in Mobile Edge Computing","authors":"Sheng Chai, Jimmy Huang","doi":"10.1007/s10723-024-09744-8","DOIUrl":"https://doi.org/10.1007/s10723-024-09744-8","url":null,"abstract":"<p>Conventional detection techniques aimed at intelligent devices rely primarily on deep learning algorithms, which, despite their high precision, are hindered by significant computer power and energy requirements. This work proposes a novel solution to these constraints using mobile edge computing (MEC). We present the Dependent Task-Offloading technique (DTOS), a deep reinforcement learning-based technique for optimizing task offloading to numerous heterogeneous edge servers in intelligent prosthesis applications. By expressing the task offloading problem as a Markov decision process, DTOS addresses the dual challenge of lowering network service latency and power utilisation. DTOS employs a weighted sum optimisation method in this approach to find the best policy. The technique uses parallel deep neural networks (DNNs), which not only create offloading possibilities but also cache the most successful options for further iterations. Furthermore, the DTOS modifies DNN variables using a prioritized experience replay method, which improves learning by focusing on valuable experiences. The use of DTOS in a real-world MEC scenario, where a deep learning-based movement intent detection algorithm is deployed on intelligent prostheses, demonstrates its applicability and effectiveness. The experimental results show that DTOS consistently makes optimal decisions in work offloading and planning, demonstrating its potential to improve the operational efficiency of intelligent prostheses significantly. Thus, the study introduces a novel approach that combines the characteristics of deep reinforcement learning with MEC, demonstrating a substantial development in the field of intelligent prostheses through optimal task offloading and reduced resource usage.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-12 | DOI: 10.1007/s10723-024-09748-4
Jie Zhao, Ahmed M. El-Sherbeeny
With the rapid development of technology, the Internet of Vehicles (IoV) has become increasingly important. However, as the number of vehicles on highways increases, ensuring reliable communication between them has become a significant challenge. To address this issue, this paper proposes a novel approach that combines Non-Orthogonal Multiple Access (NOMA) with a time-optimized multitask offloading model based on Optimal Stopping Theory (OST) principles. NOMA-OST is a promising technology for handling the high volume of multiple access and the need for reliable communication in the IoV. A NOMA-OST-based IoV system is proposed to meet Vehicle-to-Vehicle (V2V) communication requirements. The approach jointly optimizes task offloading and resource allocation for multiple users, tasks, and servers. NOMA enables efficient resource sharing by accommodating multiple devices, whereas OST ensures timely and intelligent task offloading decisions, resulting in improved reliability and efficiency of V2V communication within the IoV. A low-complexity sub-optimal matching approach is suggested for sub-channel allocation to increase the effectiveness of offloading. Simulation results show that NOMA with OST significantly improves the system's energy efficiency (EE) and reduces computation time. The approach also enhances the effectiveness of task offloading and resource allocation, leading to better overall system performance. Compared with traditional orthogonal multiple access methods, the performance of NOMA with OST under V2V communication requirements in the IoV is significantly improved. Overall, NOMA with OST can meet the high-reliability requirements of V2V communication, improving system performance and energy efficiency while reducing computation time, which makes it a valuable technology for IoV applications.
{"title":"Joint Task Offloading and Multi-Task Offloading Based on NOMA Enhanced Internet of Vehicles in Edge Computing","authors":"Jie Zhao, Ahmed M. El-Sherbeeny","doi":"10.1007/s10723-024-09748-4","DOIUrl":"https://doi.org/10.1007/s10723-024-09748-4","url":null,"abstract":"<p>With the rapid development of technology, the Internet of vehicles (IoV) has become increasingly important. However, as the number of vehicles on highways increases, ensuring reliable communication between them has become a significant challenge. To address this issue, this paper proposes a novel approach that combines Non-Orthogonal Multiple Access (NOMA) with a time-optimized multitask offloading model based on Optimal Stopping Theory (OST) principles. NOMA-OST is a promising technology that can address the high volume of multiple access and the need for reliable communication in IoV. A NOMA-OST-based IoV system is proposed to meet the Vehicle-to-Vehicle (V2V) communication requirements. This approach optimizes joint task offloading and resource allocation for multiple users, tasks, and servers. NOMA enables efficient resource sharing by accommodating multiple devices, whereas OST ensures timely and intelligent task offloading decisions, resulting in improved reliability and efficiency in V2V communication within IoV, making it a highly innovative and technically robust solution. It suggests a low-complexity sub-optimal matching approach for sub-channel allocation to increase the effectiveness of offloading. Simulation results show that NOMA with OST significantly improves the system’s energy efficiency (EE) and reduces computation time. The approach also enhances the effectiveness of task offloading and resource allocation, leading to better overall system performance. The performance of NOMA with OST under V2V communication requirements in IoV is significantly improved compared to traditional orthogonal multiaccess methods. Overall, NOMA with OST is a promising technology that can address the high reliability of V2V communication requirements in IoV. It can improve system performance, and energy efficiency and reduce computation time, making it a valuable technology for IoV applications.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-09 | DOI: 10.1007/s10723-024-09742-w
Jianjia Liu, Xin Yang, Tiannan Liao, Yong Hang
The Internet of Things (IoT) is driving a significant transformation in the healthcare industry by improving patient care while reducing treatment costs. The main aim of this research is to monitor COVID-19 patients and report health issues immediately using IoT, with the collected data analyzed by a deep learning model. Advances in sensor and mobile technologies have enabled IoT-based healthcare systems, which are more preventive than traditional ones. This paper develops an efficient real-time IoT-based COVID-19 monitoring and prediction system using a deep learning model. By collecting and analyzing symptomatic patient data, suspected COVID-19 cases are predicted at an early stage. The effective parameters are selected using the Modified Chicken Swarm Optimization (MCSO) approach by mining the health parameters gathered from the sensors. The presence of COVID-19 is then predicted from the selected features using a hybrid deep learning model that combines convolution and graph LSTM (ConvGLSTM). The process includes four stages: data collection, data analysis (feature selection), the diagnostic system (DL model), and the cloud system (storage). The developed model is evaluated on a dataset from Srinagar using accuracy, precision, recall, F1 score, RMSE, and AUC. The results show that the proposed model is effective and superior to traditional approaches for the early identification of COVID-19.
{"title":"An IoT-based Covid-19 Healthcare Monitoring and Prediction Using Deep Learning Methods","authors":"Jianjia Liu, Xin Yang, Tiannan Liao, Yong Hang","doi":"10.1007/s10723-024-09742-w","DOIUrl":"https://doi.org/10.1007/s10723-024-09742-w","url":null,"abstract":"<p>The Internet of Things (IoT) is developing a more significant transformation in the healthcare industry by improving patient care with reduced cost of treatments. Main aim of this research is to monitor the Covid-19 patients and report the health issues immediately using IoT. Collected data is analyzed using deep learning model. The technological advancement of sensor and mobile technologies came up with IoT-based healthcare systems. These systems are more preventive than the traditional healthcare systems. This paper developed an efficient real-time IoT-based COVID-19 monitoring and prediction system using a deep learning model. By collecting symptomatic patient data and analyzing it, the COVID-19 suspects are predicted in the early stages in a better way. The effective parameters are selected using the Modified Chicken Swarm optimization (MCSO) approach by mining the health parameters gathered from the sensors. The COVID-19 presence is computed using the hybrid Deep learning model called Convolution and graph LSTM using the desired features. (ConvGLSTM). This process includes four stages such as data collection, data analysis (feature selection), diagnostic system (DL model), and the cloud system (Storage). The developed model is experimented with using the dataset from Srinagar based on parameters such as accuracy, precision, recall, F1 score, RMSE, and AUC. Based on the outcome, the proposed model is effective and superior to the traditional approaches to the early identification of COVID-19.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}