On-Chain and Off-Chain Data Management for Blockchain-Internet of Things: A Multi-Agent Deep Reinforcement Learning Approach
Pub Date: 2024-01-20 | DOI: 10.1007/s10723-023-09739-x
Y. P. Tsang, C. K. M. Lee, Kening Zhang, C. H. Wu, W. H. Ip
The emergence of blockchain technology has led applications in the Internet of Things (IoT) and cyber-physical systems to increasingly hybridise cloud storage with distributed ledger technology, complicating data management in decentralised applications (DApps). Because blockchains handle large volumes of data inefficiently, effective on-chain and off-chain data management across peer-to-peer networks and cloud storage has drawn considerable attention. Space reservation is a cost-effective way to manage cloud storage, in contrast to requesting additional space on demand in real time. Furthermore, off-chain data replication in the peer-to-peer network can eliminate single points of failure in DApps. However, recent research has rarely discussed optimising on-chain and off-chain data management in the blockchain-enabled IoT (BIoT) environment. In this study, the BIoT environment is modelled with cloud storage and blockchain orchestrated over the peer-to-peer network. The asynchronous advantage actor-critic algorithm is applied to train intelligent agents towards an optimal policy for data packing, space reservation, and data replication, yielding an intelligent data management strategy. The experimental analysis reveals that the proposed scheme converges rapidly and outperforms other typical schemes in average total reward, enhancing the scalability, security and reliability of blockchain-IoT networks.
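To make the mechanism above concrete, the following minimal sketch shows a single advantage actor-critic worker choosing among three illustrative data-management actions (pack on-chain, store in reserved cloud space, replicate to peers). The state features, reward model, and learning rates are assumptions for illustration, not the authors' design; A3C additionally runs several such workers asynchronously against shared parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 4, 3               # 0 = pack on-chain, 1 = reserved cloud space, 2 = replicate to peers
theta = np.zeros((n_features, n_actions))  # actor (policy) weights
w = np.zeros(n_features)                   # critic (value-function) weights
alpha_pi, alpha_v, gamma = 0.01, 0.05, 0.99

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def env_step(state, action):
    """Toy BIoT environment: reward trades off storage cost against reliability (assumed)."""
    cost = [0.8, 0.3, 0.5][action] * state[0]        # state[0] stands in for data volume
    reliability = [0.9, 0.6, 0.95][action]
    return reliability - cost, rng.random(n_features)

state = rng.random(n_features)
for step in range(1000):
    probs = softmax(state @ theta)
    action = rng.choice(n_actions, p=probs)
    reward, next_state = env_step(state, action)
    advantage = reward + gamma * (next_state @ w) - state @ w   # one-step advantage estimate
    w += alpha_v * advantage * state                            # critic: TD update
    grad_log_pi = -np.outer(state, probs)                       # actor: policy-gradient step
    grad_log_pi[:, action] += state
    theta += alpha_pi * advantage * grad_log_pi
    state = next_state
```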
{"title":"On-Chain and Off-Chain Data Management for Blockchain-Internet of Things: A Multi-Agent Deep Reinforcement Learning Approach","authors":"Y. P. Tsang, C. K. M. Lee, Kening Zhang, C. H. Wu, W. H. Ip","doi":"10.1007/s10723-023-09739-x","DOIUrl":"https://doi.org/10.1007/s10723-023-09739-x","url":null,"abstract":"<p>The emergence of blockchain technology has seen applications increasingly hybridise cloud storage and distributed ledger technology in the Internet of Things (IoT) and cyber-physical systems, complicating data management in decentralised applications (DApps). Because it is inefficient for blockchain technology to handle large amounts of data, effective on-chain and off-chain data management in peer-to-peer networks and cloud storage has drawn considerable attention. Space reservation is a cost-effective approach to managing cloud storage effectively, contrasting with the demand for additional space in real-time. Furthermore, off-chain data replication in the peer-to-peer network can eliminate single points of failure of DApps. However, recent research has rarely discussed optimising on-chain and off-chain data management in the blockchain-enabled IoT (BIoT) environment. In this study, the BIoT environment is modelled, with cloud storage and blockchain orchestrated over the peer-to-peer network. The asynchronous advantage actor-critic algorithm is applied to exploit intelligent agents with the optimal policy for data packing, space reservation, and data replication to achieve an intelligent data management strategy. The experimental analysis reveals that the proposed scheme demonstrates rapid convergence and superior performance in terms of average total reward compared with other typical schemes, resulting in enhanced scalability, security and reliability of blockchain-IoT networks, leading to an intelligent data management strategy.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139509263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D Lidar Target Detection Method at the Edge for the Cloud Continuum
Pub Date: 2024-01-19 | DOI: 10.1007/s10723-023-09736-0
Xuemei Li, Xuelian Liu, Da Xie, Chong Chen
In the Internet of Things, machine learning at the edge of the cloud continuum is developing rapidly, providing more convenient services for developers. This paper proposes a lidar target detection method for the cloud continuum based on a scene density-awareness network. The density-awareness network architecture is designed, and a context column feature network is proposed. The BEV density attention feature network is built by cascading the density feature map with a spatial attention mechanism and is then connected with the BEV column feature network to generate the ablation BEV map. A multi-head detector is designed to regress the object center point, scale and direction, and a loss function is used for active supervision. The experiments are conducted on Alibaba Cloud services. On the KITTI validation dataset, 3D and BEV objects are detected and evaluated for three object classes. The results show that most AP values of the proposed density-awareness model are higher than those of other methods, with a detection time of 0.09 s, meeting the accuracy and real-time requirements of vehicle-borne lidar target detection.
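As a rough illustration of the density-aware BEV features described above, the sketch below rasterises a synthetic point cloud into a bird's-eye-view grid, uses per-cell point counts as a density map, and applies density-derived attention weights to a simple column feature. The grid size, feature choice, and squashing function are assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform([-40, -40, -2], [40, 40, 2], size=(5000, 3))  # synthetic lidar points (x, y, z)

grid, cell = 200, 0.4                                              # 200 x 200 BEV grid over [-40, 40] m
ix = np.clip(((points[:, 0] + 40) / cell).astype(int), 0, grid - 1)
iy = np.clip(((points[:, 1] + 40) / cell).astype(int), 0, grid - 1)

density = np.zeros((grid, grid))                                   # points per BEV cell
np.add.at(density, (ix, iy), 1.0)

height_sum = np.zeros((grid, grid))                                # simple column feature: mean point height
np.add.at(height_sum, (ix, iy), points[:, 2])
mean_height = np.divide(height_sum, density,
                        out=np.zeros_like(height_sum), where=density > 0)

attention = 1.0 / (1.0 + np.exp(-np.log1p(density)))               # density-derived spatial attention in [0.5, 1)
attended_bev = attention * mean_height                             # feature map for a 2D detection head
print(attended_bev.shape)                                          # (200, 200)
```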
{"title":"3D Lidar Target Detection Method at the Edge for the Cloud Continuum","authors":"Xuemei Li, Xuelian Liu, Da Xie, Chong Chen","doi":"10.1007/s10723-023-09736-0","DOIUrl":"https://doi.org/10.1007/s10723-023-09736-0","url":null,"abstract":"<p>In the internet of things, machine learning at the edge of cloud continuum is developing rapidly, providing more convenient services for design developers. The paper proposes a lidar target detection method based on scene density-awareness network for cloud continuum. The density-awareness network architecture is designed, and the context column feature network is proposed. The BEV density attention feature network is designed by cascading the density feature map with the spatial attention mechanism, and then connected with the BEV column feature network to generate the ablation BEV map. Multi-head detector is designed to regress the object center point, scale size and direction, and loss function is used for active supervision. The experiment is conducted on Alibaba Cloud services. On the validation dataset of KITTI, the 3D objects and BEV objects are detected and evaluated for three types of objects. The results show that most of the AP values of the density-awareness model proposed in this paper are higher than other methods, and the detection time is 0.09 s, which can meet the requirements of high accuracy and real-time of vehicle-borne lidar target detection.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139509072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI-Driven Task Scheduling Strategy with Blockchain Integration for Edge Computing
Pub Date: 2024-01-19 | DOI: 10.1007/s10723-024-09743-9
Avishek Sinha, Samayveer Singh, Harsh K. Verma
In recent times, edge computing has arisen as a highly promising paradigm aimed at facilitating resource-intensive Internet of Things (IoT) applications by offering low-latency services. However, the constrained computational capabilities of the IoT nodes present considerable obstacles when it comes to efficient task-scheduling applications. In this paper, a nature-inspired coati optimization-based energy-aware task scheduling (CO-ETS) approach is proposed to address the challenge of efficiently assigning tasks to available edge devices. The proposed work incorporates a fitness function that effectively enhances task assignment optimization, leading to improved system efficiency, reduced power consumption, and enhanced system reliability. Moreover, we integrate blockchain with AI-driven task scheduling to fortify security, protect user privacy, and optimize edge computing in IoT-based environments. The blockchain-based approach ensures a secure and trusted decentralized identity management and reputation system for IoT edge networks. To validate the effectiveness of the proposed CO-ETS approach, we conduct a comparative analysis against state-of-the-art methods by considering metrics such as makespan, CPU execution time, energy consumption, and mean wait time. The proposed approach offers promising solutions to optimize task allocation, enhance system performance, and ensure secure and privacy-preserving operations in edge computing environments.
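The sketch below illustrates the kind of fitness-driven search such a scheduler performs: an assignment of tasks to edge devices is mutated repeatedly and kept whenever a weighted combination of makespan and energy improves. A plain mutation-and-keep loop stands in for the coati optimizer here, and all workloads, speeds, and power figures are made-up values for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_tasks, n_devices = 30, 5
task_len = rng.uniform(1, 10, n_tasks)       # task workloads (arbitrary units)
speed = rng.uniform(1, 3, n_devices)         # device processing speeds
power = rng.uniform(0.5, 2.0, n_devices)     # power draw per unit busy time

def fitness(assign, w_time=0.7, w_energy=0.3):
    """Weighted sum of makespan and total energy for a task-to-device assignment."""
    busy = np.array([task_len[assign == d].sum() / speed[d] for d in range(n_devices)])
    return w_time * busy.max() + w_energy * (busy * power).sum()

best = rng.integers(n_devices, size=n_tasks)
best_fit = fitness(best)
for _ in range(2000):                        # mutate one task's placement, keep improvements
    cand = best.copy()
    cand[rng.integers(n_tasks)] = rng.integers(n_devices)
    if (f := fitness(cand)) < best_fit:
        best, best_fit = cand, f
print(round(best_fit, 3))
```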
{"title":"AI-Driven Task Scheduling Strategy with Blockchain Integration for Edge Computing","authors":"Avishek Sinha, Samayveer Singh, Harsh K. Verma","doi":"10.1007/s10723-024-09743-9","DOIUrl":"https://doi.org/10.1007/s10723-024-09743-9","url":null,"abstract":"<p>In recent times, edge computing has arisen as a highly promising paradigm aimed at facilitating resource-intensive Internet of Things (IoT) applications by offering low-latency services. However, the constrained computational capabilities of the IoT nodes present considerable obstacles when it comes to efficient task-scheduling applications. In this paper, a nature-inspired coati optimization-based energy-aware task scheduling (CO-ETS) approach is proposed to address the challenge of efficiently assigning tasks to available edge devices. The proposed work incorporates a fitness function that effectively enhances task assignment optimization, leading to improved system efficiency, reduced power consumption, and enhanced system reliability. Moreover, we integrate blockchain with AI-driven task scheduling to fortify security, protect user privacy, and optimize edge computing in IoT-based environments. The blockchain-based approach ensures a secure and trusted decentralized identity management and reputation system for IoT edge networks. To validate the effectiveness of the proposed CO-ETS approach, we conduct a comparative analysis against state-of-the-art methods by considering metrics such as makespan, CPU execution time, energy consumption, and mean wait time. The proposed approach offers promising solutions to optimize task allocation, enhance system performance, and ensure secure and privacy-preserving operations in edge computing environments.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139509189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing Accounting Informatization through Simultaneous Multi-Tasking across Edge and Cloud Devices using Hybrid Machine Learning Models
Pub Date: 2024-01-18 | DOI: 10.1007/s10723-023-09735-1
Xiaofeng Yang
Accounting informatization is a crucial component of enterprise informatization, significantly impacting operational efficiency in accounting and finance. Advances in information technology have introduced automation techniques that accelerate the processing of accounting information cost-effectively. Integrating artificial intelligence, cloud computing, and edge computing is pivotal in streamlining and optimizing these processes. Traditionally, accounting informatization relied on system servers and local storage for data processing. However, the era of big data necessitates a shift to cloud computing frameworks for efficient data storage and processing. Despite the advantages of cloud storage, concerns arise regarding data security and the substantial data transactions between the cloud and source devices. To address these challenges, this research proposes a novel algorithm, Heterogeneous Distributed Deep Learning with Data Offloading (DDLO). DDLO leverages the synergy between edge devices and cloud computing to enhance data processing. Edge computing enables rapid processing of large volumes of data at or near the data collection sites, optimizing day-to-day operations for enterprises. Furthermore, machine learning algorithms at edge devices improve data processing efficiency, augmenting the overall performance of the computing environment. The proposed DDLO algorithm fosters a hybrid machine learning approach for computing joint tasks and multi-tasking in accounting informatization. It enables dynamic resource allocation, allowing selected data or model updates to be offloaded to the cloud for complex tasks. The algorithm's performance is rigorously evaluated using key metrics, including computing time, offloading time, accuracy, and cost. By capitalizing on the strengths of edge computing, cloud computing, and artificial intelligence, the DDLO algorithm effectively addresses accounting informatization challenges. It empowers enterprises to process vast amounts of accounting data efficiently and securely while improving overall operational efficiency. In terms of time, task offloading with DDLO on the TeraSort workload takes 33 ms, less than the other techniques considered.
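The core offloading decision that such a scheme automates can be written down in a few lines: keep a job at the edge when local processing is faster, otherwise ship it to the cloud. The throughput, speed, and latency parameters below are assumptions for illustration only, not figures from the paper.

```python
def offload_to_cloud(data_mb, gigacycles, edge_ghz=1.0, cloud_ghz=10.0,
                     uplink_mbps=100, cloud_rtt_s=0.05):
    """Return (offload?, edge time, cloud time) for one job under assumed parameters."""
    edge_time = gigacycles / edge_ghz                                  # process locally
    cloud_time = cloud_rtt_s + data_mb * 8 / uplink_mbps + gigacycles / cloud_ghz
    return cloud_time < edge_time, edge_time, cloud_time

for g in (0.5, 2.0, 10.0):                   # compute-light jobs stay local, compute-heavy jobs offload
    decision, t_edge, t_cloud = offload_to_cloud(data_mb=10, gigacycles=g)
    print(f"{g:>4} Gcycles -> offload={decision}  edge={t_edge:.2f}s  cloud={t_cloud:.2f}s")
```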
{"title":"Optimizing Accounting Informatization through Simultaneous Multi-Tasking across Edge and Cloud Devices using Hybrid Machine Learning Models","authors":"Xiaofeng Yang","doi":"10.1007/s10723-023-09735-1","DOIUrl":"https://doi.org/10.1007/s10723-023-09735-1","url":null,"abstract":"<p>Accounting informatization is a crucial component of enterprise informatization, significantly impacting operational efficiency in accounting and finance. Advances in information technology have introduced automation techniques that accelerate the processing of accounting information cost-effectively. Integrating artificial intelligence, cloud computing, and edge computing is pivotal in streamlining and optimizing these processes. Traditionally, accounting informatization relied on system servers and local storage for data processing. However, the era of big data necessitates a shift to cloud computing frameworks for efficient data storage and processing. Despite the advantages of cloud storage, concerns arise regarding data security and the substantial data transactions between the cloud and source devices. To address these challenges, this research proposes a novel algorithm, Heterogeneous Distributed Deep Learning with Data Offloading (DDLO) algorithm. DDLO leverages the synergy between edge devices and cloud computing to enhance data processes. Edge computing enables rapid processing of large volumes of data at or near the data collection sites, optimizing day-to-day operations for enterprises. Furthermore, machine learning algorithms at edge devices enhance data processing efficiency, augmenting the computing environment's overall performance. The proposed DDLO algorithm fosters a hybrid machine learning approach for computing joint tasks and multi-tasking in accounting informatization. It enables dynamic resource allocation, allowing selected data or model updates to be offloaded to the cloud for complex tasks. The algorithm's performance is rigorously evaluated using key metrics, including computing time, offloading time, accuracy, and cost levels. By capitalizing on the strengths of edge computing, cloud computing, and artificial intelligence, the DDLO algorithm effectively addresses accounting informatization challenges. It empowers enterprises to process vast amounts of accounting data efficiently and securely while improving overall operational efficiency. Regarding time, using terasort in tasks offloading using DDLO consumes less milliseconds 0t 33 ms which is lesser than other techniques.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139501489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CMSV: a New Cloud Multi-Agents for Self-Driving Vehicles as a Services
Pub Date: 2024-01-13 | DOI: 10.1007/s10723-023-09734-2
Aida A. Nasr
{"title":"CMSV: a New Cloud Multi-Agents for Self-Driving Vehicles as a Services","authors":"Aida A. Nasr","doi":"10.1007/s10723-023-09734-2","DOIUrl":"https://doi.org/10.1007/s10723-023-09734-2","url":null,"abstract":"","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Fuzzy Neural Network for Joint Task Offloading in the Internet of Vehicles
Pub Date: 2024-01-09 | DOI: 10.1007/s10723-023-09724-4
Bingtao Liu
Internet of Vehicles (IoV) technology is progressively maturing because of the growth of private cars and the establishment of intelligent transportation systems. The development of smart cars has therefore been accompanied by a parallel rise in the volume of in-car media and video games and a massive increase in the need for processing resources. Smart cars cannot process the enormous quantity of requests created by vehicles because they have limited computing power and must keep many outstanding jobs in their queues. Distributing edge servers near the customer side of the highway can satisfy real-time resource requests, and edge servers can help compensate for the shortage of computational power. Nevertheless, the substantial amount of energy consumed during processing is also an issue that must be addressed. A joint task offloading strategy based on mobile edge computing and fog computing (EFTO) is presented in this paper to address this problem. In practice, the position of the processing activity is first discovered by obtaining the computing task's route, which reveals all of the task's routing details from the starting point to the destination. Next, to minimize the time and energy expended during offloading and processing, a multi-objective optimization problem is solved using the task offloading technique F-TORA, based on a Takagi–Sugeno fuzzy neural network (T-S FNN). Finally, comparative trials showing a decrease in time consumed and an optimization of energy use compared with alternative offloading techniques demonstrate the effectiveness of EFTO.
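The sketch below shows the Takagi–Sugeno inference step that F-TORA builds on: the edge-server load is fuzzified with triangular membership functions, each rule contributes a linear consequent, and the weighted average gives an offloading ratio. The membership functions and consequent coefficients are illustrative assumptions; in the paper they are learned by the T-S fuzzy neural network.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_ratio(edge_load, task_size):
    # Rule antecedents: edge load is LOW / MEDIUM / HIGH (load and size normalised to [0, 1])
    w = np.array([tri(edge_load, -0.5, 0.0, 0.5),
                  tri(edge_load,  0.0, 0.5, 1.0),
                  tri(edge_load,  0.5, 1.0, 1.5)])
    # Rule consequents: linear functions of the input (offload more when load and size are high)
    y = np.array([0.1 + 0.2 * task_size,
                  0.4 + 0.3 * task_size,
                  0.8 + 0.2 * task_size])
    return float(w @ y / w.sum())            # weighted-average defuzzification

print(offload_ratio(edge_load=0.2, task_size=0.5))   # lightly loaded edge -> mostly local
print(offload_ratio(edge_load=0.9, task_size=0.5))   # heavily loaded edge -> mostly offloaded
```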
{"title":"Hybrid Fuzzy Neural Network for Joint Task Offloading in the Internet of Vehicles","authors":"Bingtao Liu","doi":"10.1007/s10723-023-09724-4","DOIUrl":"https://doi.org/10.1007/s10723-023-09724-4","url":null,"abstract":"<p>The Internet of Vehicles (IoV) technology is progressively maturing because of the growth of private cars and the establishment of intelligent transportation systems. The development of smart cars has, therefore, been followed by a parallel rise in the volume of media and video games in the automobile and a massive increase in the need for processing resources. Smart cars cannot process the enormous quantity of requests created by vehicles because they have limited computing power and must maintain many outstanding jobs in their queues. The distribution of edge servers near the customer side of the highway may also accomplish real-time resource requests, and edge servers can assist with the shortage of computational power. Nevertheless, the substantial amount of energy created while processing is also an issue we must address. A joint task offloading strategy based on mobile edge computing and fog computing (EFTO) was presented in this paper to address this problem. Practically, the position of the processing activity is first discovered by obtaining the computing task's route, which reveals all the task's routing details from the starting point to the desired place. Next, to minimize the time and time expended during offloading and processing, a multi-objective optimization problem is implemented using the task offloading technique F-TORA based on the Takagi–Sugeno fuzzy neural network (T-S FNN). Finally, comparative trials showing a decrease in time consumed and an optimization of energy use compared to alternative offloading techniques prove the effectiveness of EFTO.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139414716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parking Cooperation-Based Mobile Edge Computing Using Task Offloading Strategy
Pub Date: 2024-01-08 | DOI: 10.1007/s10723-023-09721-7
XuanWen, Hai Meng Sun
The surge in the computing demands of onboard devices in vehicles has necessitated the adoption of mobile edge computing (MEC) to cater to their computational and storage needs. This paper presents a task offloading strategy for mobile edge computing based on collaborative roadside parking cooperation, leveraging idle computing resources in roadside vehicles. The proposed method establishes resource sharing and mutual utilization among roadside vehicles, roadside units (RSUs), and cloud servers, transforming the computing task offloading problem into a constrained optimization challenge. To address the complexity of this optimization problem, a novel Hybrid Algorithm based on Hill-Climbing and the Genetic Algorithm (HHGA) is proposed and combined with Simulated Annealing (SA). The HHGA-SA Algorithm integrates the advantages of both HHGA and SA to efficiently explore the solution space and optimize task execution with reduced delay and energy consumption. The HHGA component draws on the strengths of the Genetic Algorithm and Hill-Climbing: the Genetic Algorithm enables global exploration, identifying potentially optimal solutions, while Hill-Climbing refines solutions locally to improve their quality. By harnessing the synergy between these techniques, the HHGA-SA Algorithm navigates the multi-constraint landscape effectively, producing robust, high-quality solutions for task offloading. To evaluate the efficacy of the proposed approach, extensive simulations are conducted in a realistic roadside-parking-cooperation-based Mobile Edge Computing scenario. Comparative analyses with standard Genetic Algorithms and Hill-Climbing demonstrate the superiority of the HHGA-SA Algorithm, showing substantial improvements in task execution efficiency and energy utilization. The study highlights the potential of leveraging the idle computing resources of parked roadside vehicles to enhance Mobile Edge Computing capabilities. The collaborative approach facilitated by the HHGA-SA Algorithm fosters efficient task offloading among roadside vehicles, RSUs, and cloud servers, elevating overall system performance.
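A condensed sketch of this hybrid search idea follows: a genetic algorithm explores task-to-server assignments globally, hill-climbing refines each offspring locally, and a simulated-annealing rule occasionally accepts worse offspring to escape local optima. The cost model (a weighted sum of delay and energy) and every constant are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tasks, n_servers = 20, 4
load = rng.uniform(1, 5, n_tasks)
speed = rng.uniform(1, 3, n_servers)
power = rng.uniform(0.5, 1.5, n_servers)

def cost(a):
    busy = np.array([load[a == s].sum() / speed[s] for s in range(n_servers)])
    return 0.6 * busy.max() + 0.4 * (busy * power).sum()          # delay + energy, weighted

def hill_climb(a, iters=20):
    best, best_c = a.copy(), cost(a)
    for _ in range(iters):                                        # single-task reassignment moves
        cand = best.copy()
        cand[rng.integers(n_tasks)] = rng.integers(n_servers)
        if (c := cost(cand)) < best_c:
            best, best_c = cand, c
    return best

pop = [rng.integers(n_servers, size=n_tasks) for _ in range(10)]
temperature = 1.0
for gen in range(30):
    pop.sort(key=cost)
    parents, children = pop[:5], []
    for _ in range(5):
        i, j = rng.choice(5, 2, replace=False)
        cut = rng.integers(1, n_tasks)
        child = np.concatenate([parents[i][:cut], parents[j][cut:]])   # one-point crossover
        child[rng.integers(n_tasks)] = rng.integers(n_servers)         # mutation
        child = hill_climb(child)                                      # local refinement
        worse_by = cost(child) - cost(parents[-1])
        if worse_by < 0 or rng.random() < np.exp(-worse_by / temperature):   # SA acceptance
            children.append(child)
        else:
            children.append(parents[-1].copy())
    pop = parents + children
    temperature *= 0.9                                                  # cooling schedule

print(round(cost(min(pop, key=cost)), 3))
```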
{"title":"Parking Cooperation-Based Mobile Edge Computing Using Task Offloading Strategy","authors":"XuanWen, Hai Meng Sun","doi":"10.1007/s10723-023-09721-7","DOIUrl":"https://doi.org/10.1007/s10723-023-09721-7","url":null,"abstract":"<p>The surge in computing demands of onboard devices in vehicles has necessitated the adoption of mobile edge computing (MEC) to cater to their computational and storage needs. This paper presents a task offloading strategy for mobile edge computing based on collaborative roadside parking cooperation, leveraging idle computing resources in roadside vehicles. The proposed method establishes resource sharing and mutual utilization among roadside vehicles, roadside units (RSUs), and cloud servers, transforming the computing task offloading problem into a constrained optimization challenge. To address the complexity of this optimization problem, a novel Hybrid Algorithm based on the Hill-Climbing and Genetic Algorithm (HHGA) is proposed, combined with the powerful Simulated Annealing (SA) algorithm. The HHGA-SA Algorithm integrates the advantages of both HHGA and SA to efficiently explore the solution space and optimize task execution with reduced delay and energy consumption. The HHGA component of the algorithm utilizes the strengths of Genetic Algorithm and Hill-Climbing. The Genetic Algorithm enables global exploration, identifying potential optimal solutions, while Hill-Climbing refines the solutions locally to improve their quality. By harnessing the synergies between these techniques, the HHGA-SA Algorithm navigates the multi-constraint landscape effectively, producing robust and high-quality solutions for task offloading. To evaluate the efficacy of the proposed approach, extensive simulations are conducted in a realistic roadside parking cooperation-based Mobile Edge Computing scenario. Comparative analyses with standard Genetic Algorithms and Hill-Climbing demonstrate the superiority of the HHGA-SA Algorithm, showcasing substantial enhancements in task execution efficiency and energy utilization. The study highlights the potential of leveraging idle computing resources in roadside parking vehicles to enhance Mobile Edge Computing capabilities. The collaborative approach facilitated by the HHGA-SA Algorithm fosters efficient task offloading among roadside vehicles, RSUs, and cloud servers, elevating overall system performance.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139398597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Cloud-Edge-Based Multi-Objective Task Scheduling Approach for Smart Manufacturing Lines
Pub Date: 2024-01-08 | DOI: 10.1007/s10723-023-09723-5
Huayi Yin, Xindong Huang, Erzhong Cao
The number of task demands created by smart terminals is rising dramatically because of the increasing use of industrial Internet technologies in intelligent production lines. Response speed is vital when dealing with such large volumes of tasks, yet existing approaches cope poorly with the task scheduling flow of smart manufacturing lines. The proposed method addresses these limitations, particularly in the context of task scheduling and task scheduling flow within intelligent production lines. This study concentrates on solving the multi-objective task scheduling challenge in intelligent manufacturing by introducing a task scheduling approach based on job prioritization. To achieve this, a multi-objective task scheduling mechanism was developed to reduce service latency and energy consumption, and it was integrated into a cloud-edge computing framework for intelligent production lines. The task scheduling strategy and task flow scheduling were optimized using Particle Swarm Optimization (PSO) and the Gravitational Search Algorithm (GSA). Finally, thorough simulation studies evaluate Multi-PSG, demonstrating that it beats every other compared algorithm in job completion rate: the completion rate of all tasks exceeds 90% when the number of nodes exceeds 10, which satisfies the real-time demands of the related tasks in smart manufacturing processes. The method also outperforms other methods in power usage and maximum completion rate.
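As a rough illustration of the PSO side of such a scheduler, the sketch below lets each particle encode a continuous position per task that is rounded to a machine index, and scores it with a weighted blend of latency and energy. The constants, the cost model, and the omission of the GSA component are simplifications for illustration, not the Multi-PSG design.

```python
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_machines, n_particles = 25, 5, 15
work = rng.uniform(1, 8, n_tasks)
speed = rng.uniform(1, 3, n_machines)
power = rng.uniform(0.5, 2.0, n_machines)

def fitness(pos):
    assign = np.clip(pos.round().astype(int), 0, n_machines - 1)   # decode position -> assignment
    busy = np.array([work[assign == m].sum() / speed[m] for m in range(n_machines)])
    return 0.5 * busy.max() + 0.5 * (busy * power).sum()           # latency + energy, weighted

pos = rng.uniform(0, n_machines - 1, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_machines - 1)
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(round(pbest_f.min(), 3))
```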
{"title":"A Cloud-Edge-Based Multi-Objective Task Scheduling Approach for Smart Manufacturing Lines","authors":"Huayi Yin, Xindong Huang, Erzhong Cao","doi":"10.1007/s10723-023-09723-5","DOIUrl":"https://doi.org/10.1007/s10723-023-09723-5","url":null,"abstract":"<p>The number of task demands created by smart terminals is rising dramatically because of the increasing usage of industrial Internet technologies in intelligent production lines. Speed of response is vital when dealing with such large activities. The current work needs to work with the task scheduling flow of smart manufacturing lines. The proposed method addresses the limitations of the current approach, particularly in the context of task scheduling and task scheduling flow within intelligent production lines. This study concentrates on solving the multi-objective task scheduling challenge in intelligent manufacturing by introducing a task scheduling approach based on job prioritization. To achieve this, a multi-objective task scheduling mechanism was developed, aiming to reduce service latency and energy consumption. This mechanism was integrated into a cloud-edge computing framework for intelligent production lines. The task scheduling strategy and task flow scheduling were optimized using Particle Swarm Optimization (PSO) and Gravitational Search Algorithm (GSA). Lastly, thorough simulation studies evaluate Multi-PSG, demonstrating that it beats every other algorithm regarding job completion rate. The completion rate of all tasks is greater than 90% when the number of nodes exceeds 10, which satisfies the real-time demands of the related tasks in the smart manufacturing processes. The method also performs better than other methods regarding power usage and maximum completion rate.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139398585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel Transformation Deep Learning Model for Electrocardiogram Classification and Arrhythmia Detection using Edge Computing
Pub Date: 2023-12-30 | DOI: 10.1007/s10723-023-09717-3
Yibo Han, Pu Han, Bo Yuan, Zheng Zhang, Lu Liu, John Panneerselvam
The diagnosis of cardiovascular disease relies heavily on the automated classification of electrocardiograms (ECG) for arrhythmia monitoring, which is often performed using machine learning (ML) algorithms. However, current ML algorithms are typically deployed using cloud-based inference, which may not meet the reliability and security requirements of ECG monitoring. A newer solution, edge inference, has been developed to address speed, security, connectivity, and reliability issues. This paper presents an edge-based algorithm that combines the continuous wavelet transform (CWT) and short-time Fourier transform (STFT) with a hybrid convolutional neural network (CNN) and Long Short-Term Memory (LSTM) model for real-time ECG classification and arrhythmia detection. The algorithm incorporates an STFT/CWT-based 1D convolutional (Conv1D) layer as a Finite Impulse Response (FIR) filter to generate the spectrogram of the input ECG signal. The output feature maps from the Conv1D layer are then reshaped into a 2D heart map image and fed into the hybrid 2D-CNN and LSTM classification model. The MIT-BIH arrhythmia database is used to train and evaluate the model. Using a cloud platform, four model versions are trained, evaluated, and optimized for edge computing on a Raspberry Pi device. Techniques such as weight quantization and pruning enhance the algorithms created for edge inference. The proposed classifiers can operate with a total target size of 90 KB, an overall inference time of 9 ms, and a memory use of 12 MB while achieving up to 99.6% classification accuracy and a 99.88% F1-score at the edge. Given these results, the suggested classifier is highly versatile and can be used for arrhythmia monitoring on various edge devices.
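A rough sketch of the spectrogram-plus-hybrid-classifier pipeline is given below: an STFT turns a 1-D ECG window into a time-frequency map, a small 2D-CNN extracts per-frame features, and an LSTM summarises them over time before a softmax output. The window length, layer sizes, and five-class output are illustrative assumptions (and a plain SciPy STFT stands in for the paper's learned Conv1D filter bank), not the authors' exact architecture.

```python
import numpy as np
from scipy.signal import stft
import tensorflow as tf

fs = 360                                                   # MIT-BIH sampling rate (Hz)
ecg = np.sin(2 * np.pi * 1.2 * np.arange(fs * 3) / fs)    # stand-in for a 3-second ECG window
_, _, z = stft(ecg, fs=fs, nperseg=64)                     # spectrogram: (freq_bins, time_frames)
spec = np.abs(z).T[None, ..., None]                        # -> (1, time_frames, freq_bins, 1)
time_frames, freq_bins = spec.shape[1], spec.shape[2]

inp = tf.keras.Input(shape=(time_frames, freq_bins, 1))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.MaxPooling2D((1, 2))(x)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.MaxPooling2D((1, 2))(x)
x = tf.keras.layers.Reshape((time_frames, -1))(x)          # one feature vector per time frame
x = tf.keras.layers.LSTM(64)(x)
out = tf.keras.layers.Dense(5, activation="softmax")(x)    # e.g. five AAMI arrhythmia classes
model = tf.keras.Model(inp, out)

print(model.predict(spec, verbose=0).shape)                # (1, 5) class probabilities
```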
{"title":"Novel Transformation Deep Learning Model for Electrocardiogram Classification and Arrhythmia Detection using Edge Computing","authors":"Yibo Han, Pu Han, Bo Yuan, Zheng Zhang, Lu Liu, John Panneerselvam","doi":"10.1007/s10723-023-09717-3","DOIUrl":"https://doi.org/10.1007/s10723-023-09717-3","url":null,"abstract":"<p>The diagnosis of the cardiovascular disease relies heavily on the automated classification of electrocardiograms (ECG) for arrhythmia monitoring, which is often performed using machine learning (ML) algorithms. However, current ML algorithms are typically deployed using cloud-based inferences, which may not meet the reliability and security requirements for ECG monitoring. A newer solution, edge inference, has been developed to address speed, security, connection, and reliability issues. This paper presents an edge-based algorithm that combines continuous wavelet transform (CWT), and short-time Fourier transform (STFT), in a hybrid convolutional neural network (CNN) and Long Short-Term Memory (LSTM) model techniques for real-time ECG classification and arrhythmia detection. The algorithm incorporates an STFT CWT-based 1D convolutional (Conv1D) layer as a Finite Impulse Response (FIR) filter to generate the spectrogram of the input ECG signal. The output feature maps from the Conv1D layer are then reshaped into a 2D heart map image and fed into a hybrid convolutional neural network (2D-CNN) and Long Short-Term Memory (LSTM) classification model. The MIT-BIH arrhythmia database is used to train and evaluate the model. Using a cloud platform, four model versions are learned, considered, and optimized for edge computing on a Raspberry Pi device. Techniques such as weight quantization and pruning enhance the algorithms created for edge inference. The proposed classifiers can operate with a total target size of 90 KB, an overall inference time of 9 ms, and higher memory use of 12 MB while achieving up to 99.6% classification accuracy and a 99.88% F1-score at the edge. Thanks to its results, the suggested classifier is highly versatile and can be used for arrhythmia monitoring on various edge devices.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139069908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly Detection in Cloud Computing using Knowledge Graph Embedding and Machine Learning Mechanisms
Pub Date: 2023-12-29 | DOI: 10.1007/s10723-023-09727-1
Katerina Mitropoulou, Panagiotis Kokkinos, Polyzois Soumplis, Emmanouel Varvarigos
The orchestration of cloud computing infrastructures is challenging, considering the number, heterogeneity and dynamicity of the involved resources, along with the highly distributed nature of the applications that use them for computation and storage. Evidently, the volume of relevant monitoring data can be significant, and the ability to collect, analyze, and act on this data in real time is critical for the infrastructure's efficient use. In this study, we introduce a novel methodology that adeptly manages the diverse, dynamic, and voluminous nature of cloud resources and the applications that they support. We use knowledge graphs to represent computing and storage resources and illustrate the relationships between them and the applications that utilize them. We then train GraphSAGE to acquire vector-based representations of the infrastructures' properties, while preserving the structural properties of the graph. These are efficiently provided as input to two unsupervised machine learning algorithms, namely CBLOF and Isolation Forest, for the detection of storage and computing overusage events, where CBLOF demonstrates better performance across all our evaluation metrics. Following the detection of such events, we have also developed appropriate re-optimization mechanisms that ensure the performance of the served applications. Evaluated in a simulated environment, our methods demonstrate a significant advancement in anomaly detection and infrastructure optimization. The results underscore the potential of this closed-loop operation in dynamically adapting to the evolving demands of cloud infrastructures. By integrating data representation and machine learning methods with proactive management strategies, this research contributes substantially to the field of cloud computing, offering a scalable, intelligent solution for modern cloud infrastructures.
{"title":"Anomaly Detection in Cloud Computing using Knowledge Graph Embedding and Machine Learning Mechanisms","authors":"Katerina Mitropoulou, Panagiotis Kokkinos, Polyzois Soumplis, Emmanouel Varvarigos","doi":"10.1007/s10723-023-09727-1","DOIUrl":"https://doi.org/10.1007/s10723-023-09727-1","url":null,"abstract":"<p>The orchestration of cloud computing infrastructures is challenging, considering the number, heterogeneity and dynamicity of the involved resources, along with the highly distributed nature of the applications that use them for computation and storage. Evidently, the volume of relevant monitoring data can be significant, and the ability to collect, analyze, and act on this data in real time is critical for the infrastructure’s efficient use. In this study, we introduce a novel methodology that adeptly manages the diverse, dynamic, and voluminous nature of cloud resources and the applications that they support. We use knowledge graphs to represent computing and storage resources and illustrate the relationships between them and the applications that utilize them. We then train GraphSAGE to acquire vector-based representations of the infrastructures’ properties, while preserving the structural properties of the graph. These are efficiently provided as input to two unsupervised machine learning algorithms, namely CBLOF and Isolation Forest, for the detection of storage and computing overusage events, where CBLOF demonstrates better performance across all our evaluation metrics. Following the detection of such events, we have also developed appropriate re-optimization mechanisms that ensure the performance of the served applications. Evaluated in a simulated environment, our methods demonstrate a significant advancement in anomaly detection and infrastructure optimization. The results underscore the potential of this closed-loop operation in dynamically adapting to the evolving demands of cloud infrastructures. By integrating data representation and machine learning methods with proactive management strategies, this research contributes substantially to the field of cloud computing, offering a scalable, intelligent solution for modern cloud infrastructures.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139069793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}