Pub Date: 2023-12-30 | DOI: 10.1007/s10723-023-09717-3
Yibo Han, Pu Han, Bo Yuan, Zheng Zhang, Lu Liu, John Panneerselvam
The diagnosis of cardiovascular disease relies heavily on the automated classification of electrocardiograms (ECG) for arrhythmia monitoring, which is often performed using machine learning (ML) algorithms. However, current ML algorithms are typically deployed as cloud-based inference services, which may not meet the reliability and security requirements of ECG monitoring. A newer solution, edge inference, has been developed to address speed, security, connectivity, and reliability issues. This paper presents an edge-based algorithm that combines the continuous wavelet transform (CWT) and the short-time Fourier transform (STFT) with a hybrid convolutional neural network (CNN) and Long Short-Term Memory (LSTM) model for real-time ECG classification and arrhythmia detection. The algorithm incorporates an STFT/CWT-based 1D convolutional (Conv1D) layer acting as a Finite Impulse Response (FIR) filter to generate the spectrogram of the input ECG signal. The output feature maps of the Conv1D layer are then reshaped into a 2D heart-map image and fed into a hybrid 2D-CNN and LSTM classification model. The MIT-BIH arrhythmia database is used to train and evaluate the model. Four model versions are trained on a cloud platform and then optimized for edge computing on a Raspberry Pi device, with techniques such as weight quantization and pruning applied for edge inference. The proposed classifiers operate with a total model size of 90 KB, an overall inference time of 9 ms, and a memory footprint of 12 MB, while achieving up to 99.6% classification accuracy and a 99.88% F1-score at the edge. These results make the proposed classifier highly versatile and suitable for arrhythmia monitoring on a variety of edge devices.
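A minimal Keras sketch of the pipeline described in this abstract (a Conv1D layer acting as an FIR-style filter bank whose feature maps are reshaped into a 2D image and classified by a 2D-CNN plus LSTM) might look as follows; the segment length, layer sizes, and five-class output are illustrative assumptions rather than the authors' exact configuration:

```python
# Hedged sketch of the Conv1D (FIR-style) front end feeding a 2D-CNN + LSTM
# classifier; all layer sizes and the 5-class output are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ecg_classifier(segment_len=256, n_filters=64, n_classes=5):
    inp = layers.Input(shape=(segment_len, 1))             # one ECG segment
    # Conv1D acts like a bank of FIR filters; its feature maps play the role
    # of a time-frequency (spectrogram-like) representation.
    x = layers.Conv1D(n_filters, kernel_size=16, padding="same", activation="relu")(inp)
    # Reshape the (time, filters) map into a 2D "heart map" image with one channel.
    x = layers.Reshape((segment_len, n_filters, 1))(x)
    x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # Collapse the frequency axis and feed the remaining sequence to an LSTM.
    x = layers.Reshape((x.shape[1], x.shape[2] * x.shape[3]))(x)
    x = layers.LSTM(64)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ecg_classifier()
model.summary()
```

Post-training weight quantization and pruning for the Raspberry Pi target could then be applied, for example with the TensorFlow Lite converter and the model-optimization toolkit.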
{"title":"Novel Transformation Deep Learning Model for Electrocardiogram Classification and Arrhythmia Detection using Edge Computing","authors":"Yibo Han, Pu Han, Bo Yuan, Zheng Zhang, Lu Liu, John Panneerselvam","doi":"10.1007/s10723-023-09717-3","DOIUrl":"https://doi.org/10.1007/s10723-023-09717-3","url":null,"abstract":"<p>The diagnosis of the cardiovascular disease relies heavily on the automated classification of electrocardiograms (ECG) for arrhythmia monitoring, which is often performed using machine learning (ML) algorithms. However, current ML algorithms are typically deployed using cloud-based inferences, which may not meet the reliability and security requirements for ECG monitoring. A newer solution, edge inference, has been developed to address speed, security, connection, and reliability issues. This paper presents an edge-based algorithm that combines continuous wavelet transform (CWT), and short-time Fourier transform (STFT), in a hybrid convolutional neural network (CNN) and Long Short-Term Memory (LSTM) model techniques for real-time ECG classification and arrhythmia detection. The algorithm incorporates an STFT CWT-based 1D convolutional (Conv1D) layer as a Finite Impulse Response (FIR) filter to generate the spectrogram of the input ECG signal. The output feature maps from the Conv1D layer are then reshaped into a 2D heart map image and fed into a hybrid convolutional neural network (2D-CNN) and Long Short-Term Memory (LSTM) classification model. The MIT-BIH arrhythmia database is used to train and evaluate the model. Using a cloud platform, four model versions are learned, considered, and optimized for edge computing on a Raspberry Pi device. Techniques such as weight quantization and pruning enhance the algorithms created for edge inference. The proposed classifiers can operate with a total target size of 90 KB, an overall inference time of 9 ms, and higher memory use of 12 MB while achieving up to 99.6% classification accuracy and a 99.88% F1-score at the edge. Thanks to its results, the suggested classifier is highly versatile and can be used for arrhythmia monitoring on various edge devices.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"82 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139069908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The orchestration of cloud computing infrastructures is challenging, considering the number, heterogeneity and dynamicity of the involved resources, along with the highly distributed nature of the applications that use them for computation and storage. Evidently, the volume of relevant monitoring data can be significant, and the ability to collect, analyze, and act on this data in real time is critical for the infrastructure’s efficient use. In this study, we introduce a novel methodology that adeptly manages the diverse, dynamic, and voluminous nature of cloud resources and the applications that they support. We use knowledge graphs to represent computing and storage resources and illustrate the relationships between them and the applications that utilize them. We then train GraphSAGE to acquire vector-based representations of the infrastructures’ properties, while preserving the structural properties of the graph. These are efficiently provided as input to two unsupervised machine learning algorithms, namely CBLOF and Isolation Forest, for the detection of storage and computing overusage events, where CBLOF demonstrates better performance across all our evaluation metrics. Following the detection of such events, we have also developed appropriate re-optimization mechanisms that ensure the performance of the served applications. Evaluated in a simulated environment, our methods demonstrate a significant advancement in anomaly detection and infrastructure optimization. The results underscore the potential of this closed-loop operation in dynamically adapting to the evolving demands of cloud infrastructures. By integrating data representation and machine learning methods with proactive management strategies, this research contributes substantially to the field of cloud computing, offering a scalable, intelligent solution for modern cloud infrastructures.
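As a rough illustration of the detection stage only (GraphSAGE training omitted), the learned node embeddings could be scored for overusage anomalies with an off-the-shelf detector; the placeholder embedding matrix below is a stand-in for the real infrastructure-graph embeddings:

```python
# Hedged sketch: score node embeddings (e.g., produced by GraphSAGE) for
# overusage anomalies. The embedding matrix here is random placeholder data;
# the real pipeline would feed the learned infrastructure-graph embeddings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))        # placeholder GraphSAGE vectors

iso = IsolationForest(n_estimators=100, contamination=0.05, random_state=0)
labels = iso.fit_predict(embeddings)            # -1 = flagged as anomalous
anomalous_nodes = np.where(labels == -1)[0]
print(f"{len(anomalous_nodes)} nodes flagged as potential overusage events")

# CBLOF, the better-performing detector in the paper, follows the same
# fit/predict pattern (e.g., via the pyod library's CBLOF implementation).
```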
{"title":"Anomaly Detection in Cloud Computing using Knowledge Graph Embedding and Machine Learning Mechanisms","authors":"Katerina Mitropoulou, Panagiotis Kokkinos, Polyzois Soumplis, Emmanouel Varvarigos","doi":"10.1007/s10723-023-09727-1","DOIUrl":"https://doi.org/10.1007/s10723-023-09727-1","url":null,"abstract":"<p>The orchestration of cloud computing infrastructures is challenging, considering the number, heterogeneity and dynamicity of the involved resources, along with the highly distributed nature of the applications that use them for computation and storage. Evidently, the volume of relevant monitoring data can be significant, and the ability to collect, analyze, and act on this data in real time is critical for the infrastructure’s efficient use. In this study, we introduce a novel methodology that adeptly manages the diverse, dynamic, and voluminous nature of cloud resources and the applications that they support. We use knowledge graphs to represent computing and storage resources and illustrate the relationships between them and the applications that utilize them. We then train GraphSAGE to acquire vector-based representations of the infrastructures’ properties, while preserving the structural properties of the graph. These are efficiently provided as input to two unsupervised machine learning algorithms, namely CBLOF and Isolation Forest, for the detection of storage and computing overusage events, where CBLOF demonstrates better performance across all our evaluation metrics. Following the detection of such events, we have also developed appropriate re-optimization mechanisms that ensure the performance of the served applications. Evaluated in a simulated environment, our methods demonstrate a significant advancement in anomaly detection and infrastructure optimization. The results underscore the potential of this closed-loop operation in dynamically adapting to the evolving demands of cloud infrastructures. By integrating data representation and machine learning methods with proactive management strategies, this research contributes substantially to the field of cloud computing, offering a scalable, intelligent solution for modern cloud infrastructures.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"81 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139069793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-29 | DOI: 10.1007/s10723-023-09720-8
Xi Zhen, Li Zhen
With the rapid development of digital technologies, scholars and industries are moving into the information age, where data processing is the accounting industry's major challenge. This study analyzes the use of digital technologies for attaining strategic performance, with the accounting information system (AIS) as a mediator, and also explores the moderation of the link between digital technology and strategic performance. In this rapidly changing environment, competitiveness is crucial for business organizations, and technology is the key factor in maintaining it, especially where information plays a vital role in management decisions. Accounting software is a significant tool that collects data efficiently and supports timely decisions, allowing the business strategy to respond quickly to the market. However, available accounting software is costly, and small-scale businesses cannot afford it. Therefore, this paper develops a digital accounting system that uses artificial intelligence (AI) and edge computing (EC) to process and store accounting data, and introduces a novel edge framework with advanced data processing methods. With the growth of the IoT, data sizes have increased significantly, so traditional cloud platforms are enriched with EC to process the vast amount of data where it is collected; the business can thereby adapt to data of new sizes and raise its standards of technical content. The system design defines distributed storage in the cloud, and the cluster performance of the system is tested once the design is complete. Finally, the system's operation time, load balancing, and processed data rows are tested experimentally. The results and their analysis demonstrate that data processing with EC for the AIS improves the acceleration rate, operational efficiency, and execution rate.
{"title":"Accounting Information Systems and Strategic Performance: The Interplay of Digital Technology and Edge Computing Devices","authors":"Xi Zhen, Li Zhen","doi":"10.1007/s10723-023-09720-8","DOIUrl":"https://doi.org/10.1007/s10723-023-09720-8","url":null,"abstract":"<p>With the rapid development of digital technologies, scholars and industries are pushing into the information age, where data processing is the accounting industry's major challenge. This study aimed to analyze the use of these digital technologies for strategic performance attainment and mediating the accounting information system (AIS). Further, this study also explores the moderation of the DT and strategic performance linkage. In this rapid change, the business organization is crucial to competition. Hence, technology is the key factor for maintaining the competitiveness of the industrialists, specifically where information plays a vital role in making management decisions. Accounting software is a significant tool that efficiently collects data and makes timely decisions to declare the business strategy to respond quickly to the market. However, the available accounting software is costly, and small-scale businesses cannot afford it. Therefore, this paper developed a digital accounting system using artificial intelligence (AI) and edge computing (EC) to process and store the accounting data. This article introduces novel edge framework for digital data processing with advanced data processing methods. The with the growth of IoT, the data sizes have increased significantly. Moreover, the traditional cloud platforms are enriched with EC to process the vast amount of data where it is collected. Therefore, the business can adapt to new size data and raise its standards in terms of technical content. It will define the distributed storage in the cloud and test the cluster performance of the system once the system design and its effects on the system. In the end, the system operation time, load balancing and rows of data is tested experimentally. The results and its analysis demonstrated that the data processing with EC for AIS utilized is improved acceleration rate, operational efficiency and execution rate.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"7 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139069792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-28 | DOI: 10.1007/s10723-023-09719-1
Ming Zhang
Current task offloading techniques need to perform more effectively. Onboard terminals cannot execute computations efficiently due to the explosive expansion of data flow, the rapid increase in the vehicle population, and the growing scarcity of spectrum resources. This study therefore proposes a task-offloading technique based on reinforcement learning for the Internet of Vehicles edge computing architecture. The system framework for the Internet of Vehicles is developed first: the control centre gathers all vehicle information, while each roadside unit collects vehicle data from its neighborhood and sends it to a mobile edge computing server for processing. Then, to guarantee that job dispatching in the Internet of Vehicles is sound, the computation model, communication approach, interference approach, and confidentiality concerns are established. This research examines how to analyze and design a computation offloading approach for a multiuser smart Internet of Vehicles (IoV) based on mobile edge computing (MEC). Because it is challenging to determine an analytical offloading proportion for a generic MEC-based IoV network, we present an analytical offloading strategy for specific MEC configurations, covering one-to-one, one-to-two, and two-to-one situations. The suggested analytical offloading strategy can match the brute force (BF) approach and the best performance of the Deep Deterministic Policy Gradient (DDPG). For the analytical offloading design of a general MEC-based IoV, the analytical results in this study can be a valuable source of information.
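For the one-to-one case, a simple analytical split can be obtained by choosing the offloaded fraction so that local and edge execution finish at the same time; the latency model and parameter values in the sketch below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of the one-to-one case: split a task between local and edge
# execution so both finish simultaneously, which minimizes the parallel
# completion time. The latency model and numbers are illustrative assumptions.
def optimal_offload_fraction(data_bits, cycles_per_bit, f_local, f_edge, rate_bps):
    t_local_per_bit = cycles_per_bit / f_local                  # seconds per bit, local CPU
    t_edge_per_bit = 1.0 / rate_bps + cycles_per_bit / f_edge   # upload + edge compute
    # Choose alpha (fraction sent to the edge) so that
    # (1 - alpha) * t_local_per_bit == alpha * t_edge_per_bit.
    alpha = t_local_per_bit / (t_local_per_bit + t_edge_per_bit)
    completion = max((1 - alpha) * data_bits * t_local_per_bit,
                     alpha * data_bits * t_edge_per_bit)
    return alpha, completion

alpha, t = optimal_offload_fraction(
    data_bits=8e6, cycles_per_bit=1000, f_local=1e9, f_edge=10e9, rate_bps=20e6)
print(f"offload {alpha:.2%} of the task, completion time {t*1e3:.1f} ms")
```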
{"title":"Development of Analytical Offloading for Innovative Internet of Vehicles Based on Mobile Edge Computing","authors":"Ming Zhang","doi":"10.1007/s10723-023-09719-1","DOIUrl":"https://doi.org/10.1007/s10723-023-09719-1","url":null,"abstract":"<p>The current task offloading technique needs to be performed more effectively. Onboard terminals cannot execute efficient computation due to the explosive expansion of data flow, the quick increase in vehicle population, and the growing scarcity of spectrum resources. As a result, this study suggests a task-offloading technique based on reinforcement learning computing for the Internet of Vehicles edge computing architecture. The system framework for the Internet of Vehicles has been initially developed. Although the control centre gathers all vehicle information, the roadside unit collects vehicle data from the neighborhood and sends it to a mobile edge computing server for processing. Then, to guarantee that job dispatching in the Internet of Vehicles is logical, the computation model, communications approach, interfering approach, and concerns about confidentiality are established. This research examines the best way to analyze and design a computation offloading approach for a multiuser smart Internet of Vehicles (IoV) based on mobile edge computing (MEC). We present an analytical offloading strategy for various MEC networks, covering one-to-one, one-to-two, and two-to-one situations, as it is challenging to determine an analytical offloading proportion for a generic MEC-based IoV network. The suggested analytic offload strategy may match the brute force (BF) approach with the best performance of the Deep Deterministic Policy Gradient (DDPG). For the analytical offloading design for a general MEC-based IoV, the analytical results in this study can be a valuable source of information.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"67 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139051107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-23 | DOI: 10.1007/s10723-023-09706-6
Sheng Chai, Lantian Guo
Chronic illnesses such as heart disease, diabetes, cancer, and respiratory diseases are complex and pose a significant threat to global health. Processing heart data is particularly challenging due to the variability of symptoms. However, advancements in smart wearable devices, computing technologies, and IoT solutions have made heart data processing easier. The proposed model integrates Edge-Fog-Cloud computing to provide rapid and accurate results, making it a promising solution for heart data processing. Patient data is collected using hardware components, and cardiac feature extraction is used to obtain crucial features from the data signals. The Optimized Cascaded Convolutional Neural Network (CCNN) processes these features, and the CCNN's hyperparameters are optimized using both PSO (Particle Swarm Optimization) and GSO (Galactic Swarm Optimization) techniques. The proposed system leverages the strengths of both optimization algorithms to improve the accuracy and efficiency of the heart data processing system: the GSO-CCNN optimizes the CCNN's hyperparameters, while the PSO-CCNN optimizes the feature selection process. Combining both algorithms enhances the system's ability to identify relevant features and optimize the CCNN's architecture. Performance analysis demonstrates that the proposed technique, which integrates Edge-Fog-Cloud computing with the combined PSO-CCNN and GSO-CCNN techniques, outperforms models such as PSO-CCNN, GSO-CCNN, WOA-CCNN, and DHOA-CCNN that rely on traditional cloud and edge technologies. The proposed model is evaluated in terms of time, energy consumption, bandwidth, and the standard performance metrics of accuracy, precision, recall, specificity, and F1-score. The comparative analysis therefore confirms the proposed system's efficiency over conventional models for heart data processing.
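A minimal sketch of the swarm-optimization component is shown below: a PSO loop searches over two hypothetical CCNN hyperparameters with a stand-in fitness function. In the actual system the fitness would be the cascaded CNN's validation performance, and GSO would plug into the same loop; the bounds and swarm settings are assumptions for illustration.

```python
# Hedged sketch of PSO over two CCNN hyperparameters. The fitness function is
# a stand-in; in practice it would train/evaluate the cascaded CNN and return
# its validation error. Bounds and swarm settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
bounds = np.array([[1e-4, 1e-1],      # learning rate
                   [8, 128]])         # number of filters

def fitness(params):                  # placeholder objective: lower is better
    lr, n_filters = params
    return (np.log10(lr) + 2.0) ** 2 + ((n_filters - 64) / 64.0) ** 2

n_particles, n_iters = 20, 50
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best learning rate {gbest[0]:.4f}, best filter count {gbest[1]:.0f}")
```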
{"title":"Edge Computing with Fog-cloud for Heart Data Processing using Particle Swarm Optimized Deep Learning Technique","authors":"Sheng Chai, Lantian Guo","doi":"10.1007/s10723-023-09706-6","DOIUrl":"https://doi.org/10.1007/s10723-023-09706-6","url":null,"abstract":"<p>Chronic illnesses such as heart disease, diabetes, cancer, and respiratory diseases are complex and pose a significant threat to global health. Processing heart data is particularly challenging due to the variability of symptoms. However, advancements in smart wearable devices, computing technologies, and IoT solutions have made heart data processing easier. This proposed model integrates Edge-Fog-Cloud computing to provide rapid and accurate results, making it a promising solution for heart data processing. Patient data is collected using hardware components, and cardiac feature extraction is used to obtain crucial features from data signals. The Optimized Cascaded Convolution Neural Network (CCNN) processes these features, and the CCNN's hyperparameters are optimized using both PSO (Particle Swarm Optimization) and GSO(Galactic Swarm Optimization) techniques. The proposed system leverages the strengths of both optimization algorithms to improve the accuracy and efficiency of the heart data processing system. The GSO-CCNN optimizes the CCNN's hyperparameters, while the PSO-CCNN optimizes the feature selection process. Combining both algorithms enhances the system's ability to identify relevant features and optimize the CCNN's architecture. Performance analysis demonstrates that the proposed technique, which integrates Edge-Fog-Cloud computing with combined PSO-CCNN and GSO-CCNN techniques, outperforms traditional models such as PSO-CCNN, GSO-CCNN, WOA-CCNN, and DHOA-CCNN, which utilize traditional cloud and edge technologies. The proposed model is evaluated in terms of time, energy consumption, bandwidth, and the standard performance metrics of accuracy, precision, recall, specificity, and F1-score. Therefore, the proposed system's comparative analysis ensures its efficiency over conventional models for heart data processing.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"35 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139030887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-19 | DOI: 10.1007/s10723-023-09722-6
Yu Song, Xin He, Xiwang Tang, Bo Yin, Jie Du, Jiali Liu, Zhongbao Zhao, Shigang Geng
Today, technologies that are accessible worldwide are being upgraded with innovative methods. In this research, an intelligent garbage system is designed with state-of-the-art deep learning technologies. Large amounts of garbage are produced due to urbanization and the rising population in urban areas, so it is essential to manage daily trash from homes and living environments. This research aims to provide an intelligent IoT-based garbage bin system in which classification is performed using deep learning techniques. The smart bin is capable of sensing a wide variety of household garbage. Though many technologies have been successfully implemented with IoT and machine learning, there is still a need for sustainable natural technologies to manage daily waste. The innovative IoT-based garbage system uses various sensors, such as humidity, temperature, gas, and liquid sensors, to identify the garbage condition. Initially, the smart garbage bin system is designed, and data are collected using a garbage annotation application. Next, deep learning is used for object detection and classification of garbage images: the Arithmetic Optimization Algorithm (AOA) with Improved RefineDet (IRD) is used for object detection, and the EfficientNet-B0 model is used for classification. The garbage content is identified, and the data are prepared to train the deep learning model to perform efficient classification tasks. For evaluation, smart bins are deployed in real time and accuracy is estimated. Furthermore, fine-tuning on region-specific litter photos leads to enhanced categorization.
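A hedged sketch of the classification stage only (the AOA + Improved RefineDet detector is omitted) could use the EfficientNet-B0 backbone from tf.keras.applications with a new head; the six garbage categories and the data directory are assumptions for illustration:

```python
# Hedged sketch of the classification stage: EfficientNet-B0 with a new head
# for garbage categories. The six classes and the data directory are
# illustrative assumptions; the AOA + Improved RefineDet detector is omitted.
import tensorflow as tf

NUM_CLASSES = 6        # e.g. plastic, paper, glass, metal, organic, other (assumed)
IMG_SIZE = (224, 224)

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False                     # start by training only the new head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = base(inputs, training=False)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "garbage_images/", image_size=IMG_SIZE, batch_size=32)   # hypothetical path
# model.fit(train_ds, epochs=10)
```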
{"title":"DEEPBIN: Deep Learning Based Garbage Classification for Households Using Sustainable Natural Technologies","authors":"Yu Song, Xin He, Xiwang Tang, Bo Yin, Jie Du, Jiali Liu, Zhongbao Zhao, Shigang Geng","doi":"10.1007/s10723-023-09722-6","DOIUrl":"https://doi.org/10.1007/s10723-023-09722-6","url":null,"abstract":"<p>Today, things that are accessible worldwide are upgrading to innovative technology. In this research, an intelligent garbage system will be designed with State-of-the-art methods using deep learning technologies. Garbage is highly produced due to urbanization and the rising population in urban areas. It is essential to manage daily trash from homes and living environments. This research aims to provide an intelligent IoT-based garbage bin system, and classification is done using Deep learning techniques. This smart bin is capable of sensing more varieties of garbage from home. Though there are more technologies successfully implemented with IoT and machine learning, there is still a need for sustainable natural technologies to manage daily waste. The innovative IoT-based garbage system uses various sensors like humidity, temperature, gas, and liquid sensors to identify the garbage condition. Initially, the Smart Garbage Bin system is designed, and then the data are collected using a garbage annotation application. Next, the deep learning method is used for object detection and classification of garbage images. Arithmetic Optimization Algorithm (AOA) with Improved RefineDet (IRD) is used for object detection. Next, the EfficientNet-B0 model is used for the classification of garbage images. The garbage content is identified, and the content is prepared to train the deep learning model to perform efficient classification tasks. For result evaluation, smart bins are deployed in real-time, and accuracy is estimated. Furthermore, fine-tuning region-specific litter photos led to enhanced categorization.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"44 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138745456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-19 | DOI: 10.1007/s10723-023-09709-3
Yuliang Gai, Yuxin Liu, Minghao Li, Shengcheng Yang
The architectural design of smart cities should prioritize the provision of critical medical services. This involves establishing improved connectivity and leveraging supercomputing capabilities to enhance the quality of service (QoS) offered to residents. Edge computing is vital in healthcare applications because it enables the low network latencies necessary for real-time data processing. By implementing edge computing, smart cities can benefit from reduced latency, increased bandwidth, and improved power-consumption efficiency. In the context of Mobile Edge Computing (MEC), the study proposes a novel approach called the Markovian Decision Process with Federated Deep Recurrent Neural Network (MDP-FDRNN) as the primary algorithm for managing resource allocation. MEC focuses on utilizing edge computing capabilities to process data and perform computations at the network's edges. The conducted tests demonstrate that the MDP-FDRNN algorithm is superior and well suited to resolving high-processing traffic at the network's edges. It significantly reduces processing time, which is particularly crucial for healthcare operations related to patients' health problems. By employing the MDP-FDRNN algorithm for resource allocation management, smart cities can efficiently utilize their edge computing infrastructure to handle complex processing tasks. The algorithm's superior performance in reducing processing time shows its potential to support critical healthcare operations within smart cities, thereby enhancing the overall quality of healthcare services provided to residents. This article underscores the significance of implementing appropriate technology, including edge computing and the IoMT, in developing prosperous smart cities, and highlights the effectiveness of the MDP-FDRNN algorithm in managing resource allocation and addressing processing challenges at the network's edges, particularly in healthcare operations.
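As a rough illustration of the federated ingredient of MDP-FDRNN, the sketch below averages the weights of small recurrent models trained on separate edge nodes (FedAvg-style); the GRU model, client count, and synthetic data are assumptions, not the authors' architecture:

```python
# Hedged sketch of the federated part: several edge nodes train a small
# recurrent model locally and the server averages their weights (FedAvg).
# The GRU model, client count, and synthetic data are illustrative assumptions.
import numpy as np
import tensorflow as tf

def make_model():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(10, 4)),                    # 10 time steps of 4 metrics
        tf.keras.layers.GRU(16),
        tf.keras.layers.Dense(3, activation="softmax"),   # resource-allocation decision
    ])

def local_update(weights, x, y, epochs=1):
    model = make_model()
    model.set_weights(weights)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(x, y, epochs=epochs, verbose=0)
    return model.get_weights()

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(64, 10, 4)), rng.integers(0, 3, 64)) for _ in range(4)]

global_model = make_model()
global_weights = global_model.get_weights()
for _ in range(3):                                        # federated rounds
    client_weights = [local_update(global_weights, x, y) for x, y in clients]
    # Element-wise average of each layer's weights across clients.
    global_weights = [np.mean(w, axis=0) for w in zip(*client_weights)]
global_model.set_weights(global_weights)
```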
{"title":"Markovian with Federated Deep Recurrent Neural Network for Edge—IoMT to Improve Healthcare in Smart Cities","authors":"Yuliang Gai, Yuxin Liu, Minghao Li, Shengcheng Yang","doi":"10.1007/s10723-023-09709-3","DOIUrl":"https://doi.org/10.1007/s10723-023-09709-3","url":null,"abstract":"<p>The architectural design of smart cities should prioritize the provision of critical medical services. This involves establishing improved connectivity and leveraging supercomputing capabilities to enhance the quality of services (QoS) offered to residents. Edge computing is vital in healthcare applications by enabling low network latencies necessary for real-time data processing. By implementing edge computing, smart cities can benefit from reduced latency, increased bandwidth, and improved power consumption efficiency. In the context of Mobile Edge Computing (MEC), the study proposes a novel approach called the Markovian Decision Process with Federated Deep Recurrent Neural Network (MDP-FDRNN) as the primary algorithm for managing resource allocation. MEC focuses on utilizing edge computing capabilities to process data and perform computations at the network's edges. The conducted tests demonstrate that the MDP-FDRNN algorithm is superior and well-suited for effectively resolving high-processing traffic at the network's edges. It significantly reduces processing time, particularly crucial for healthcare operations related to patients' health problems. By employing the MDP-FDRNN algorithm in resource allocation management, smart cities can efficiently utilize their edge computing infrastructure to handle complex processing tasks. The superior performance of this algorithm in reducing processing time showcases its potential to support critical healthcare operations within smart cities, thereby enhancing the overall quality of healthcare services provided to residents. This article underscores the significance of implementing appropriate technology, including edge computing and the IoM, in developing prosperous smart cities. It also highlights the effectiveness of the MDP-FDRNN algorithm in managing resource allocation and addressing processing challenges at the network's edges, particularly in healthcare operations.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"1 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138745508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-15 | DOI: 10.1007/s10723-023-09715-5
Hulin Jin, Zhiran Jin, Yong-Guk Kim, Chunyang Fan
The human-machine interface (HMI) collects electrophysiological signals from the patient and uses them to operate a device. However, most applications are currently in the testing phase and are typically unavailable to everyone. Developing wearable HMI devices that are intelligent and more comfortable has been a recent focus of study. This work developed a portable, eight-channel electromyography (EMG) signal-based device that can distinguish 21 different types of motion. To acquire the EMG signals, an analog front-end (AFE) integrated circuit (IC) was created, and an integrated EMG signal acquisition device combined with a stretchy wristband was built. Using the EMG movement signals of 10 volunteers, a SIAT database of 21 gestures was created. Using the SIAT dataset, a lightweight 2D CNN-LSTM model was developed and trained; the signal recognition accuracy is 96.4%, and training took a median of 14 min 13 s. Because of its compact size, the model can be used on lower-performance edge computing devices, and it is anticipated that it will eventually be applied to smartphone terminals.
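A compact 2D CNN-LSTM over 8-channel EMG windows, in the spirit of the model described here, might be structured as follows; the window length and layer sizes are illustrative assumptions rather than the SIAT model's exact architecture:

```python
# Hedged sketch of a compact 2D CNN + LSTM for 8-channel EMG windows
# (21 gesture classes, as in the paper); window length and layer sizes are
# illustrative assumptions rather than the authors' exact architecture.
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, CHANNELS, N_GESTURES = 200, 8, 21

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, CHANNELS, 1)),       # samples x channels "image"
    layers.Conv2D(16, (5, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((4, 1)),                        # downsample along time only
    layers.Conv2D(32, (5, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((4, 1)),
    layers.Reshape((WINDOW // 16, CHANNELS * 32)),      # back to a time sequence
    layers.LSTM(32),
    layers.Dense(N_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

For deployment on a low-power edge device or smartphone, the trained model could then be converted with the TensorFlow Lite converter to reduce its size.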
{"title":"Integration of a Lightweight Customized 2D CNN Model to an Edge Computing System for Real-Time Multiple Gesture Recognition","authors":"Hulin Jin, Zhiran Jin, Yong-Guk Kim, Chunyang Fan","doi":"10.1007/s10723-023-09715-5","DOIUrl":"https://doi.org/10.1007/s10723-023-09715-5","url":null,"abstract":"<h3>Abstract</h3> <p>The human-machine interface (HMI) collects electrophysiology signals incoming from the patient and utilizes them to operate the device. However, most applications are currently in the testing phase and are typically unavailable to everyone. Developing wearable HMI devices that are intelligent and more comfortable has been a focus of study in recent times. This work developed a portable, eight-channel electromyography (EMG) signal-based device that can distinguish 21 different types of motion. To identify the EMG signals, an analog front-end (AFE) integrated chip (IC) was created, and an integrated EMG signal acquisition device combining a stretchy wristband was made. Using the EMG movement signals of 10 volunteers, a SIAT database of 21 gestures was created. Using the SIAT dataset, a lightweight 2D CNN-LSTM model was developed and specialized training was given. The signal recognition accuracy is 96.4%, and the training process took a median of 14 min 13 s. The model may be used on lower-performance edge computing devices because of its compact size, and it is anticipated that it will eventually be applied to smartphone terminals.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"181 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138682052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-15 | DOI: 10.1007/s10723-023-09712-8
Yang-Yang Liu, Ying Zhang, Yue Wu, Man Feng
Edge computing, blockchain technology, and the Internet of Things (IoT) have all been identified as key enablers of innovative city initiatives. A comprehensive examination of the research found that IoT, blockchain, and edge computing are now major factors in how efficiently smart cities provide healthcare, with IoT the most used of the three technologies. In this context, edge computing and blockchain technology are particularly applicable to the healthcare industry for intelligent and secure data assessment. Edge computing has been touted as an important technology for low-cost remote access, cutting latency, and boosting efficiency. Smart cities incorporate intelligent devices to enhance people's day-to-day lives, and the Internet of Medical Things (IoMT) and edge computing (EC) are the basis of these devices. The increasing quality-of-service (QoS) demands of healthcare services require supercomputing that connects the IoMT and intelligent devices with edge processing. The healthcare applications of smart cities need reduced latencies; therefore, EC is necessary to reduce latency, energy, and bandwidth consumption while improving scalability. This paper develops a deep Q reinforcement learning algorithm with evolutionary optimization and compares it with traditional deep learning approaches for process congestion, reducing the time and latency related to patient health monitoring. The energy consumption, latency, and cost of the proposed model are lower than those of existing techniques. Among 100 tasks, nearly 95% are offloaded efficiently in the minimum time.
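The offloading decision can be illustrated with a tabular Q-learning toy (the paper itself uses a deep Q network with evolutionary optimization); states are coarse edge-load levels, actions are local execution versus offloading, and the latency-plus-energy cost model below is an assumption for illustration:

```python
# Hedged sketch of the offloading decision as tabular Q-learning: states are
# coarse edge-load levels, actions are {0: process locally, 1: offload}, and
# the reward penalizes a synthetic latency + energy cost. All numbers are
# illustrative assumptions; the paper uses a deep Q network with evolutionary
# optimization rather than a lookup table.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2            # edge-load levels x {local, offload}
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration

def step(state, action):
    # Synthetic cost model: offloading is cheap unless the edge is heavily loaded.
    latency = 2.0 if action == 0 else 0.5 + 1.0 * state
    energy = 1.5 if action == 0 else 0.3
    reward = -(latency + energy)
    next_state = rng.integers(N_STATES)   # edge load fluctuates randomly
    return next_state, reward

state = rng.integers(N_STATES)
for _ in range(20000):
    action = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("preferred action per load level (0=local, 1=offload):", Q.argmax(axis=1))
```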
{"title":"Healthcare and Fitness Services: A Comprehensive Assessment of Blockchain, IoT, and Edge Computing in Smart Cities","authors":"Yang-Yang Liu, Ying Zhang, Yue Wu, Man Feng","doi":"10.1007/s10723-023-09712-8","DOIUrl":"https://doi.org/10.1007/s10723-023-09712-8","url":null,"abstract":"<p>Edge computing, blockchain technology, and the Internet of Things have all been identified as key enablers of innovative city initiatives. A comprehensive examination of the research found that IoT, blockchain, and edge computing are now major factors in how efficiently smart cities provide healthcare. IoT has been determined to be the most used of the three technologies. In this observation, edge computing and blockchain technology are more applicable to the healthcare industry for assessing intelligent and secured data. Edge computing has been touted as an important technology for low-cost remote access, cutting latency, and boosting efficiency. Smart cities are incorporated with intelligent devices to enhance the person's day-to-day life. Intelligent of Medical Things (IoMT) and Edge computing (EC) are these things’ bases. The increasing Quality of Services (QoS) of healthcare services requires supercomputing that connects IoMT with intelligent devices with edge processing. The healthcare applications of smart cities need reduced latencies. Therefore, EC is necessary to reduce latency, energy, bandwidth, and scalability. This paper developed a deep Q reinforcement learning algorithm with evolutionary optimization and compared it with the traditional deep learning approaches for process congestion to reduce the time and latency related to patient health monitoring. The energy consumption, latency computation, and cost computation of the proposed model is less when compared to existing techniques. Among 100 tasks, nearly 95% of the tasks are offloaded efficiently in the minimum time.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"83 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138682200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-07 | DOI: 10.1007/s10723-023-09718-2
Andre Bento, Filipe Araujo, Raul Barbosa
Cloud services have become increasingly popular for developing large-scale applications due to the abundance of resources they offer. The scalability and accessibility of these resources have made it easier for organizations of all sizes to develop and implement sophisticated, demanding applications that meet demand instantly. As monetary fees are involved in the use of the cloud, one of the challenges for application developers and operators is to balance their budget constraints with crucial quality attributes, such as availability. Industry standards usually default to simplified solutions that cannot consider competing objectives simultaneously. Our research addresses this challenge by proposing a Cost-Availability Aware Scaling (CAAS) approach that uses multi-objective optimization of availability and cost. We evaluate CAAS using two open-source microservices applications, yielding improved results compared to the industry-standard CPU-based Autoscaler (AS). CAAS finds optimal system configurations with higher availability (between 1 and 2 nines on average) and reduced costs (6% on average) for the first application, and with 1 nine of availability on average and costs reduced by up to 18% on average for the second application. The gap between the results of our model and the default AS suggests that operators can significantly improve the operation of their applications.
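A toy version of the cost/availability trade-off behind CAAS-style scaling is sketched below: it sweeps replica counts, computes service availability and monthly cost, and reports the cheapest configuration meeting a target; the per-replica availability and cost figures are assumptions, not the paper's model:

```python
# Hedged toy version of the cost/availability trade-off: sweep replica counts,
# compute service availability and monthly cost, and pick the cheapest
# configuration that meets a target. The per-replica availability (0.99) and
# cost are illustrative assumptions, not the CAAS model.
import math

REPLICA_AVAILABILITY = 0.99      # assumed availability of a single replica
COST_PER_REPLICA = 15.0          # assumed $/month per replica
TARGET_NINES = 3                 # e.g. at least 99.9% required availability

def availability(replicas):
    # Service is up if at least one replica is up (independent failures assumed).
    return 1.0 - (1.0 - REPLICA_AVAILABILITY) ** replicas

for n in range(1, 11):
    a, cost = availability(n), n * COST_PER_REPLICA
    nines = -math.log10(1.0 - a)
    print(f"{n} replicas: {a:.6f} availability (~{nines:.1f} nines), ${cost:.0f}/month")
    if nines >= TARGET_NINES:
        print(f"-> {n} replicas is the cheapest configuration meeting {TARGET_NINES} nines")
        break
```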
{"title":"Cost-Availability Aware Scaling: Towards Optimal Scaling of Cloud Services","authors":"Andre Bento, Filipe Araujo, Raul Barbosa","doi":"10.1007/s10723-023-09718-2","DOIUrl":"https://doi.org/10.1007/s10723-023-09718-2","url":null,"abstract":"<p>Cloud services have become increasingly popular for developing large-scale applications due to the abundance of resources they offer. The scalability and accessibility of these resources have made it easier for organizations of all sizes to develop and implement sophisticated and demanding applications to meet demand instantly. As monetary fees are involved in the use of the cloud, one of the challenges for application developers and operators is to balance their budget constraints with crucial quality attributes, such as availability. Industry standards usually default to simplified solutions that cannot simultaneously consider competing objectives. Our research addresses this challenge by proposing a Cost-Availability Aware Scaling (CAAS) approach that uses multi-objective optimization of availability and cost. We evaluate CAAS using two open-source microservices applications, yielding improved results compared to the industry standard CPU-based Autoscaler (AS). CAAS can find optimal system configurations with higher availability, between 1 and 2 nines on average, and reduced costs, 6% on average, with the first application, and 1 nine of availability on average, and reduced costs up to 18% on average, with the second application. The gap in the results between our model and the default AS suggests that operators can significantly improve the operation of their applications.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"157 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138545561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}