
Journal of Grid Computing: Latest Publications

Novel Transformation Deep Learning Model for Electrocardiogram Classification and Arrhythmia Detection using Edge Computing
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-30 | DOI: 10.1007/s10723-023-09717-3
Yibo Han, Pu Han, Bo Yuan, Zheng Zhang, Lu Liu, John Panneerselvam

The diagnosis of cardiovascular disease relies heavily on the automated classification of electrocardiograms (ECG) for arrhythmia monitoring, which is often performed using machine learning (ML) algorithms. However, current ML algorithms are typically deployed using cloud-based inference, which may not meet the reliability and security requirements of ECG monitoring. A newer solution, edge inference, has been developed to address speed, security, connectivity, and reliability issues. This paper presents an edge-based algorithm that combines the continuous wavelet transform (CWT) and the short-time Fourier transform (STFT) with a hybrid convolutional neural network (CNN) and long short-term memory (LSTM) model for real-time ECG classification and arrhythmia detection. The algorithm incorporates an STFT/CWT-based 1D convolutional (Conv1D) layer, acting as a finite impulse response (FIR) filter, to generate the spectrogram of the input ECG signal. The output feature maps from the Conv1D layer are then reshaped into a 2D heart-map image and fed into a hybrid 2D-CNN and LSTM classification model. The MIT-BIH arrhythmia database is used to train and evaluate the model. Four model versions are trained on a cloud platform and then optimized for edge computing on a Raspberry Pi device. Techniques such as weight quantization and pruning further compress the models for edge inference. The proposed classifiers operate with a total model size of 90 KB, an overall inference time of 9 ms, and a memory footprint of 12 MB, while achieving up to 99.6% classification accuracy and a 99.88% F1-score at the edge. These results make the proposed classifier highly versatile and suitable for arrhythmia monitoring on a wide range of edge devices.
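The abstract does not give implementation details of the transform front end; as a minimal illustration of the STFT stage alone, the sketch below computes a magnitude spectrogram with a sliding Hann-windowed DFT. The window length, hop size, and synthetic test signal are illustrative assumptions, not the paper's settings:

```python
import cmath
import math

def stft_spectrogram(signal, win_len=64, hop=32):
    """Magnitude spectrogram via a sliding Hann-windowed DFT.

    Returns a list of frames (time axis), each a list of one-sided
    DFT magnitudes (frequency axis).
    """
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win_len - 1))
            for n in range(win_len)]
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = [signal[start + n] * hann[n] for n in range(win_len)]
        row = []
        for k in range(win_len // 2 + 1):  # one-sided spectrum
            acc = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                      for n in range(win_len))
            row.append(abs(acc))
        frames.append(row)
    return frames

# Synthetic stand-in for an ECG segment: an 8 Hz tone sampled at 128 Hz.
fs = 128
sig = [math.sin(2 * math.pi * 8 * t / fs) for t in range(fs * 2)]
spec = stft_spectrogram(sig)
```

With a 64-sample window at 128 Hz, each frequency bin spans 2 Hz, so the 8 Hz tone peaks in bin 4 of every frame; a real pipeline would reshape such frames into the 2D image fed to the CNN-LSTM.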

Citations: 0
Anomaly Detection in Cloud Computing using Knowledge Graph Embedding and Machine Learning Mechanisms
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-29 | DOI: 10.1007/s10723-023-09727-1
Katerina Mitropoulou, Panagiotis Kokkinos, Polyzois Soumplis, Emmanouel Varvarigos

The orchestration of cloud computing infrastructures is challenging, given the number, heterogeneity, and dynamism of the resources involved, along with the highly distributed nature of the applications that use them for computation and storage. Evidently, the volume of relevant monitoring data can be significant, and the ability to collect, analyze, and act on this data in real time is critical for the infrastructure's efficient use. In this study, we introduce a novel methodology that adeptly manages the diverse, dynamic, and voluminous nature of cloud resources and the applications that they support. We use knowledge graphs to represent computing and storage resources and to capture the relationships between them and the applications that utilize them. We then train GraphSAGE to acquire vector-based representations of the infrastructure's properties while preserving the structural properties of the graph. These representations are provided as input to two unsupervised machine learning algorithms, CBLOF and Isolation Forest, for the detection of storage and computing overuse events; CBLOF demonstrates better performance across all our evaluation metrics. Following the detection of such events, we have also developed appropriate re-optimization mechanisms that preserve the performance of the served applications. Evaluated in a simulated environment, our methods demonstrate a significant advancement in anomaly detection and infrastructure optimization. The results underscore the potential of this closed-loop operation to adapt dynamically to the evolving demands of cloud infrastructures. By integrating data representation and machine learning methods with proactive management strategies, this research contributes substantially to the field of cloud computing, offering a scalable, intelligent solution for modern cloud infrastructures.
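CBLOF's exact configuration is not given in the abstract; the following simplified sketch pairs a plain k-means with a CBLOF-style score, in which points in small clusters are scored by their distance to the nearest large-cluster centroid. The synthetic 2-D "resource usage" points are an illustrative assumption:

```python
import math
import random

def kmeans(points, k=2, iters=25, seed=0):
    """Plain k-means on 2-D points: alternate assignment and centroid update."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

def cblof_scores(points, centroids, labels, small_frac=0.1):
    """Simplified CBLOF: points in large clusters score their distance to
    their own centroid; points in small clusters score their distance to
    the nearest large-cluster centroid (higher = more anomalous)."""
    n = len(points)
    sizes = [labels.count(c) for c in range(len(centroids))]
    large = [c for c in range(len(centroids)) if sizes[c] >= small_frac * n]
    scores = []
    for i, p in enumerate(points):
        c = labels[i]
        target = c if c in large else min(
            large, key=lambda g: math.dist(p, centroids[g]))
        scores.append(math.dist(p, centroids[target]))
    return scores

# Two dense usage clusters plus one extreme "overuse" point at index 40.
pts = [(0.1 * (i % 5), 0.1 * (i // 5)) for i in range(20)]
pts += [(5 + 0.1 * (i % 5), 5 + 0.1 * (i // 5)) for i in range(20)]
pts.append((20.0, 20.0))
centroids, labels = kmeans(pts, k=2)
scores = cblof_scores(pts, centroids, labels)
```

In the paper's setting, the input points would instead be GraphSAGE embeddings of the infrastructure graph rather than raw coordinates.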

Citations: 0
Accounting Information Systems and Strategic Performance: The Interplay of Digital Technology and Edge Computing Devices
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-29 | DOI: 10.1007/s10723-023-09720-8
Xi Zhen, Li Zhen

With the rapid development of digital technologies, scholars and industries are moving into the information age, where data processing is the accounting industry's major challenge. This study aims to analyze the use of these digital technologies (DT) for attaining strategic performance, with the accounting information system (AIS) as a mediator. The study also explores the moderation of the link between DT and strategic performance. Amid this rapid change, competitiveness is crucial for business organizations. Technology is therefore the key factor in maintaining industrial competitiveness, particularly where information plays a vital role in management decisions. Accounting software is a significant tool that efficiently collects data and supports timely decisions, allowing a business strategy to respond quickly to the market. However, available accounting software is costly, and small-scale businesses cannot afford it. This paper therefore develops a digital accounting system that uses artificial intelligence (AI) and edge computing (EC) to process and store accounting data, and introduces a novel edge framework with advanced data processing methods. With the growth of the IoT, data volumes have increased significantly, so traditional cloud platforms are enriched with EC to process the vast amounts of data where they are collected. The business can thus adapt to data of any size and raise its standards in terms of technical content. The system defines distributed storage in the cloud, and the cluster performance of the system and the effects of the design are tested once the system design is complete. Finally, system operation time, load balancing, and rows of data processed are tested experimentally. The results and their analysis demonstrate that EC-based data processing for the AIS improves the acceleration rate, operational efficiency, and execution rate.
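The abstract leaves the edge framework's interfaces unspecified; purely as an illustrative sketch, an edge node in such a system might aggregate raw ledger entries locally and upload only compact per-account summaries to the cloud AIS. The record format here is hypothetical:

```python
from collections import defaultdict

def summarize_at_edge(transactions):
    """Aggregate raw (account, amount) ledger entries into per-account
    totals at the edge, so only the compact summary is uploaded to the
    cloud AIS instead of every raw record."""
    summary = defaultdict(lambda: {"count": 0, "total": 0.0})
    for account, amount in transactions:
        summary[account]["count"] += 1
        summary[account]["total"] += amount
    return dict(summary)

# Hypothetical raw entries collected at one edge device.
raw = [("acct-1", 120.0), ("acct-2", 50.0), ("acct-1", -30.0)]
summary = summarize_at_edge(raw)
```

The design choice being illustrated is simply that aggregation happens where data is collected, which is the EC property the abstract credits for the improved operational efficiency.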

Citations: 0
Development of Analytical Offloading for Innovative Internet of Vehicles Based on Mobile Edge Computing
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-28 | DOI: 10.1007/s10723-023-09719-1
Ming Zhang

Current task-offloading techniques need to perform more effectively. Onboard terminals cannot execute computations efficiently because of the explosive expansion of data flows, the rapid growth of the vehicle population, and the increasing scarcity of spectrum resources. This study therefore proposes a task-offloading technique based on reinforcement learning for the Internet of Vehicles (IoV) edge computing architecture. A system framework for the IoV is first developed: the control centre gathers all vehicle information, while roadside units collect vehicle data from their neighborhoods and send it to a mobile edge computing (MEC) server for processing. Then, to guarantee that job dispatching in the IoV is logical, the computation model, communication approach, interference model, and confidentiality concerns are established. This research examines how best to analyze and design a computation offloading approach for a multiuser smart IoV based on MEC. Because deriving an analytical offloading proportion for a generic MEC-based IoV network is challenging, we present an analytical offloading strategy for several MEC network configurations, covering one-to-one, one-to-two, and two-to-one cases. The proposed analytical offloading strategy matches the brute-force (BF) approach and the best performance of the deep deterministic policy gradient (DDPG) method. For the analytical offloading design of a general MEC-based IoV, the analytical results in this study can serve as a valuable source of information.
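The paper's closed-form proportions are not reproduced in the abstract; a standard single-user version of the idea, under the assumption that the local and offloaded portions of a task execute in parallel, picks the offloading fraction that equalizes local compute time with transmit-plus-edge-compute time:

```python
def offload_fraction(data_bits, local_rate, tx_rate, edge_rate):
    """Fraction of a task to offload so that local compute time equals
    transmission-plus-edge-compute time (parallel execution assumed).

    Solving (1 - a) * t_local = a * t_edge gives
    a* = t_local / (t_local + t_edge).
    """
    t_local = data_bits / local_rate                       # fully local time
    t_edge = data_bits / tx_rate + data_bits / edge_rate   # fully offloaded time
    return t_local / (t_local + t_edge)

# Illustrative numbers: a 10 Mbit task, 5 Mbit/s local processing,
# a 20 Mbit/s uplink, and 50 Mbit/s edge processing.
alpha = offload_fraction(10e6, 5e6, 20e6, 50e6)
```

With these numbers the fully local time is 2.0 s and the fully offloaded time is 0.7 s, so roughly 74% of the task goes to the edge and both branches finish at the same instant, which is the balance condition the analytical design exploits.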

Citations: 0
Edge Computing with Fog-cloud for Heart Data Processing using Particle Swarm Optimized Deep Learning Technique
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-23 | DOI: 10.1007/s10723-023-09706-6
Sheng Chai, Lantian Guo

Chronic illnesses such as heart disease, diabetes, cancer, and respiratory diseases are complex and pose a significant threat to global health. Processing heart data is particularly challenging due to the variability of symptoms. However, advancements in smart wearable devices, computing technologies, and IoT solutions have made heart data processing easier. The proposed model integrates Edge-Fog-Cloud computing to provide rapid and accurate results, making it a promising solution for heart data processing. Patient data are collected using hardware components, and cardiac feature extraction is used to obtain crucial features from the data signals. An Optimized Cascaded Convolutional Neural Network (CCNN) processes these features, and the CCNN's hyperparameters are optimized using both PSO (Particle Swarm Optimization) and GSO (Galactic Swarm Optimization) techniques. The proposed system leverages the strengths of both optimization algorithms to improve the accuracy and efficiency of the heart data processing system: GSO-CCNN optimizes the CCNN's hyperparameters, while PSO-CCNN optimizes the feature selection process. Combining the two algorithms enhances the system's ability to identify relevant features and optimize the CCNN's architecture. Performance analysis demonstrates that the proposed technique, which integrates Edge-Fog-Cloud computing with the combined PSO-CCNN and GSO-CCNN techniques, outperforms models such as PSO-CCNN, GSO-CCNN, WOA-CCNN, and DHOA-CCNN that rely on traditional cloud and edge technologies. The proposed model is evaluated in terms of time, energy consumption, bandwidth, and the standard performance metrics of accuracy, precision, recall, specificity, and F1-score. The comparative analysis thus confirms the proposed system's efficiency over conventional models for heart data processing.
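Neither swarm variant's settings appear in the abstract; the following minimal particle swarm optimizer, minimizing a sphere function as a stand-in for a hyperparameter loss surface, illustrates the general PSO mechanism (the inertia and acceleration coefficients are conventional defaults, not the paper's values):

```python
import random

def pso(objective, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend both pulls."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive pull, social pull
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function (minimum 0 at the origin) as a toy loss surface;
# in the paper's setting the objective would be CCNN validation loss.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

Swapping the sphere objective for a "train the CCNN with these hyperparameters, return validation loss" function is the essence of PSO-based hyperparameter tuning; GSO layers a multi-swarm structure on the same idea.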

Citations: 0
DEEPBIN: Deep Learning Based Garbage Classification for Households Using Sustainable Natural Technologies
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-19 | DOI: 10.1007/s10723-023-09722-6
Yu Song, Xin He, Xiwang Tang, Bo Yin, Jie Du, Jiali Liu, Zhongbao Zhao, Shigang Geng

Today, technologies accessible worldwide are being upgraded with innovative methods. In this research, an intelligent garbage system is designed with state-of-the-art deep learning technologies. Urbanization and rising urban populations generate large amounts of garbage, so managing daily trash from homes and living environments is essential. This research aims to provide an intelligent IoT-based garbage-bin system in which classification is performed using deep learning techniques. The smart bin is capable of sensing a wide variety of household garbage. Although many technologies have been successfully implemented with IoT and machine learning, sustainable natural technologies for managing daily waste are still needed. The innovative IoT-based garbage system uses various sensors, including humidity, temperature, gas, and liquid sensors, to identify the condition of the garbage. The Smart Garbage Bin system is designed first, and data are then collected using a garbage-annotation application. Next, deep learning is used for object detection and classification of garbage images: the Arithmetic Optimization Algorithm (AOA) with Improved RefineDet (IRD) is used for object detection, and the EfficientNet-B0 model is used for classification. The garbage content is identified and prepared to train the deep learning model to perform efficient classification tasks. For evaluation, smart bins are deployed in real time and accuracy is estimated. Furthermore, fine-tuning on region-specific litter photos led to enhanced categorization.
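The classification head is not specified beyond "EfficientNet-B0"; as an illustrative sketch only, the final step maps a backbone feature vector to garbage categories with a linear layer and softmax. The category names, weights, and feature values below are invented stand-ins, not the paper's:

```python
import math

CATEGORIES = ["recyclable", "organic", "hazardous", "other"]  # hypothetical labels

def softmax(logits):
    """Numerically stable softmax over raw class scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(features, weights, biases):
    """Linear classification head: logits = W.x + b, then softmax, then argmax."""
    logits = [sum(w * x for w, x in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return CATEGORIES[best], probs

# Stand-in feature vector and weights; a real system would take these
# from the trained EfficientNet-B0 backbone and its fitted head.
features = [0.2, 0.8, 0.1]
weights = [[1.0, 2.0, 0.0], [0.5, 0.1, 0.3], [0.0, 0.0, 1.0], [0.2, 0.2, 0.2]]
biases = [0.0, 0.0, 0.0, 0.0]
label, probs = classify(features, weights, biases)
```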

Citations: 0
Markovian with Federated Deep Recurrent Neural Network for Edge—IoMT to Improve Healthcare in Smart Cities
IF 5.5 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-19 | DOI: 10.1007/s10723-023-09709-3
Yuliang Gai, Yuxin Liu, Minghao Li, Shengcheng Yang

The architectural design of smart cities should prioritize the provision of critical medical services. This involves establishing improved connectivity and leveraging supercomputing capabilities to enhance the quality of service (QoS) offered to residents. Edge computing is vital in healthcare applications because it enables the low network latencies necessary for real-time data processing. By implementing edge computing, smart cities benefit from reduced latency, increased bandwidth, and improved power-consumption efficiency. In the context of Mobile Edge Computing (MEC), the study proposes a novel approach, the Markovian Decision Process with Federated Deep Recurrent Neural Network (MDP-FDRNN), as the primary algorithm for managing resource allocation. MEC focuses on using edge computing capabilities to process data and perform computations at the network's edges. The conducted tests demonstrate that the MDP-FDRNN algorithm is well suited to resolving high-processing traffic at the network's edges. It significantly reduces processing time, which is particularly crucial for healthcare operations related to patients' health problems. By employing the MDP-FDRNN algorithm in resource-allocation management, smart cities can efficiently use their edge computing infrastructure to handle complex processing tasks. The algorithm's performance in reducing processing time shows its potential to support critical healthcare operations within smart cities, thereby enhancing the overall quality of healthcare services provided to residents. This article underscores the significance of implementing appropriate technology, including edge computing and the IoMT, in developing prosperous smart cities. It also highlights the effectiveness of the MDP-FDRNN algorithm in managing resource allocation and addressing processing challenges at the network's edges, particularly in healthcare operations.

Citations: 0
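The resource-allocation idea in the abstract above can be made concrete with a toy Markov decision process solved by value iteration. Everything here is an illustrative assumption: the two load states, the local/edge actions, the reward numbers, and the deterministic transitions. The paper's MDP-FDRNN replaces this tabular solver with a federated deep recurrent network over a far larger state space.

```python
# Minimal sketch of the MDP view of edge resource allocation: states are
# load levels, actions are "serve locally" vs "offload to edge", and
# rewards penalize latency. Numbers are invented for illustration.

STATES = ["low_load", "high_load"]
ACTIONS = ["local", "edge"]

# reward[state][action]: negative latency cost (assumed values)
REWARD = {
    "low_load":  {"local": -1.0, "edge": -2.0},   # offloading adds overhead
    "high_load": {"local": -8.0, "edge": -3.0},   # edge relieves congestion
}
# transition[state][action] -> next state (deterministic toy dynamics)
NEXT = {
    "low_load":  {"local": "low_load",  "edge": "low_load"},
    "high_load": {"local": "high_load", "edge": "low_load"},
}

def value_iteration(gamma=0.9, iters=100):
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: max(REWARD[s][a] + gamma * V[NEXT[s][a]] for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS, key=lambda a: REWARD[s][a] + gamma * V[NEXT[s][a]])
              for s in STATES}
    return V, policy

V, policy = value_iteration()
print(policy)  # the greedy policy offloads only under high load
```

The fixed point here is easy to check by hand: staying local under low load gives V = -1/(1 - 0.9) = -10, while offloading under high load gives -3 + 0.9 * (-10) = -12, better than congesting locally.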
Integration of a Lightweight Customized 2D CNN Model to an Edge Computing System for Real-Time Multiple Gesture Recognition
IF 5.5 2区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-12-15 DOI: 10.1007/s10723-023-09715-5
Hulin Jin, Zhiran Jin, Yong-Guk Kim, Chunyang Fan

Abstract

The human-machine interface (HMI) collects electrophysiology signals from the patient and uses them to operate a device. However, most applications are still in the testing phase and are typically not available to everyone. Developing wearable HMI devices that are intelligent and more comfortable has been a recent research focus. This work developed a portable, eight-channel electromyography (EMG) signal-based device that can distinguish 21 types of motion. To acquire the EMG signals, an analog front-end (AFE) integrated circuit (IC) was created, and an integrated EMG signal acquisition device combined with a stretchable wristband was built. Using the EMG movement signals of 10 volunteers, a SIAT database of 21 gestures was created. On the SIAT dataset, a lightweight 2D CNN-LSTM model was developed and trained. The signal recognition accuracy is 96.4%, and training took a median of 14 min 13 s. Because of its compact size, the model can run on lower-performance edge computing devices, and it is anticipated that it will eventually be applied to smartphone terminals.

Citations: 0
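Before a 2D CNN-LSTM can classify gestures, the continuous eight-channel EMG stream has to be segmented into fixed-size windows, the 2D "images" the network consumes. The sketch below shows that preprocessing step in plain Python; the 200-sample window, 50% overlap, and 1 kHz rate are assumptions for illustration, not the paper's settings.

```python
# Preprocessing sketch for an 8-channel EMG pipeline: cut a continuous
# multi-channel stream into fixed-length, overlapping windows.
# Window/stride sizes are illustrative assumptions.

def window_emg(samples, window=200, stride=100):
    """samples: list of 8-value frames; returns list of window-sized chunks."""
    windows = []
    for start in range(0, len(samples) - window + 1, stride):
        windows.append(samples[start:start + window])
    return windows

# 1 second of placeholder data at an assumed 1 kHz, 8 channels per frame
stream = [[0.0] * 8 for _ in range(1000)]
chunks = window_emg(stream)
print(len(chunks), len(chunks[0]), len(chunks[0][0]))  # 9 200 8
```

Each 200x8 chunk would then be fed to the convolutional layers, with the LSTM consuming the sequence of per-window features across time.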
Healthcare and Fitness Services: A Comprehensive Assessment of Blockchain, IoT, and Edge Computing in Smart Cities
IF 5.5 2区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-12-15 DOI: 10.1007/s10723-023-09712-8
Yang-Yang Liu, Ying Zhang, Yue Wu, Man Feng

Edge computing, blockchain technology, and the Internet of Things (IoT) have all been identified as key enablers of smart-city initiatives. A comprehensive examination of the research found that IoT, blockchain, and edge computing are now major factors in how efficiently smart cities provide healthcare, with IoT the most widely used of the three. Edge computing and blockchain technology are particularly applicable to the healthcare industry for processing intelligent and secure data, and edge computing has been touted as an important technology for low-cost remote access, cutting latency, and boosting efficiency. Smart cities incorporate intelligent devices to enhance residents' day-to-day lives; the Internet of Medical Things (IoMT) and edge computing (EC) underpin these devices. Raising the quality of service (QoS) of healthcare requires supercomputing that connects the IoMT to intelligent devices through edge processing. Because the healthcare applications of smart cities need low latencies, EC is necessary to reduce latency, energy use, and bandwidth demands while improving scalability. This paper developed a deep Q reinforcement learning algorithm with evolutionary optimization and compared it with traditional deep learning approaches for process congestion, reducing the time and latency of patient health monitoring. The energy consumption, latency, and cost of the proposed model are lower than those of existing techniques. Among 100 tasks, nearly 95% are offloaded efficiently in the minimum time.

Citations: 0
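The offloading decision the abstract describes can be sketched as a tiny tabular Q update: two task sizes, two placement actions, and a reward that combines latency and energy with an assumed weight. All the states, costs, and weights are invented for illustration; the paper uses a deep Q network with evolutionary optimization over a realistic state space instead of this table.

```python
# Toy sketch of the deep-Q idea behind task offloading, shrunk to a
# tabular one-step update. cost = latency + 0.5 * energy, all numbers
# illustrative assumptions.

COST = {
    "light_task": {"local": 1.0 + 0.5 * 1.0, "edge": 3.0 + 0.5 * 0.2},
    "heavy_task": {"local": 9.0 + 0.5 * 4.0, "edge": 4.0 + 0.5 * 0.5},
}

Q = {s: {a: 0.0 for a in acts} for s, acts in COST.items()}
alpha = 0.5
for _ in range(50):                               # repeated experience sweeps
    for s, acts in COST.items():
        for a, cost in acts.items():
            reward = -cost                        # lower cost => higher reward
            Q[s][a] += alpha * (reward - Q[s][a]) # one-step (terminal) Q update

policy = {s: max(acts, key=acts.get) for s, acts in Q.items()}
print(policy)  # light tasks stay local, heavy tasks go to the edge
```

With these numbers the Q values converge to the negated costs, so the greedy policy keeps light tasks local (cost 1.5 vs 3.1) and offloads heavy ones (4.25 vs 11.0).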
Cost-Availability Aware Scaling: Towards Optimal Scaling of Cloud Services
IF 5.5 2区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-12-07 DOI: 10.1007/s10723-023-09718-2
Andre Bento, Filipe Araujo, Raul Barbosa

Cloud services have become increasingly popular for developing large-scale applications due to the abundance of resources they offer. The scalability and accessibility of these resources have made it easier for organizations of all sizes to develop and deploy sophisticated, demanding applications that meet demand instantly. Because cloud use incurs monetary fees, one of the challenges for application developers and operators is to balance budget constraints with crucial quality attributes, such as availability. Industry standards usually default to simplified solutions that cannot consider competing objectives simultaneously. Our research addresses this challenge by proposing a Cost-Availability Aware Scaling (CAAS) approach that uses multi-objective optimization of availability and cost. We evaluate CAAS on two open-source microservices applications, yielding improved results compared to the industry-standard CPU-based Autoscaler (AS). CAAS finds optimal system configurations with higher availability, between 1 and 2 nines on average, and costs reduced by 6% on average with the first application, and 1 nine of availability on average with costs reduced by up to 18% on average with the second application. The gap in the results between our model and the default AS suggests that operators can significantly improve the operation of their applications.

Citations: 0
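The multi-objective core of CAAS can be illustrated with the Pareto-dominance test it implies: keep only the configurations for which no alternative is simultaneously cheaper and at least as available. The configurations and numbers below are made up for illustration; the paper's optimizer searches a real configuration space rather than filtering a hand-written list.

```python
# Sketch of Pareto-front extraction over (cost, availability) pairs.
# A configuration is dominated if another one is no worse on both
# objectives and strictly better on at least one.

def pareto_front(configs):
    """configs: list of (name, monthly_cost, availability) tuples."""
    front = []
    for name, cost, avail in configs:
        dominated = any(c2 <= cost and a2 >= avail and (c2 < cost or a2 > avail)
                        for _, c2, a2 in configs)
        if not dominated:
            front.append(name)
    return front

configs = [
    ("2-replicas", 100, 0.990),
    ("3-replicas", 150, 0.999),
    ("4-replicas", 220, 0.999),   # dominated: same availability, higher cost
    ("1-replica",   60, 0.950),
]
print(pareto_front(configs))  # ['2-replicas', '3-replicas', '1-replica']
```

A scaler could then pick from the surviving front according to the operator's budget, which is exactly the trade-off a CPU-threshold autoscaler cannot express.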