Research on electromagnetic vibration energy harvester for cloud-edge-end collaborative architecture in power grid
Pub Date : 2023-11-13 DOI: 10.1186/s13677-023-00541-4
Minghao Zhang, Rui Song, Jun Zhang, Chenyuan Zhou, Guozheng Peng, Haoyang Tian, Tianyi Wu, Yunjia Li
Abstract With the deepening construction of the new-type power system, the grid has become increasingly complex, and its safe and stable operation faces growing challenges. To improve the quality and efficiency of power grid management, State Grid Corporation continues to promote the digital transformation of the grid, proposing concepts such as the cloud-edge-end collaborative architecture and the power Internet of Things, for which comprehensive sensing of the grid is an important foundation. Power equipment is widely distributed and comes in a wide variety of types, and online monitoring of it involves the deployment and application of a large number of power sensors. However, there are various problems in implementing active power supplies for these sensors, which restrict their service life. To collect the vibration energy widely present in the grid and use it to power sensors, this paper proposes an electromagnetic vibration energy harvester and its design methodology based on a four-straight-beam structure, and fabricates a prototype. The vibration pickup unit of the harvester is composed of polyimide cantilevers, a permanent magnet, and a mass-adjusting spacer. The mass-adjusting spacer tunes the vibration frequency of the pickup unit to match the target frequency. A key novel method is proposed to increase the number of coil turns within a limited volume by stacking flexible coils, which boosts the output voltage of the energy harvester. A test system is built to evaluate the performance of the prototype harvester. According to the test results, the resonant frequency of the device is $$100~\mathrm{Hz}$$, the output peak-to-peak voltage at the resonant frequency is $$2.56~\mathrm{V}$$ at an acceleration of $$1~g$$, and the maximum output power is around $$151.7~\mathrm{\mu W}$$. The proposed four-straight-beam electromagnetic vibration energy harvester has obvious advantages in output voltage and power compared with state-of-the-art harvesters. It can provide sufficient power for various sensors and support the construction of the cloud-edge-end architecture and the deployment of a massive number of power sensors. In the last part of this article, a self-powered transformer vibration monitor is presented, demonstrating the practicality of the proposed vibration energy harvester.
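As a worked illustration of the frequency-matching design step (the stiffness and mass numbers below are hypothetical, not the paper's values), the vibration pickup unit can be modeled as a spring-mass resonator whose natural frequency follows from the combined beam stiffness $$k$$ and the moving mass $$m$$ (magnet plus spacer):

$$f_r = \frac{1}{2\pi}\sqrt{\frac{k}{m}}$$

Solving for the mass that hits the target, $$m = k/(2\pi f_r)^2$$; for example, a combined beam stiffness of $$k = 1000~\mathrm{N/m}$$ would require $$m \approx 2.53~\mathrm{g}$$ to resonate at $$100~\mathrm{Hz}$$. This is the role of the mass-adjusting spacer: trimming $$m$$ shifts $$f_r$$ onto the dominant vibration frequency of grid equipment, which in a 50 Hz grid is twice the line frequency, i.e., 100 Hz.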
{"title":"Research on electromagnetic vibration energy harvester for cloud-edge-end collaborative architecture in power grid","authors":"Minghao Zhang, Rui Song, Jun Zhang, Chenyuan Zhou, Guozheng Peng, Haoyang Tian, Tianyi Wu, Yunjia Li","doi":"10.1186/s13677-023-00541-4","DOIUrl":"https://doi.org/10.1186/s13677-023-00541-4","url":null,"abstract":"Abstract With the deepening of the construction of the new type power system, the grid has become increasingly complex, and its safe and stable operation is facing more challenges. In order to improve the quality and efficiency of power grid management, State Grid Corporation continues to promote the digital transformation of the grid, proposing concepts such as cloud-edge-end collaborative architecture and power Internet of Things, for which comprehensive sensing of the grid is an important foundation. Power equipment is widely distributed and has a wide variety of types, and online monitoring of them involves the deployment and application of a large number of power sensors. However, there are various problems in implementing active power supplies for these sensors, which restrict their service life. In order to collect and utilize the vibration energy widely present in the grid to provide power for sensors, this paper proposes an electromagnetic vibration energy harvester and its design methodology based on a four-straight-beam structure, and carries out a trial production of prototype. The vibration pickup unit of the harvester is composed of polyimide cantilevers, a permanent magnet and a mass-adjusting spacer. The mass-adjusting spacer can control the vibration frequency of the vibration unit to match the target frequency. In this paper, a key novel method is proposed to increase the number of turns in a limited volume by stacking flexible coils, which can boost the output voltage of the energy harvester. A test system is built to conduct a performance test for the prototype harvester. According to the test results, the resonant frequency of the device is $$100 Hz$$ <mml:math xmlns:mml=\"http://www.w3.org/1998/Math/MathML\"> <mml:mrow> <mml:mn>100</mml:mn> <mml:mspace /> <mml:mi>H</mml:mi> <mml:mi>z</mml:mi> </mml:mrow> </mml:math> , the output peak-to-peak voltage at the resonant frequency is $$2.56 V$$ <mml:math xmlns:mml=\"http://www.w3.org/1998/Math/MathML\"> <mml:mrow> <mml:mn>2.56</mml:mn> <mml:mspace /> <mml:mi>V</mml:mi> </mml:mrow> </mml:math> at the acceleration of $$1 g$$ <mml:math xmlns:mml=\"http://www.w3.org/1998/Math/MathML\"> <mml:mrow> <mml:mn>1</mml:mn> <mml:mspace /> <mml:mi>g</mml:mi> </mml:mrow> </mml:math> , and the maximum output power is around $$151.7 mu W$$ <mml:math xmlns:mml=\"http://www.w3.org/1998/Math/MathML\"> <mml:mrow> <mml:mn>151.7</mml:mn> <mml:mspace /> <mml:mi>μ</mml:mi> <mml:mi>W</mml:mi> </mml:mrow> </mml:math> . The proposed four-straight-beam electromagnetic vibration energy harvester in this paper has obvious advantages in output voltage and power compared with state-of-the-art harvesters. It can provide sufficient power for various sensors, support the construction of cloud-edge-end architecture and the deployment of a massive number of power sensors. 
In the last part of this article, a self-powered transformer vibration monitor is presented, demonstrating the practicality of the proposed vibration energy","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"8 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136352059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FedEem: a fairness-based asynchronous federated learning mechanism
Pub Date : 2023-11-09 DOI: 10.1186/s13677-023-00535-2
Wei Gu, Yifan Zhang
Abstract Federated learning is a mechanism for model training in distributed systems that aims to protect data privacy while achieving collective intelligence. In traditional synchronous federated learning, all participants must update the model synchronously, so lagging participants can drag down the overall model update frequency. To solve this problem, asynchronous federated learning introduces an asynchronous aggregation mechanism that allows participants to update models at their own time and rate; each updated edge model is then aggregated on the cloud, speeding up the training process. However, under the asynchronous aggregation mechanism, federated learning faces new challenges such as convergence difficulties and unfair model accuracy. This paper proposes a fairness-based asynchronous federated learning mechanism that reduces the adverse effects of device and data heterogeneity on the convergence process by using outdatedness- and interference-aware weight aggregation, and promotes model personalization and fairness through an early exit mechanism. Mathematical analysis derives an upper bound on the convergence speed and the necessary conditions for the hyperparameters. Experimental results demonstrate the advantages of the proposed method over baseline algorithms, indicating its effectiveness in promoting convergence speed and fairness in federated learning.
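A minimal sketch of the staleness-aware asynchronous aggregation idea described above; the decay function and mixing rate are illustrative assumptions, not FedEem's published rule:

```python
import copy

def staleness_weight(tau, alpha0=0.6):
    """Mixing coefficient that decays with staleness tau =
    (current server round) - (round the client's base model came from)."""
    return alpha0 / (1.0 + tau)

def async_aggregate(global_model, client_model, client_round, server_round):
    """Merge one late-arriving client update into the global model.
    Models are dicts mapping parameter names to lists of floats."""
    alpha = staleness_weight(server_round - client_round)
    merged = copy.deepcopy(global_model)
    for name, g_param in global_model.items():
        merged[name] = [(1 - alpha) * g + alpha * c
                        for g, c in zip(g_param, client_model[name])]
    return merged

# Usage: a client trained on round-3 weights arrives while the server is at round 7,
# so its contribution is damped (staleness tau = 4 -> alpha = 0.12).
g = {"w": [0.5, -0.2]}
c = {"w": [0.9, 0.1]}
print(async_aggregate(g, c, client_round=3, server_round=7))
```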
{"title":"FedEem: a fairness-based asynchronous federated learning mechanism","authors":"Wei Gu, Yifan Zhang","doi":"10.1186/s13677-023-00535-2","DOIUrl":"https://doi.org/10.1186/s13677-023-00535-2","url":null,"abstract":"Abstract Federated learning is a mechanism for model training in distributed systems, aiming to protect data privacy while achieving collective intelligence. In traditional synchronous federated learning, all participants must update the model synchronously, which may result in a decrease in the overall model update frequency due to lagging participants. In order to solve this problem, asynchronous federated learning introduces an asynchronous aggregation mechanism, allowing participants to update models at their own time and rate, and then aggregate each updated edge model on the cloud, thus speeding up the training process. However, under the asynchronous aggregation mechanism, federated learning faces new challenges such as convergence difficulties and unfair model accuracy. This paper first proposes a fairness-based asynchronous federated learning mechanism, which reduces the adverse effects of device and data heterogeneity on the convergence process by using outdatedness and interference-aware weight aggregation, and promotes model personalization and fairness through an early exit mechanism. Mathematical analysis derives the upper bound of convergence speed and the necessary conditions for hyperparameters. Experimental results demonstrate the advantages of the proposed method compared to baseline algorithms, indicating the effectiveness of the proposed method in promoting convergence speed and fairness in federated learning.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":" 33","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135242621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive device sampling and deadline determination for cloud-based heterogeneous federated learning
Pub Date : 2023-11-03 DOI: 10.1186/s13677-023-00515-6
Deyu Zhang, Wang Sun, Zi-Ang Zheng, Wenxin Chen, Shiwen He
Abstract As a new approach to machine learning, federated learning enables distributed training on edge devices and aggregates local models into a global model. The edge devices that participate in federated learning are highly heterogeneous in terms of computing power, device state, and data distribution, making it challenging to converge models efficiently. In this paper, we propose FedState, an adaptive device sampling and deadline determination technique for cloud-based heterogeneous federated learning. Specifically, we consider the cloud as a central server that orchestrates federated learning over a large pool of edge devices. To improve the efficiency of model convergence in heterogeneous federated learning, our approach adaptively samples devices to join each round of training and determines the deadline for result submission based on device state. We analyze existing device usage traces to build device state models for different scenarios and design a dynamic importance measurement mechanism based on device availability, data utility, and computing power. We also propose a deadline determination module that dynamically sets the deadline according to the availability of all sampled devices, local training time, and communication time, enabling more clients to submit local models more efficiently. Due to the variability of device state, we design an experience-driven algorithm based on Deep Reinforcement Learning (DRL) that can dynamically adjust our sampling and deadline policies according to the current environment state. We demonstrate the effectiveness of our approach through a series of experiments on the FMNIST dataset and show that our method outperforms current state-of-the-art approaches in terms of model accuracy and convergence speed.
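A compact sketch of the two pieces the abstract describes, device importance scoring and deadline selection; the weights, the coverage percentile, and the field names are illustrative assumptions, not FedState's published formulas:

```python
def importance(availability, data_utility, compute_power,
               weights=(0.4, 0.4, 0.2)):
    """Score a device by availability, data utility, and compute power,
    each assumed pre-normalized to [0, 1]."""
    wa, wd, wc = weights
    return wa * availability + wd * data_utility + wc * compute_power

def sample_devices(devices, k):
    """Pick the k highest-importance devices for this round."""
    ranked = sorted(devices, key=lambda d: importance(
        d["avail"], d["utility"], d["compute"]), reverse=True)
    return ranked[:k]

def round_deadline(sampled, coverage=0.8):
    """Deadline that lets a target fraction of sampled devices finish;
    per-device completion time = local training + communication."""
    times = sorted(d["train_s"] + d["comm_s"] for d in sampled)
    return times[min(int(coverage * len(times)), len(times) - 1)]

devices = [
    {"avail": 0.9, "utility": 0.7, "compute": 0.5, "train_s": 30, "comm_s": 5},
    {"avail": 0.6, "utility": 0.9, "compute": 0.8, "train_s": 20, "comm_s": 8},
    {"avail": 0.3, "utility": 0.4, "compute": 0.9, "train_s": 15, "comm_s": 2},
]
chosen = sample_devices(devices, k=2)
print(round_deadline(chosen))  # seconds until this round's cutoff
```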
{"title":"Adaptive device sampling and deadline determination for cloud-based heterogeneous federated learning","authors":"Deyu Zhang, Wang Sun, Zi-Ang Zheng, Wenxin Chen, Shiwen He","doi":"10.1186/s13677-023-00515-6","DOIUrl":"https://doi.org/10.1186/s13677-023-00515-6","url":null,"abstract":"Abstract As a new approach to machine learning, Federated learning enables distributned traiing on edge devices and aggregates local models into a global model. The edge devices that participate in federated learning are highly heterogeneous in terms of computing power, device state, and data distribution, making it challenging to converge models efficiently. In this paper, we propose FedState, which is an adaptive device sampling and deadline determination technique for cloud-based heterogeneous federated learning. Specifically, we consider the cloud as a central server that orchestrates federated learning on a large pool of edge devices. To improve the efficiency of model convergence in heterogeneous federated learning, our approach adaptively samples devices to join each round of training and determines the deadline for result submission based on device state. We analyze existing device usage traces to build device state models in different scenarios and design a dynamic importance measurement mechanism based on device availability, data utility, and computing power. We also propose a deadline determination module that dynamically sets the deadline according to the availability of all sampled devices, local training time, and communication time, enabling more clients to submit local models more efficiently. Due to the variability of device state, we design an experience-driven algorithm based on Deep Reinforcement Learning (DRL) that can dynamically adjust our sampling and deadline policies according to the current environment state. We demonstrate the effectiveness of our approach through a series of experiments with the FMNIST dataset and show that our method outperforms current state-of-the-art approaches in terms of model accuracy and convergence speed.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"48 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135869031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review on the application of cloud computing in the sports industry
Pub Date : 2023-11-02 DOI: 10.1186/s13677-023-00531-6
Lei Xiao, Yang Cao, Yihe Gai, Juntong Liu, Ping Zhong, Mohammad Mahdi Moghimi
Abstract The transformative impact of cloud computing has permeated various industries, reshaping traditional business models and accelerating digital transformation. In the sports industry, the adoption of cloud computing is burgeoning, significantly enhancing efficiency and unlocking new potential. This paper provides a comprehensive review of the applications of cloud computing in the sports industry, focusing on areas such as athlete performance tracking, fan engagement, operations management, sports marketing, and event hosting. The challenges and potential future developments of cloud computing applications in this industry are also discussed. The purpose of this review is to provide a thorough understanding of the state-of-the-art applications of cloud computing in the sports industry and to inspire further research and development in this field.
{"title":"Review on the application of cloud computing in the sports industry","authors":"Lei Xiao, Yang Cao, Yihe Gai, Juntong Liu, Ping Zhong, Mohammad Mahdi Moghimi","doi":"10.1186/s13677-023-00531-6","DOIUrl":"https://doi.org/10.1186/s13677-023-00531-6","url":null,"abstract":"Abstract The transformative impact of cloud computing has permeated various industries, reshaping traditional business models and accelerating digital transformations. In the sports industry, the adoption of cloud computing is burgeoning, significantly enhancing efficiency and unlocking new potentials. This paper provides a comprehensive review of the applications of cloud computing in the sports industry, focusing on areas such as athlete performance tracking, fan engagement, operations management, sports marketing, and event hosting. Moreover, the challenges and potential future developments of cloud computing applications in this industry are also discussed. The purpose of this review is to provide a thorough understanding of the state-of-the-art applications of cloud computing in the sports industry and to inspire further research and development in this field.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"33 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135973261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving cloud storage and privacy security for digital twin based medical records
Pub Date : 2023-10-30 DOI: 10.1186/s13677-023-00523-6
Haibo Yi
Abstract As digital transformation progresses across industries, digital twins have emerged as an important technology. In healthcare, digital twins are created by digitizing patient parameters, medical records, and treatment plans to enable personalized care, assist diagnosis, and improve planning. Data is the core of digital twins, originating from physical and virtual entities as well as services; once processed and integrated, it drives the various components. Medical records are critical healthcare data but present unique challenges for digital twins: storing them in plaintext risks privacy leaks, while conventional encryption hinders retrieval. To address this, we present a cloud-based solution combining post-quantum searchable encryption. Our system generates keys using Physical Unclonable Functions (PUFs), encrypts medical records in cloud storage, verifies records using blockchain, and retrieves records via the cloud. By integrating cloud encryption, blockchain verification, and cloud retrieval, we propose a secure and efficient cloud-based medical records system for digital twins. Our implementation demonstrates that the system provides users with efficient and secure medical record services compared to related designs. This highlights digital twins' potential to transform healthcare through secure, data-driven personalized care, diagnosis, and planning.
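To make the retrieval idea concrete, here is a minimal sketch of a symmetric searchable index of the kind the abstract alludes to: keywords are mapped to HMAC trapdoors so the cloud can match queries without seeing plaintext. This is a generic textbook construction for illustration only; the paper's actual post-quantum scheme and PUF-based key generation are not reproduced here, and record encryption is treated as an opaque blob:

```python
import hmac
import hashlib

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Deterministic search token; the server never sees the keyword."""
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

class SearchableIndex:
    """Server-side index mapping trapdoors to encrypted-record IDs."""
    def __init__(self):
        self.posting = {}  # trapdoor bytes -> list of record IDs

    def add(self, token: bytes, record_id: str):
        self.posting.setdefault(token, []).append(record_id)

    def search(self, token: bytes):
        return self.posting.get(token, [])

# Client side: index an encrypted record under its keywords.
key = b"key-derived-from-PUF-response"  # placeholder; a real key would come from a PUF
index = SearchableIndex()
for kw in ("hypertension", "2023-10"):
    index.add(trapdoor(key, kw), record_id="rec-001")

# Later: query without revealing the keyword to the server.
print(index.search(trapdoor(key, "hypertension")))  # ['rec-001']
```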
{"title":"Improving cloud storage and privacy security for digital twin based medical records","authors":"Haibo Yi","doi":"10.1186/s13677-023-00523-6","DOIUrl":"https://doi.org/10.1186/s13677-023-00523-6","url":null,"abstract":"Abstract As digital transformation progresses across industries, digital twins have emerged as an important technology. In healthcare, digital twins are created by digitizing patient parameters, medical records, and treatment plans to enable personalized care, assist diagnosis, and improve planning. Data is core to digital twins, originating from physical and virtual entities as well as services. Once processed and integrated, data drives various components. Medical records are critical healthcare data but present unique challenges for digital twins. However, directly storing or encrypting medical records has issues. Plaintext risks privacy leaks while encryption hinders retrieval. To address this, we present a cloud-based solution combining post-quantum searchable encryption. Our system includes key generation using Physical Unable Functions (PUF). It encrypts medical records in cloud storage, verifies records using blockchain, and retrieves records via cloud. By integrating cloud encryption, blockchain verification and cloud retrieval, we propose a secure and efficient cloud-based medical records system for digital twins. Our implementation demonstrates the system provides users efficient and secure medical record services, compared to related designs. This highlights digital twins’ potential to transform healthcare through secure data-driven personalized care, diagnosis and planning.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"134 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136104087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
sRetor: a semi-centralized regular topology routing scheme for data center networking
Pub Date : 2023-10-25 DOI: 10.1186/s13677-023-00521-8
Zequn Jia, Qiang Liu, Yantao Sun
Abstract The performance of the data center network is critical for lowering costs and increasing efficiency. The software-defined networking (SDN) technique has been adopted in data center networks due to the recent demand for advanced network control and flexibility. However, the rapid growth of data centers increases the complexity of control and management processes. With the rapid adoption of SDN, two critical challenges arise in large-scale data center networks: 1) extra packet delay on the separated control plane and 2) controller bottlenecks in large-scale topologies. In this paper we propose sRetor, a topology-description-language-based routing approach for regular data center networks that leverages their regularity. sRetor aims to reduce packet waiting time and controller workload in software-defined data center networking. We propose moving part of the forwarding decision-making from the controller to the switches to eliminate unnecessary control plane delay and reduce controller workload, so the sRetor controller is only responsible for troubleshooting complicated failures and on-demand traffic scheduling. Our numerical and experimental results show that sRetor reduces the flow start time by over 68% and the failover time by over 84%.
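sRetor's topology description language is not detailed in this listing, so as a generic illustration of how regularity lets a switch make a forwarding decision locally, here is a classic fat-tree-style next-hop computation from address coordinates alone (an assumption for illustration, not sRetor's actual scheme):

```python
def next_hop(switch_pod, switch_layer, dst_pod, dst_edge, dst_host, k):
    """Local forwarding decision in a k-ary fat-tree, computed from
    addresses alone -- no controller round-trip needed.
    Layers: 0 = edge, 1 = aggregation, 2 = core."""
    if switch_layer == 0:  # edge switch
        if dst_pod == switch_pod:
            return ("host-port", dst_host)        # destination attached here
        return ("up-port", dst_host % (k // 2))   # spread load toward aggregation
    if switch_layer == 1:  # aggregation switch
        if dst_pod == switch_pod:
            return ("down-port", dst_edge)        # down to the right edge switch
        return ("up-port", dst_edge % (k // 2))   # up toward a core switch
    return ("down-port", dst_pod)                 # core: pod index picks the port

# A k=4 aggregation switch in pod 0 forwarding toward pod 2, edge 1, host 0:
print(next_hop(switch_pod=0, switch_layer=1, dst_pod=2, dst_edge=1, dst_host=0, k=4))
# -> ('up-port', 1)
```

Because the decision is pure address arithmetic, only exceptions (failures, traffic engineering) need to escalate to the controller, which is the division of labor the abstract describes.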
{"title":"sRetor: a semi-centralized regular topology routing scheme for data center networking","authors":"Zequn Jia, Qiang Liu, Yantao Sun","doi":"10.1186/s13677-023-00521-8","DOIUrl":"https://doi.org/10.1186/s13677-023-00521-8","url":null,"abstract":"Abstract The performance of the data center network is critical for lowering costs and increasing efficiency. The software-defined networks (SDN) technique has been adopted in data center networks due to the recent emergence of advanced network control and flexibility demand. However, the rapid growth of data centers increases the complexity of control and management processes. With the rapid adoption of SDN, the following critical challenges arise in large-scale data center networks: 1) extra packet delay on the separated control plane and 2) controller bottleneck in large-scale topology. We propose sRetor in this paper, a topology-description-language-based routing approach for regular data center networks that leverages data center networks’ regularity. sRetor aims to reduce the packet waiting time and controller workload in software-defined data center networking. We propose to move partial forwarding decision-making from the controller to switches to eliminate unnecessary control plane delay and reduce controller workload. Therefore the sRetor controller is only responsible for troubleshooting complicated failures and on-demand traffic scheduling. Our numerical and experimental results show that sRetor reduces the flow start time by over 68% and the fail-over time by over 84%.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"64 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135111953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent acceptance systems for distribution automation terminals: an overview of edge computing technologies and applications
Pub Date : 2023-10-23 DOI: 10.1186/s13677-023-00529-0
Mingzeng Zhu, Mingzhen Liang, Hefeng Li, Ying Lu, Min Pang
Abstract The investigation into intelligent acceptance systems for distribution automation terminals has spanned over a decade, furnishing indispensable assistance to the power industry. Integrating cutting-edge edge computing technologies into these systems has provided effective, low-latency, and energy-efficient solutions. This paper provides a comprehensive review and synthesis of research achievements in the field of intelligent acceptance systems for distribution automation terminals over the past few years. Firstly, it introduces the definition, composition, functions, and significance of distribution automation terminals, analyzes the advantages of employing edge computing in this domain, and elaborates on the design and implementation of intelligent acceptance systems based on edge computing technology. Additionally, it examines the technical challenges, security, and privacy issues associated with applying edge computing in intelligent acceptance systems and proposes practical solutions. Finally, it summarizes the contributions and significance of this work and provides an outlook on future research directions. It is evident from the review that the integration of edge computing has effectively alleviated these challenges, but new issues await resolution.
{"title":"Intelligent acceptance systems for distribution automation terminals: an overview of edge computing technologies and applications","authors":"Mingzeng Zhu, Mingzhen Liang, Hefeng Li, Ying Lu, Min Pang","doi":"10.1186/s13677-023-00529-0","DOIUrl":"https://doi.org/10.1186/s13677-023-00529-0","url":null,"abstract":"Abstract The investigation into intelligent acceptance systems for distribution automation terminals has spanned over a decade, furnishing indispensable assistance to the power industry. The integration of cutting-edge edge computing technologies into these systems has presented efficacious, low-latency, and energy-efficient remedies. This paper provides a comprehensive review and synthesis of research achievements in the field of intelligent acceptance systems for distribution automation terminals over the past few years. Firstly, this paper introduces the definition, composition, functions, and significance of distribution automation terminals, analyzes the advantages of employing edge computing in this domain, and elaborates on the design and implementation of intelligent acceptance systems based on edge computing technology. Additionally, this paper examines the technical challenges, security, and privacy issues associated with the application of edge computing in intelligent acceptance systems and proposes practical solutions. Finally, this paper summarizes the contributions and significance of this paper and provides an outlook on future research directions. It is evident from the review that the integration of edge computing has effectively alleviated these challenges, but new issues await resolution.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135366367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An edge server deployment method based on optimal benefit and genetic algorithm
Pub Date : 2023-10-18 DOI: 10.1186/s13677-023-00524-5
Hongfan Ye, Buqing Cao, Jianxun Liu, Pei Li, Bing Tang, Zhenlian Peng
Abstract With the speedy advancement and accelerated popularization of 5G networks, the provision and request of services through mobile smart terminals have become a hot topic in the development of mobile service computing. In this scenario, an efficient and reasonable edge server deployment solution can effectively reduce the deployment cost and communication latency of mobile smart terminals, while significantly improving investment efficiency and resource utilization. Focusing on the issue of edge server placement in the mobile service computing environment, this paper proposes an edge server deployment method based on optimal benefit quantity and a genetic algorithm. The method first calculates, based on a channel selection strategy for optimal communication impact benefit, the quantity of edge servers that achieves the optimal benefit. Then, edge server deployment is converted into a dual-objective optimization problem under three constraints, which finds the best locations to deploy edge servers by balancing the workload of edge servers and minimizing the communication delay between clients and edge servers. Finally, a genetic algorithm iteratively searches for the optimal edge server deployment. A series of experiments are performed on the Mobile Communication Base Station Data Set of Shanghai Telecom, and the experimental results verify that, under the limit of the optimal benefit quantity of edge servers, the proposed method outperforms MIP, K-means, ESPHA, Top-K, and Random in terms of effectively reducing communication delays and balancing workloads.
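A small sketch of the genetic-algorithm loop for the dual-objective placement described above; the fitness weighting, the operators, and the toy site/client data are illustrative assumptions, not the paper's exact formulation:

```python
import random

random.seed(0)

# Hypothetical inputs: candidate base-station sites and client coordinates.
SITES = [(0, 0), (4, 1), (2, 5), (7, 3), (5, 6), (1, 8)]
CLIENTS = [(random.uniform(0, 8), random.uniform(0, 8)) for _ in range(40)]
K = 3  # number of edge servers, fixed beforehand by the optimal-benefit step

def fitness(placement):
    """Lower is better: mean client-to-nearest-server distance (delay proxy)
    plus the spread of per-server load (balance proxy)."""
    loads = [0] * K
    total = 0.0
    for cx, cy in CLIENTS:
        dists = [((cx - SITES[s][0]) ** 2 + (cy - SITES[s][1]) ** 2) ** 0.5
                 for s in placement]
        nearest = min(range(K), key=lambda i: dists[i])
        loads[nearest] += 1
        total += dists[nearest]
    return total / len(CLIENTS) + (max(loads) - min(loads)) * 0.1

def crossover(a, b):
    """Child draws K distinct sites from the union of its parents."""
    pool = list(dict.fromkeys(a + b))
    return random.sample(pool, K)

def mutate(p):
    """Swap one chosen site for a random candidate, keeping sites distinct."""
    q = p[:]
    q[random.randrange(K)] = random.randrange(len(SITES))
    return q if len(set(q)) == K else p

pop = [random.sample(range(len(SITES)), K) for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness)          # elitist selection
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(10)]

best = min(pop, key=fitness)
print(sorted(best), round(fitness(best), 3))
```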
{"title":"An edge server deployment method based on optimal benefit and genetic algorithm","authors":"Hongfan Ye, Buqing Cao, Jianxun Liu, Pei Li, Bing Tang, Zhenlian Peng","doi":"10.1186/s13677-023-00524-5","DOIUrl":"https://doi.org/10.1186/s13677-023-00524-5","url":null,"abstract":"Abstract With the speedy advancement and accelerated popularization of 5G networks, the provision and request of services through mobile smart terminals have become a hot topic in the development of mobile service computing. In this scenario, an efficient and reasonable edge server deployment solution can effectively reduce the deployment cost and communication latency of mobile smart terminals, while significantly improving investment efficiency and resource utilization. Focusing on the issue of edge server placement in mobile service computing environment, this paper proposes an edge server deployment method based on optimal benefit quantity and genetic algorithm. This method is firstly, based on a channel selection strategy for optimal communication impact benefits, it calculates the quantity of edge servers which can achieve optimal benefit. Then, the issue of edge server deployment is converted to a dual-objective optimization problem under three constraints to find the best locations to deploy edge servers, according to balancing the workload of edge servers and minimizing the communication delay among clients and edge servers. Finally, the genetic algorithm is utilized to iteratively optimize for finding the optimal resolution of edge server deployment. A series of experiments are performed on the Mobile Communication Base Station Data Set of Shanghai Telecom, and the experimental results verify that beneath the limit of the optimal benefit quantity of edge servers, the proposed method outperforms MIP, K-means, ESPHA, Top-K, and Random in terms of effectively reducing communication delays and balancing workloads.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135824251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability-aware failure recovery for cloud computing based automatic train supervision systems in urban rail transit using deep reinforcement learning
Pub Date : 2023-10-17 DOI: 10.1186/s13677-023-00502-x
Li Zhu, Qingheng Zhuang, Hailin Jiang, Hao Liang, Xinjun Gao, Wei Wang
Abstract As urban rail transit construction advances with information technology, modernization, informatization, and intelligence have become the direction of development, and a growing number of cloud platforms are being built for urban transit. However, the increasing scale of urban rail cloud platforms, coupled with the deployment of rail safety applications on them, presents a huge challenge to cloud reliability. One of the key components of urban rail transit cloud platforms is Automatic Train Supervision (ATS). Failure of the ATS cloud service would make trains less punctual and decrease traffic efficiency, so research on cloud computing based fault tolerance methods is essential to improve the reliability of ATS cloud services. This paper proposes a proactive, reliability-aware failure recovery method for ATS cloud services based on reinforcement learning. We formulate the problem of penalty error decision and resource-efficient optimization using the advantage actor-critic (A2C) algorithm. To maintain the freshness of information, we use the Age of Information (AoI) to train the agent, and we construct the agent using Long Short-Term Memory (LSTM) to improve its sensitivity to fault events. Simulation results demonstrate that our proposed approach, LSTM-A2C, can effectively identify and correct faults in ATS cloud services, improving service reliability.
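A minimal sketch of the LSTM-based actor-critic agent the abstract describes, written in PyTorch as an assumption; the observation layout (service health metrics plus an AoI staleness feature), dimensions, and reward are placeholders, not the paper's design:

```python
import torch
import torch.nn as nn

class LSTMActorCritic(nn.Module):
    """Shared LSTM trunk over the observation history; the actor head picks
    a recovery action, the critic head estimates state value."""
    def __init__(self, obs_dim=8, hidden=64, n_actions=4):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, n_actions)
        self.critic = nn.Linear(hidden, 1)

    def forward(self, obs_seq):
        out, _ = self.lstm(obs_seq)        # (batch, T, hidden)
        h = out[:, -1]                     # last step summarizes the history
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

net = LSTMActorCritic()
opt = torch.optim.Adam(net.parameters(), lr=3e-4)

# One A2C update on a dummy transition; each of the 10 history steps would
# carry per-service metrics plus an AoI feature marking how stale they are.
obs = torch.randn(1, 10, 8)
dist, value = net(obs)
action = dist.sample()
reward, next_value = torch.tensor([1.0]), torch.tensor([0.5])  # placeholders
advantage = reward + 0.99 * next_value - value.squeeze(-1)

loss = (-dist.log_prob(action) * advantage.detach()  # policy gradient term
        + advantage.pow(2)                           # critic regression term
        - 0.01 * dist.entropy())                     # exploration bonus
opt.zero_grad()
loss.mean().backward()
opt.step()
```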
{"title":"Reliability-aware failure recovery for cloud computing based automatic train supervision systems in urban rail transit using deep reinforcement learning","authors":"Li Zhu, Qingheng Zhuang, Hailin Jiang, Hao Liang, Xinjun Gao, Wei Wang","doi":"10.1186/s13677-023-00502-x","DOIUrl":"https://doi.org/10.1186/s13677-023-00502-x","url":null,"abstract":"Abstract As urban rail transit construction advances with information technology, modernization, information, and intelligence have become the direction of development. A growing number of cloud platforms are being developed for transit in urban areas. However, the increasing scale of urban rail cloud platforms, coupled with the deployment of urban rail safety applications on the cloud platform, present a huge challenge to cloud reliability.One of the key components of urban rail transit cloud platforms is Automatic Train Supervision (ATS). The failure of the ATS cloud service would result in less punctual trains and decreased traffic efficiency, making it essential to research fault tolerance methods based on cloud computing to improve the reliability of ATS cloud services. This paper proposes a proactive, reliability-aware failure recovery method for ATS cloud services based on reinforcement learning. We formulate the problem of penalty error decision and resource-efficient optimization using the advanced actor-critic (A2C) algorithm. To maintain the freshness of the information, we use Age of Information (AoI) to train the agent, and construct the agent using Long Short-Term Memory (LSTM) to improve its sensitivity to fault events. Simulation results demonstrate that our proposed approach, LSTM-A2C, can effectively identify and correct faults in ATS cloud services, improving service reliability.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135993355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A review of intelligent verification system for distribution automation terminal based on artificial intelligence algorithms
Pub Date : 2023-10-16 DOI: 10.1186/s13677-023-00527-2
Hongwei Li, Qiyuan Xu, Qilin Wang, Bin Tang
Abstract Artificial intelligence (AI) plays a key role in the distribution automation system (DAS). By using artificial intelligence technology, it is possible to intelligently verify and monitor distribution automation terminals, improve their safety and reliability, and reduce power system operating and maintenance costs. At present, researchers are exploring a variety of application methods and algorithms of the distribution automation terminal intelligent acceptance system based on artificial intelligence, such as machine learning, deep learning and expert systems, and have made significant progress. This paper comprehensively reviews the existing research on the application of artificial intelligence technology in distribution automation systems, including fault detection, network reconfiguration, load forecasting, and network security. It undertakes a thorough examination and summarization of the major research achievements in the field of distribution automation systems over the past few years, while also analyzing the challenges that this field confronts. Moreover, this study elaborates extensively on the diverse applications of AI technology within distribution automation systems, providing a detailed comparative analysis of various algorithms and methodologies from multiple classification perspectives. The primary aim of this endeavor is to furnish valuable insights for researchers and practitioners in this domain, thereby fostering the advancement and innovation of distribution automation systems.
{"title":"A review of intelligent verification system distributiontautomationtterminalinal based on artificial intelligealgorithmsthms","authors":"Hongwei Li, Qiyuan Xu, Qilin Wang, Bin Tang","doi":"10.1186/s13677-023-00527-2","DOIUrl":"https://doi.org/10.1186/s13677-023-00527-2","url":null,"abstract":"Abstract Artificial intelligence (AI) plays a key role in the distribution automation system (DAS). By using artificial intelligence technology, it is possible to intelligently verify and monitor distribution automation terminals, improve their safety and reliability, and reduce power system operating and maintenance costs. At present, researchers are exploring a variety of application methods and algorithms of the distribution automation terminal intelligent acceptance system based on artificial intelligence, such as machine learning, deep learning and expert systems, and have made significant progress. This paper comprehensively reviews the existing research on the application of artificial intelligence technology in distribution automation systems, including fault detection, network reconfiguration, load forecasting, and network security. It undertakes a thorough examination and summarization of the major research achievements in the field of distribution automation systems over the past few years, while also analyzing the challenges that this field confronts. Moreover, this study elaborates extensively on the diverse applications of AI technology within distribution automation systems, providing a detailed comparative analysis of various algorithms and methodologies from multiple classification perspectives. The primary aim of this endeavor is to furnish valuable insights for researchers and practitioners in this domain, thereby fostering the advancement and innovation of distribution automation systems.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136113558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}