Sparse Communication for Federated Learning
Kundjanasith Thonglek, Keichi Takahashi, Kohei Ichikawa, Chawanat Nakasan, P. Leelaprute, Hajimu Iida
Pub Date: 2022-05-01 | DOI: 10.1109/icfec54809.2022.00008
Federated learning trains a model on a centralized server using datasets distributed across a massive number of edge devices. Because federated learning sends local models rather than local data from the edge devices to the server, it preserves data privacy. However, communication cost is frequently a bottleneck in federated learning. This paper proposes a novel method that reduces the communication cost of federated learning by transferring only the most significantly updated parameters of a neural network model. The proposed method allows the selection criterion for updated parameters to be adjusted, trading off the reduction in communication cost against the loss of model accuracy. We evaluated the proposed method using diverse models and datasets and found that it achieves performance comparable to transferring the original models. For VGG16, the proposed method reduced the required communication cost by approximately 90% compared to the conventional method. Furthermore, we found that the proposed method reduces the communication cost of large models more than that of small models, because the threshold on updated parameters differs between model architectures.
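To make the core idea concrete, here is a minimal NumPy sketch of sparse update exchange: each client transmits only the fraction of parameters with the largest absolute updates, and the server averages the received sparse deltas. This is an illustrative stand-in under assumed names (`keep_ratio`, the averaging rule), not the authors' implementation or their exact selection criterion.

```python
import numpy as np

def sparsify_update(old_params, new_params, keep_ratio=0.1):
    """Keep only the top `keep_ratio` fraction of updates by magnitude."""
    delta = (new_params - old_params).ravel()
    k = max(1, int(keep_ratio * delta.size))
    idx = np.argpartition(np.abs(delta), -k)[-k:]  # indices of the largest |updates|
    return idx, delta[idx]                         # sparse update to transmit

def apply_sparse_updates(global_params, client_updates):
    """Server side: average the sparse deltas received from the clients."""
    flat = global_params.ravel().copy()
    accum = np.zeros_like(flat)
    counts = np.zeros_like(flat)
    for idx, vals in client_updates:
        accum[idx] += vals
        counts[idx] += 1
    touched = counts > 0
    flat[touched] += accum[touched] / counts[touched]
    return flat.reshape(global_params.shape)

# Toy round with one weight matrix and three clients.
rng = np.random.default_rng(0)
g = rng.normal(size=(64, 64))
updates = [sparsify_update(g, g + 0.01 * rng.normal(size=g.shape))
           for _ in range(3)]
g = apply_sparse_updates(g, updates)
```

With `keep_ratio=0.1`, each client transmits roughly a tenth of its parameter values (modulo the overhead of the accompanying indices), which is on the order of the ~90% reduction the paper reports for large models such as VGG16.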
{"title":"Sparse Communication for Federated Learning","authors":"Kundjanasith Thonglek, Keichi Takahashi, Koheix Ichikawa, Chawanat Nakasan, P. Leelaprute, Hajimu Iida","doi":"10.1109/icfec54809.2022.00008","DOIUrl":"https://doi.org/10.1109/icfec54809.2022.00008","url":null,"abstract":"Federated learning trains a model on a centralized server using datasets distributed over a massive amount of edge devices. Since federated learning does not send local data from edge devices to the server, it preserves data privacy. It transfers the local models from edge devices instead of the local data. However, communication costs are frequently a problem in federated learning. This paper proposes a novel method to reduce the required communication cost for federated learning by transferring only top updated parameters in neural network models. The proposed method allows adjusting the criteria of updated parameters to trade-off the reduction of communication costs and the loss of model accuracy. We evaluated the proposed method using diverse models and datasets and found that it can achieve comparable performance to transfer original models for federated learning. As a result, the proposed method has achieved a reduction of the required communication costs around 90% when compared to the conventional method for VGG16. Furthermore, we found out that the proposed method is able to reduce the communication cost of a large model more than of a small model due to the different threshold of updated parameters in each model architecture.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134221865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge Workload Trace Gathering and Analysis for Benchmarking
Klervie Toczé, Norbert Schmitt, Ulf Kargén, Atakan Aral, I. Brandić
Pub Date: 2022-05-01 | DOI: 10.1109/icfec54809.2022.00012
The emerging field of edge computing suffers from a lack of representative data for evaluating rapidly introduced new algorithms and techniques. This is a critical issue, as this complex paradigm covers numerous use cases that translate into a highly diverse set of workload types. In this work, within the context of the edge computing activity of SPEC RG Cloud, we continue working towards an edge benchmark by defining high-level workload classes and by collecting and analyzing traces for three real-world edge applications which, according to the existing literature, are representative of those classes. Moreover, we propose a practical and generic methodology for workload definition and gathering. The traces and the gathering tool are provided open source. In analyzing the collected workloads, we detect discrepancies between the literature and the obtained traces, highlighting the need for a continued effort to gather and provide data from real applications, which can be done using the proposed trace-gathering methodology. Additionally, we discuss various insights and future directions that emerge from our analysis.
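For illustration, the sketch below assumes a minimal, hypothetical trace schema (timestamp, payload size, service time) and computes inter-arrival-time statistics, one of the basic descriptors a workload characterization would rely on; the paper's actual trace format and analysis are not reproduced here.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class TraceRecord:
    timestamp: float      # request arrival time in seconds
    payload_bytes: int    # request size
    service_time: float   # processing time in seconds

def interarrival_stats(trace):
    """Mean and standard deviation of request inter-arrival times."""
    ts = sorted(r.timestamp for r in trace)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return mean(gaps), pstdev(gaps)

# Five synthetic records standing in for a gathered application trace.
trace = [TraceRecord(t, 512, 0.02) for t in (0.0, 0.9, 2.1, 3.0, 4.2)]
print(interarrival_stats(trace))
```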
{"title":"Edge Workload Trace Gathering and Analysis for Benchmarking","authors":"Klervie Toczé, Norbert Schmitt, Ulf Kargén, Atakan Aral, I. Brandić","doi":"10.1109/icfec54809.2022.00012","DOIUrl":"https://doi.org/10.1109/icfec54809.2022.00012","url":null,"abstract":"The emerging field of edge computing is suffering from a lack of representative data to evaluate rapidly introduced new algorithms or techniques. That is a critical issue as this complex paradigm has numerous different use cases which translate into a highly diverse set of workload types.In this work, within the context of the edge computing activity of SPEC RG Cloud, we continue working towards an edge benchmark by defining high-level workload classes as well as collecting and analyzing traces for three real-world edge applications, which, according to the existing literature, are the representatives of those classes. Moreover, we propose a practical and generic methodology for workload definition and gathering. The traces and gathering tool are provided open-source.In the analysis of the collected workloads, we detect discrepancies between the literature and the traces obtained, thus highlighting the need for a continuing effort into gathering and providing data from real applications, which can be done using the proposed trace gathering methodology. Additionally, we discuss various insights and future directions that rise to the surface through our analysis.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134239723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Timing for Bandwidth Reservation for Time-Sensitive Vehicular Applications
Abdullah A. Al-khatib, Faisal Al-Khateeb, Abdelmajid Khelil, K. Moessner
Pub Date: 2022-05-01 | DOI: 10.1109/icfec54809.2022.00021
Bandwidth is a valuable and scarce resource in mobile networks. Therefore, bandwidth reservation may become necessary to support time-sensitive and safety-critical networked vehicular applications such as autonomous driving. Such applications require individual and deterministic approaches to reservation. This is challenging, as vehicles usually have insufficient information to reason about future driving paths as well as future network resource availability and costs. In particular, the optimal time for a vehicle to place a cost-efficient reservation request is crucial. If a reservation is made too early, the uncertainty in path prediction may be high, resulting in frequent cancellations with high costs. If a reservation is requested too late, resources may no longer be available. In this paper, we study the optimal time for a given vehicle to place a bandwidth reservation request for an upcoming trip. Our proposal is based on predicting bandwidth costs using well-selected temporal machine learning techniques while achieving high accuracy. The proposed reservation scheme relies on a corpus of real-world traffic data. The experimental results show that the model can effectively learn to find an optimized timing for bandwidth reservation. In addition, our model may allow vehicles to save considerable costs compared to the baseline of an immediate reservation scheme.
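A hedged sketch of the underlying trade-off: reserving early incurs an expected cancellation cost that grows with lead time, while reserving late risks scarce, expensive bandwidth. The linear cancellation term and exponential scarcity term below are illustrative assumptions; the paper instead predicts bandwidth costs with temporal machine learning models trained on real traffic data.

```python
import numpy as np

def expected_cost(t_reserve, trip_start):
    """Toy expected cost of reserving at t_reserve for a trip at trip_start."""
    lead = trip_start - t_reserve
    cancel_cost = 0.1 * lead                        # path uncertainty grows with lead time
    scarcity_cost = 3.0 * np.exp(0.5 * (t_reserve - trip_start))  # late = scarce, pricey
    return 1.0 + cancel_cost + scarcity_cost        # 1.0 = base bandwidth price

times = np.linspace(0.0, 30.0, 301)                 # candidate reservation times
costs = [expected_cost(t, trip_start=30.0) for t in times]
best = times[int(np.argmin(costs))]
print(f"optimal reservation time: t = {best:.1f} (trip at t = 30.0)")
```

Under these toy assumptions the minimum falls strictly between an immediate reservation (the paper's baseline) and a last-moment one, capturing the too-early/too-late tension described above.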
{"title":"Optimal Timing for Bandwidth Reservation for Time-Sensitive Vehicular Applications","authors":"Abdullah A. Al-khatib, Faisal Al-Khateeb, Abdelmajid Khelil, K. Moessner","doi":"10.1109/icfec54809.2022.00021","DOIUrl":"https://doi.org/10.1109/icfec54809.2022.00021","url":null,"abstract":"Bandwidth is a valuable and scarce resource in mobile networks. Therefore, bandwidth reservation may become necessary to support time-sensitive and safety-critical networked vehicular applications such as autonomous driving. Such applications require individual and deterministic approaches for reservations. This is challenging as vehicles usually have insufficient information to reason about future driving paths as well as future network resources availability and costs. In particular, the optimal time for a vehicle to place a cost-efficient reservation request is crucial. If a reservation is conducted too early, the uncertainty in path prediction may become high resulting in frequent cancellations with high costs. If a reservation is requested too late, resources may no longer be available. In this paper, we study the optimal timing for a given vehicle to place a bandwidth reservation request for an upcoming trip. Our proposal is based on predicting bandwidth costs using well-selected temporal machine learning techniques while achieving high accuracy levels. The proposed reservation scheme relies on a corpus of real-world traffic data. The experimental results prove that the model can effectively learn to find an optimized timing for bandwidth reservation. In addition, our model may allow vehicles to save considerably costs compared to the baseline of an immediate reservation scheme.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116767682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of Control over the Edge of a Configurable Mid-band 5G Base Station
Haorui Peng, William Tarneberg, Emma Fitzgerald, F. Tufvesson, M. Kihl
Pub Date: 2022-05-01 | DOI: 10.1109/icfec54809.2022.00019
Mission-critical applications such as industrial control processes are evolving towards a new development paradigm by offloading their heavy computations to the edge of the emerging Fifth Generation (5G) wireless network. In this manner, the applications can gain the economic and efficiency benefits of cloud computing as well as reliable communication from the 5G network. However, limited access to a configurable 5G network and its edge computing infrastructure has restrained academic researchers from experimenting with and validating their mission-critical application designs under realistic communication and computation scenarios. In this paper, we present a configurable mid-band 5G Stand-Alone (SA) deployment and demonstrate a control process running at the edge of the 5G network. We show a complete system setup for Control over the Edge (CoE) of the 5G network and validate the feasibility of deploying similar mission-critical applications at the edge of the 5G network.
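To illustrate why placing the controller at the edge matters, the toy simulation below closes a proportional control loop over a link with a configurable round-trip delay; the integrator plant, gain, and delay values are illustrative assumptions, not the paper's actual testbed.

```python
from collections import deque

def simulate(delay_steps, kp=0.8, dt=0.1, steps=40):
    """Drive an integrator plant (x' = u) toward 0 through a delayed link."""
    x = 1.0
    link = deque([0.0] * delay_steps)   # control commands still in flight
    for _ in range(steps):
        link.append(-kp * x)            # remote controller acts on the latest state
        u = link.popleft()              # command reaches the plant delay_steps later
        x += dt * u                     # plant integrates the (possibly stale) command
    return x

# The same controller degrades as the network round trip grows.
for delay in (0, 2, 8):
    print(f"delay = {delay} steps -> |x| after 4 s: {abs(simulate(delay)):.4f}")
```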
{"title":"Evaluation of Control over the Edge of a Configurable Mid-band 5G Base Station","authors":"Haorui Peng, William Tarneberg, Emma Fitzgerald, F. Tufvesson, M. Kihl","doi":"10.1109/icfec54809.2022.00019","DOIUrl":"https://doi.org/10.1109/icfec54809.2022.00019","url":null,"abstract":"Mission-critical applications such as industrial control processes are evolving towards a new development paradigm by offloading their heavy computations to the edge of the emerging Fifth Generation Wireless Specifications (5G) network. In this manner, the applications can gain the economical and efficiency benefits of cloud computing, as well as reliable communication from the 5G network. However, the limited access to a configurable infrastructure of the 5G network and its edge computing infrastructure has restrained academic researchers from experimenting and validating their mission-critical application design under reasonable communication and computation scenarios. In this paper, we present a configurable mid-band 5G Stand-Alone (SA) deployment and demonstrate a control process that is running over the edge of the 5G network. We show in this paper a complete system setup for Control over the Edge (CoE) of the 5G network, and validate the feasibility of deploying similar mission-critical applications over the edge of 5G network.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133110973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Runtime Profiling for Black-box Machine Learning Services on Sensor Streams
Soeren Becker, Dominik Scheinert, Florian Schmidt, O. Kao
Pub Date: 2022-03-10 | DOI: 10.48550/arXiv.2203.05362
In highly distributed environments such as cloud, edge, and fog computing, the application of machine learning for automating and optimizing processes is on the rise. Machine learning jobs are frequently applied in streaming conditions, where models analyze data streams originating from, e.g., sensors. Often, the result for a particular data sample must be provided before the next data arrives; thus, enough resources must be provisioned to ensure just-in-time processing of the specific data stream. This paper proposes a runtime modeling strategy for containerized machine learning jobs, which enables the optimization and adaptive adjustment of resources per job and component. Our black-box approach combines multiple techniques into an efficient runtime profiling method while making no assumptions about the underlying hardware, data streams, or applied machine learning jobs. The results show that our method captures the general runtime behaviour of different machine learning jobs after only a short profiling phase.
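The sketch below conveys the flavor of such a profiling phase: time an opaque job at a few input sizes and fit a simple runtime model. The stand-in job and the linear model form are assumptions for illustration, not the paper's profiling method.

```python
import time
import numpy as np

def profile(job, sizes, repeats=5):
    """Median wall-clock runtime of an opaque `job` per input size."""
    medians = []
    for n in sizes:
        batch = np.random.rand(n, 16)              # synthetic stream batch
        trials = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            job(batch)
            trials.append(time.perf_counter() - t0)
        medians.append(np.median(trials))
    return np.array(medians)

black_box = lambda batch: float(np.tanh(batch).sum())   # stand-in ML job
sizes = np.array([1_000, 10_000, 50_000, 200_000])
slope, intercept = np.polyfit(sizes, profile(black_box, sizes), 1)
print(f"runtime ~= {slope:.3e} s/sample + {intercept:.3e} s fixed overhead")
```

A fitted model of this kind can then answer whether a given stream rate can be served just in time with the currently allocated resources.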
{"title":"Efficient Runtime Profiling for Black-box Machine Learning Services on Sensor Streams","authors":"Soeren Becker, Dominik Scheinert, Florian Schmidt, O. Kao","doi":"10.48550/arXiv.2203.05362","DOIUrl":"https://doi.org/10.48550/arXiv.2203.05362","url":null,"abstract":"In highly distributed environments such as cloud, edge and fog computing, the application of machine learning for automating and optimizing processes is on the rise. Machine learning jobs are frequently applied in streaming conditions, where models are used to analyze data streams originating from e.g. sensory data. Often the results for particular data samples need to be provided in time before the arrival of next data. Thus, enough resources must be provided to ensure the just-in-time processing for the specific data stream.This paper focuses on proposing a runtime modeling strategy for containerized machine learning jobs, which enables the optimization and adaptive adjustment of resources per job and component. Our black-box approach assembles multiple techniques into an efficient runtime profiling method, while making no assumptions about underlying hardware, data streams, or applied machine learning jobs. The results show that our method is able to capture the general runtime behaviour of different machine learning jobs already after a short profiling phase.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131830508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoS-Aware Resource Placement for LEO Satellite Edge Computing
Tobias Pfandzelter, David Bermbach
Pub Date: 2022-01-15 | DOI: 10.1109/icfec54809.2022.00016
With the advent of large LEO satellite communication networks providing global broadband Internet access, interest in providing edge computing resources within LEO networks has emerged. The LEO Edge promises low-latency, high-bandwidth access to compute and storage resources for a global base of clients and IoT devices regardless of their geographical location. Current proposals assume compute resources or service replicas at every LEO satellite, which requires high upfront investments and can lead to over-provisioning. To implement and use the LEO Edge efficiently, server and service placement methods are required that help select an optimal subset of satellites as server or service replica locations. In this paper, we show how existing research on resource placement on a 2D torus can be applied to this problem by leveraging the unique topology of LEO satellite networks. Further, we extend existing discrete resource placement methods to allow placement with QoS constraints. In simulations of proposed LEO satellite communication networks, we show how QoS depends on orbital parameters and that our proposed method can take these effects into account where the existing approach cannot.
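As a sketch of the 2D-torus framing: a Walker-style constellation maps to a P x Q grid (orbital planes x satellites per plane) with wraparound inter-satellite links, and a hop bound d can stand in for a QoS (latency) constraint. The greedy cover below is an illustrative placement heuristic under assumed parameters, not the paper's extended placement method.

```python
def torus_hops(a, b, P, Q):
    """Manhattan hop distance on a P x Q torus of inter-satellite links."""
    dp, dq = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dp, P - dp) + min(dq, Q - dq)

def greedy_placement(P, Q, d):
    """Pick server satellites until every satellite is within d hops of one."""
    nodes = [(p, q) for p in range(P) for q in range(Q)]
    uncovered, servers = set(nodes), []
    while uncovered:
        # choose the node covering the most still-uncovered satellites
        best = max(nodes,
                   key=lambda s: sum(torus_hops(s, n, P, Q) <= d for n in uncovered))
        servers.append(best)
        uncovered -= {n for n in uncovered if torus_hops(best, n, P, Q) <= d}
    return servers

servers = greedy_placement(P=6, Q=11, d=2)   # e.g. 6 planes, 11 satellites each
print(f"{len(servers)} servers cover all {6 * 11} satellites: {servers[:4]} ...")
```

Tightening d (a stricter latency bound) increases the number of server satellites required, which is precisely the trade-off a QoS-aware placement must navigate.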
{"title":"QoS-Aware Resource Placement for LEO Satellite Edge Computing","authors":"Tobias Pfandzelter, David Bermbach","doi":"10.1109/icfec54809.2022.00016","DOIUrl":"https://doi.org/10.1109/icfec54809.2022.00016","url":null,"abstract":"With the advent of large LEO satellite communication networks to provide global broadband Internet access, interest in providing edge computing resources within LEO networks has emerged. The LEO Edge promises low-latency, high-bandwidth access to compute and storage resources for a global base of clients and IoT devices regardless of their geographical location.Current proposals assume compute resources or service replicas at every LEO satellite, which requires high upfront investments and can lead to over-provisioning. To implement and use the LEO Edge efficiently, methods for server and service placement are required that help select an optimal subset of satellites as server or service replica locations. In this paper, we show how the existing research on resource placement on a 2D torus can be applied to this problem by leveraging the unique topology of LEO satellite networks. Further, we extend the existing discrete resource placement methods to allow placement with QoS constraints. In simulation of proposed LEO satellite communication networks, we show how QoS depends on orbital parameters and that our proposed method can take these effects into account where the existing approach cannot.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131048777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}