Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00010
Christopher Peter Smith, Anshul Jindal, Mohak Chadha, M. Gerndt, S. Benedict
Function-as-a-Service (FaaS) is an attractive cloud computing model that simplifies application development and deployment. However, current serverless compute platforms do not consider data placement when scheduling functions. With the growing demand for edge-cloud continuum, multi-cloud, and multi-serverless applications, this flaw means serverless technologies are still ill-suited to latency-sensitive operations like media streaming. This work proposes a solution by presenting FaDO: FaaS Functions and Data Orchestrator, a tool designed to allow data-aware function scheduling across multi-serverless compute clusters at different locations, such as at the edge and in the cloud. FaDO works through header-based HTTP reverse proxying and uses three load-balancing algorithms: 1) Least Connections, 2) Round Robin, and 3) Random, to balance function invocations across the suitable serverless compute clusters according to the configured storage policies. FaDO further provides users with an abstraction of each serverless compute cluster’s storage, allowing users to interact with data across different storage services through a unified interface. In addition, users can configure automatic, policy-aware, granular data replication, causing FaDO to spread data across the clusters while respecting location constraints. Load testing results show that it is capable of load balancing high-throughput workloads, placing functions near their data without contributing any significant performance overhead.
Title: "FaDO: FaaS Functions and Data Orchestrator for Multiple Serverless Edge-Cloud Clusters". In: 2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC).
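The data-aware scheduling idea above — route each invocation only to clusters that hold the function's data, then apply one of the three load-balancing algorithms — can be sketched as follows. This is a minimal illustration, not FaDO's actual API; the class and field names are ours.

```python
import random
from itertools import cycle

class DataAwareBalancer:
    """Illustrative sketch of FaDO-style data-aware load balancing:
    an invocation names a storage bucket, and only clusters holding
    that bucket are eligible targets for the proxy."""

    def __init__(self, bucket_locations, algorithm="round_robin"):
        # bucket_locations: bucket name -> list of clusters storing it
        self.bucket_locations = bucket_locations
        self.algorithm = algorithm
        # one independent round-robin cursor per bucket
        self._rr = {b: cycle(cs) for b, cs in bucket_locations.items()}
        # active-connection counts, used by least_connections
        self.active = {c: 0 for cs in bucket_locations.values() for c in cs}

    def pick_cluster(self, bucket):
        candidates = self.bucket_locations[bucket]
        if self.algorithm == "round_robin":
            return next(self._rr[bucket])
        if self.algorithm == "least_connections":
            return min(candidates, key=lambda c: self.active[c])
        return random.choice(candidates)  # the "Random" policy
```

A reverse proxy would call `pick_cluster` with the bucket name taken from an HTTP header, then forward the request to the chosen cluster.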
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00014
Lydia Ait-Oucheggou, Mohammed Islam Naas, Y. H. Aoul, Jalil Boukhobza
IoT and video streaming are the main driving applications for digital data generation today. The traditional way of storing and processing data in the Cloud cannot satisfy many latency-critical applications. This is why Fog computing emerged as a continuum infrastructure from the Cloud to end-user devices. Misplacing data in such an infrastructure results in high latency and consequently increases the penalty incurred by Internet Service Providers (ISPs) for violating the service level agreement (SLA). Past studies have investigated two issues separately: IoT data placement and streaming cache placement. However, both placements rely on the same Fog distributed storage system. In this paper, we address both issues in a single model that aims to minimize the penalty ISPs incur from SLA violations and maximize storage resource usage. We subdivide each Fog node’s storage space into a storage part and a cache part. Our model first places IoT data in the storage part of Fog nodes, and then places streaming data in the cache part of these nodes. The novelty of our model is the flexibility it offers for managing the cache volume, which can adaptively spill into the free portion dedicated to IoT data. Experiments show that our model reduces the streaming data penalty of the ISP’s SLA violation by more than 47% on average.
Title: "When IoT Data Meets Streaming in the Fog".
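The storage/cache split with adaptive spill described above can be made concrete with a small sketch. Assumptions are ours: a fixed storage quota per node, with the cache allowed to borrow whatever part of that quota IoT data is not using, subject to the node's total capacity.

```python
class FogNodeStore:
    """Illustrative model of a Fog node whose space is divided into an
    IoT-storage part and a cache part, where the cache may adaptively
    spill into the unused portion of the IoT part."""

    def __init__(self, total, storage_share=0.5):
        self.total = total
        self.storage_quota = total * storage_share  # reserved for IoT data
        self.storage_used = 0.0
        self.cache_used = 0.0

    def place_iot(self, size):
        # IoT data must fit its quota AND the node's remaining total space
        fits = (self.storage_used + size <= self.storage_quota
                and self.storage_used + size + self.cache_used <= self.total)
        if fits:
            self.storage_used += size
        return fits

    def cache_capacity(self):
        # cache quota plus whatever the IoT part is not currently using
        return (self.total - self.storage_quota) + (self.storage_quota - self.storage_used)

    def place_cache(self, size):
        if self.cache_used + size <= self.cache_capacity():
            self.cache_used += size
            return True
        return False
```

On a 100-unit node with a 50/50 split, placing 30 units of IoT data leaves 70 units of usable cache — the 20 unused IoT units spill over to the cache side.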
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00006
Title: "ICFEC 2022 Committees".
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00009
M. Kamalian, Paulo Ferreira
A user’s transport mode (e.g., walk, car) can be detected using a smartphone. Such devices exist in great numbers, with enough computation power and sensors to run a classifier for transport mode detection. Using a smartphone in a fog environment ensures low latency, high generalization, high accuracy, and low battery consumption. We propose a fog-based, real-time (at human time scale) transport mode detection system called FogTMDetector; it consists of a Random Forest classifier trained with magnetometer, accelerometer, and GPS data. The overall accuracy achieved by our system is 93% when detecting 8 different modes (stationary, walk, bicycle, car, bus, train, tram, and subway). We compared FogTMDetector with another recent system, EdgeTrans. The comparison results suggest that our solution achieves 10% higher accuracy on motorized modes (94.4%) while distinguishing more fine-grained motorized transport modes (e.g., subway, tram), thanks to the magnetometer sensor readings. FogTMDetector logs accelerometer and magnetometer data at a low sampling rate (1 Hz), and GPS every 10 seconds, to ensure low battery consumption. FogTMDetector is also generalizable, as it is robust to variations in users and smartphone positions.
Title: "FogTMDetector - Fog Based Transport Mode Detection using Smartphones".
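A classifier like the one above consumes per-window statistics of the sensor streams. The sketch below shows typical window features over accelerometer, magnetometer, and GPS-speed samples; it is our illustration of the kind of input such a Random Forest would take, not the paper's exact feature set.

```python
import math
from statistics import mean, stdev

def window_features(accel, magnet, speeds):
    """Compute simple per-window features for transport-mode detection.

    accel, magnet: lists of (x, y, z) samples at ~1 Hz
    speeds: list of GPS-derived speeds (one reading every ~10 s)
    """
    # magnitude removes dependence on phone orientation/position
    amag = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel]
    mmag = [math.sqrt(x * x + y * y + z * z) for x, y, z in magnet]
    return {
        "accel_mean": mean(amag), "accel_std": stdev(amag),
        "magnet_mean": mean(mmag), "magnet_std": stdev(mmag),
        "speed_mean": mean(speeds), "speed_max": max(speeds),
    }
```

The magnetometer features are what help separate electrified modes (subway, tram) from other motorized modes, per the comparison with EdgeTrans.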
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00018
Boris Sedlak, Ilir Murturi, S. Dustdar
The growing number of Internet of Things (IoT) devices generates massive amounts of diverse data, including personal or confidential information (e.g., sensor readings, images) that is not intended for public view. Traditionally, predefined privacy policies are enforced in resource-rich environments such as the cloud to protect sensitive information from being released. However, the massive volume of data streams and the heterogeneous devices and networks involved affect latency, and the possibility of data being intercepted grows as it travels away from its source. Therefore, such data streams must be transformed on the IoT device, or within available devices (i.e., edge devices) in its vicinity, to ensure privacy. In this paper, we present a privacy-enforcing framework that transforms data streams on edge networks. We treat privacy close to the data source, using powerful edge devices to perform various operations that ensure privacy. Whenever an IoT device captures personal or confidential data, an edge gateway in the device’s vicinity analyzes and transforms the data streams according to a predefined set of rules. How and when data is modified is defined precisely by a set of triggers and transformations (a privacy model) that directly represents a stakeholder’s privacy policies. Our work answers how to represent such privacy policies in a model and enforce the transformations on the edge.
Title: "Specification and Operation of Privacy Models for Data Streams on the Edge".
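The trigger/transformation pairing at the heart of the privacy model above can be sketched in a few lines. The rule representation and field names are illustrative, not the authors' actual schema.

```python
class PrivacyRule:
    """A single privacy rule: a trigger predicate deciding WHEN to act,
    and a transformation deciding HOW the record is modified."""

    def __init__(self, trigger, transform):
        self.trigger = trigger      # record -> bool
        self.transform = transform  # record -> record

def enforce(record, rules):
    """Apply every matching rule to a stream record, in order.
    An edge gateway would run this on each record before forwarding it."""
    for rule in rules:
        if rule.trigger(record):
            record = rule.transform(record)
    return record
```

For example, a rule might blur any camera frame captured at a private location while leaving public footage untouched.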
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00015
H. Imani, Jeff Anderson, T. El-Ghazawi
The pervasiveness of AI in society has made machine learning (ML) an invaluable tool for mobile and Internet-of-Things (IoT) devices. While the aggregate amount of data yielded by those devices is sufficient for training an accurate model, the data available to any one device is limited. Therefore, augmenting the learning at any device with the experience from observations at the rest of the devices is necessary. This, however, can dramatically increase bandwidth requirements. Prior work has led to the development of Federated Learning (FL), where instead of exchanging data, client devices share only model weights to learn from one another. However, heterogeneity in device resource availability and network conditions still limits training performance. To improve performance while maintaining good levels of accuracy, we introduce iSample, an intelligent sampling technique that selects clients by jointly considering known network performance and model quality parameters, allowing the minimization of training time. We compare iSample with other federated learning approaches and show that iSample improves the performance of the global model, especially in the earlier stages of training, while decreasing the training time for CNN and VGG by 27% and 39%, respectively.
Title: "iSample: Intelligent Client Sampling in Federated Learning".
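The joint selection criterion described above — rank clients by a combination of network performance and model-quality signal, then sample the best — can be sketched as follows. The weighted score is our illustration; the paper's exact criterion may differ.

```python
def isample_select(clients, k, w_net=0.5, w_quality=0.5):
    """Hedged sketch of intelligent client sampling for FL: pick the
    top-k clients by a weighted blend of network speed (inverse of the
    expected round time) and a model-quality signal (e.g., recent loss
    reduction). Field names are illustrative."""
    def score(c):
        return w_net * (1.0 / c["round_time"]) + w_quality * c["loss_reduction"]
    return sorted(clients, key=score, reverse=True)[:k]
```

Fast clients with informative updates are preferred, which is what speeds up the early training rounds.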
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00011
P. S. Junior, D. Miorandi, G. Pierre
Container technology has become a very popular choice for easing and managing the deployment of cloud applications and services. Container orchestration systems such as Kubernetes can automate, to a large extent, the deployment, scaling, and operation of containers across clusters of nodes, reducing human errors and saving cost and time. Designed with "traditional" cloud environments in mind (i.e., large datacenters with close-by machines connected by high-speed networks), systems like Kubernetes present limitations in geo-distributed environments, where computational workloads are moved to the edges of the network, close to where data is generated and consumed. In geo-distributed environments, moving containers around, either to follow moving data sources/sinks or due to unpredictable changes in the network substrate, is a rather common operation. We present MyceDrive, a stateful resource migration solution natively integrated with the Kubernetes orchestrator. We show that geo-distributed Kubernetes pod migration is feasible while remaining fully transparent to the migrated application as well as its clients, and that it reduces downtime by up to 7x compared to state-of-the-art solutions.
Title: "Good Shepherds Care For Their Cattle: Seamless Pod Migration in Geo-Distributed Kubernetes".
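A stateful pod migration of the kind described above generally follows a checkpoint / restore / traffic-switch sequence. The outline below is our simplification of that generic sequence, not MyceDrive's actual mechanism, which is integrated with Kubernetes internals.

```python
class Cluster:
    """Minimal stand-in for a cluster that can checkpoint and restore
    pod state (names and structure are illustrative)."""

    def __init__(self, name):
        self.name = name
        self.pods = {}  # pod name -> state blob

    def checkpoint(self, pod):
        return self.pods[pod]

    def restore(self, pod, state):
        self.pods[pod] = state

    def delete(self, pod):
        del self.pods[pod]

def migrate_pod(pod, src, dst, endpoints):
    """Generic stateful migration sequence."""
    state = src.checkpoint(pod)   # 1. capture the pod's in-memory state
    dst.restore(pod, state)       # 2. recreate the pod with that state on the target
    endpoints[pod] = dst.name     # 3. switch service traffic to the new location
    src.delete(pod)               # 4. clean up the original replica
```

Keeping step 3 atomic with respect to clients is what makes the migration transparent; the downtime reduction claimed in the paper comes from minimizing the window between steps 1 and 3.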
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00022
A.H. Ghorab, Mohammed A. Abuibaid, M. St-Hilaire
The ever-growing number of connected User Equipment (UE) devices, e.g., Internet of Things (IoT) devices and Connected Autonomous Vehicles (CAVs), has driven the evolution of Software-Defined Networks (SDN) and Fifth-Generation (5G) networks to push computing resources closer to the UE. Towards that end, Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) are promising solutions that facilitate deploying a user’s required service instances within approximately one-hop communication range. However, discovering and assigning such service instances to the UE so as to maintain high service availability is still an open challenge in the 3rd Generation Partnership Project (3GPP) standards, due to UE mobility and Telco network heterogeneity. This paper proposes an SDN-based dynamic service discovery and assignment framework for a distributed MEC infrastructure. The proposed framework considers various decision parameters, such as the UE’s location, the service instance’s demand (i.e., resource utilization), the network link status, and the service instance’s performance requirements (i.e., service profile), to offer a generic solution for discovering and assigning service instances to the UE. The framework implementation results show an improved packet delivery ratio and lower user-perceived latency.
Title: "SDN-based Service Discovery and Assignment Framework to Preserve Service Availability in Telco-based Multi-Access Edge Computing".
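The assignment decision described above — combine the service profile's constraints with link status and instance load — can be sketched as a filter-then-rank step. The two-stage logic and field names are our illustration of the idea, not the framework's actual algorithm.

```python
def assign_instance(ue_latencies, instances, max_latency_ms):
    """Sketch of profile-aware instance assignment: first filter
    instances whose link latency to this UE meets the service
    profile's bound, then prefer the least-loaded feasible instance.

    ue_latencies: instance id -> measured link latency to the UE (ms)
    instances: list of {"id": ..., "cpu_util": ...} dicts
    """
    feasible = [i for i in instances if ue_latencies[i["id"]] <= max_latency_ms]
    if not feasible:
        return None  # no instance satisfies the service profile
    return min(feasible, key=lambda i: i["cpu_util"])
```

As the UE moves, `ue_latencies` changes, so re-running the assignment yields the hand-over decision that preserves availability.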
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00013
Benjamin Warnke, Yuri Cotrado Sehgelmeble, Johannes Mantler, Sven Groppe, S. Fischer
In low-power networks, sending data over the network is one of the largest energy consumers. Therefore, any application running in such an environment must reduce the volume of communication caused by its presence. State-of-the-art network simulators focus on specific parts of the network stack. Due to this specialization, the application interface is simplified in such a way that often only fictive, abstract applications can be simulated. However, we want to gain insights into the interoperability between routing and real applications for the purpose of reducing communication costs. For this purpose, we propose our new simulator, SIMORA. To demonstrate the possibilities of SIMORA, we deploy a simple distributed application. We then develop advanced techniques, such as dynamic content multicast, to reduce the number of messages and the volume of data sent. In our experiments, we achieve reductions of up to 94% in the number of messages and 29% in bytes transferred over the network compared to the traditional multicast approach.
Title: "SIMORA: SIMulating Open Routing protocols for Application interoperability on edge devices".
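The saving that multicast techniques exploit can be shown with a toy message count: along shared path prefixes, a message crosses each link once instead of once per receiver. This is a generic illustration of the principle, not SIMORA's dynamic content multicast itself.

```python
def unicast_messages(paths):
    """Naive unicast cost: every receiver gets its own copy of the
    message over every hop of its path. Each path is a node list from
    the source to one receiver."""
    return sum(len(path) - 1 for path in paths)

def multicast_messages(paths):
    """Multicast cost: each distinct link carries the message once,
    no matter how many receivers lie behind it."""
    links = set()
    for path in paths:
        links.update(zip(path, path[1:]))  # consecutive node pairs = links
    return len(links)
```

With three receivers behind one shared relay, unicast sends 6 messages while multicast sends 4; the savings grow with fan-out and path length.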
Pub Date: 2022-05-01 · DOI: 10.1109/icfec54809.2022.00017
Nicolò Bartelucci, P. Bellavista, Thomas W. Pusztai, Andrea Morichetta, S. Dustdar
With the increasing complexity, requirements, and variability of cloud services, it is not always easy to find the right static/dynamic thresholds for the optimal configuration of low-level metrics for autoscaling resource management decisions. A Service Level Objective (SLO) is a high-level commitment to maintaining a specific state of a service over a given period, within a Service Level Agreement (SLA): the goal is to respect a given metric, like uptime or response time, within given time or accuracy constraints. In this paper, we show the advantages and present the progress of an original SLO-aware autoscaler for the Polaris framework. In addition, the paper contributes to the literature in the field by presenting novel experimental results comparing the autoscaling performance of Polaris, based on a high-level latency SLO, with that of a low-level average-CPU-based SLO implemented by the Kubernetes Horizontal Pod Autoscaler.
Title: "High-Level Metrics for Service Level Objective-aware Autoscaling in Polaris: a Performance Evaluation".
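The baseline compared against above, the Kubernetes Horizontal Pod Autoscaler, scales replicas with the rule desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). An SLO-aware autoscaler can apply the same proportional idea to a high-level metric such as observed latency against a latency SLO, which is a sketch of the comparison's setup rather than Polaris's actual controller logic.

```python
import math

def desired_replicas(current_replicas, observed_metric, target_metric):
    """The Kubernetes HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric).
    With observed_metric = average CPU utilization this is the low-level
    baseline; with observed_metric = measured latency and target_metric =
    the latency SLO, the same proportionality drives SLO-aware scaling."""
    return math.ceil(current_replicas * observed_metric / target_metric)
```

For example, 4 replicas observing 150 ms p95 latency against a 100 ms SLO would scale to 6 replicas.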