
Latest publications from the 2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)

FaDO: FaaS Functions and Data Orchestrator for Multiple Serverless Edge-Cloud Clusters
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00010
Christopher Peter Smith, Anshul Jindal, Mohak Chadha, M. Gerndt, S. Benedict
Function-as-a-Service (FaaS) is an attractive cloud computing model that simplifies application development and deployment. However, current serverless compute platforms do not consider data placement when scheduling functions. With the growing demand for edge-cloud continuum, multi-cloud, and multi-serverless applications, this flaw means serverless technologies are still ill-suited to latency-sensitive operations like media streaming. This work proposes a solution by presenting a tool called FaDO: FaaS Functions and Data Orchestrator, designed to allow data-aware function scheduling across multiple serverless compute clusters at different locations, such as at the edge and in the cloud. FaDO works through header-based HTTP reverse proxying and uses three load-balancing algorithms: 1) Least Connections, 2) Round Robin, and 3) Random, to balance function invocations across the serverless compute clusters that are suitable under the configured storage policies. FaDO further provides users with an abstraction of each serverless compute cluster’s storage, allowing them to interact with data across different storage services through a unified interface. In addition, users can configure automatic, policy-aware, granular data replication, causing FaDO to spread data across the clusters while respecting location constraints. Load testing results show that it is capable of load balancing high-throughput workloads, placing functions near their data without adding any significant performance overhead.
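As a rough illustration of the three load-balancing strategies the abstract names (a minimal sketch in Python; the class and cluster names are ours, not FaDO's actual implementation), each strategy simply picks one cluster from the set that the storage policy deems eligible:

```python
import random
from itertools import cycle

class ClusterBalancer:
    """Toy balancer over policy-eligible serverless clusters (illustrative only)."""

    def __init__(self, clusters):
        # clusters: {cluster_name: number of in-flight invocations}
        self.clusters = clusters
        self._rr = cycle(sorted(clusters))  # deterministic rotation order

    def least_connections(self):
        # Cluster currently handling the fewest invocations.
        return min(self.clusters, key=self.clusters.get)

    def round_robin(self):
        # Next cluster in a fixed rotation.
        return next(self._rr)

    def random_choice(self):
        # Uniformly random eligible cluster.
        return random.choice(sorted(self.clusters))

balancer = ClusterBalancer({"edge-a": 3, "edge-b": 1, "cloud": 7})
print(balancer.least_connections())  # -> edge-b
```

In a reverse proxy, the chosen strategy would run per request, after a header (e.g., the target function name) has narrowed the cluster set to those satisfying the data-placement policy.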
Cited by: 12
When IoT Data Meets Streaming in the Fog
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00014
Lydia Ait-Oucheggou, Mohammed Islam Naas, Y. H. Aoul, Jalil Boukhobza
IoT and video streaming are the main driving applications for digital data generation today. The traditional approach of storing and processing data in the Cloud cannot satisfy many latency-critical applications. This is why Fog computing emerged as a continuum infrastructure from the Cloud to end-user devices. Misplacing data in such an infrastructure results in high latency and consequently increases the penalty that Internet Service Providers (ISPs) incur by violating the service level agreement (SLA). Past studies have investigated two issues separately: IoT data placement and streaming cache placement. However, both placements rely on the same Fog distributed storage system. In this paper, we address both issues in a single model, with the aim of minimizing the penalty ISPs incur from SLA violations while maximizing storage resource usage. We subdivide each Fog node’s storage space into a storage part and a cache part. Our model first places IoT data in the storage part of Fog nodes and then places streaming data in the cache part of these nodes. The novelty of our model is the flexibility it offers for managing the cache volume, which can adaptively spill over into the free space dedicated to IoT data. Experiments show that using our model reduces the streaming-data penalty of the ISP’s SLA violations by more than 47% on average.
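The storage/cache split with adaptive spill-over can be sketched as follows (a toy model with names of our own choosing, not the paper's formulation): the cache's effective capacity is its own partition plus whatever fraction of the IoT-storage partition is currently unused.

```python
class FogNodeStorage:
    """Toy fog-node storage: an IoT-storage part plus a cache part that may
    spill into the free portion of the storage part (illustrative only)."""

    def __init__(self, total, storage_part):
        self.storage_part = storage_part        # reserved for IoT data
        self.cache_part = total - storage_part  # reserved for streaming cache
        self.iot_used = 0
        self.cache_used = 0

    def place_iot(self, size):
        # IoT data is confined to its own partition.
        if self.iot_used + size > self.storage_part:
            return False
        self.iot_used += size
        return True

    def cache_capacity(self):
        # Cache may use its own part plus the free IoT-storage space.
        return self.cache_part + (self.storage_part - self.iot_used)

    def place_stream(self, size):
        if self.cache_used + size > self.cache_capacity():
            return False
        self.cache_used += size
        return True

node = FogNodeStorage(total=100, storage_part=60)
node.place_iot(20)
print(node.cache_capacity())  # 40 (cache part) + 40 (free storage) = 80
```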
Cited by: 0
ICFEC 2022 Committees
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00006
Cited by: 0
FogTMDetector - Fog Based Transport Mode Detection using Smartphones
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00009
M. Kamalian, Paulo Ferreira
A user’s transport mode (e.g., walking, car) can be detected using a smartphone. Such devices exist in great numbers, with enough computation power and sensors to run a classifier for transport mode detection. Using a smartphone in a fog environment ensures low latency, high generalization, high accuracy, and low battery consumption. We propose a fog-based, real-time (at human time scale) transport mode detection system called FogTMDetector; it consists of a Random Forest classifier trained on magnetometer, accelerometer, and GPS data. The overall accuracy achieved by our system is 93% when detecting 8 different modes (stationary, walk, bicycle, car, bus, train, tram, and subway). We compared FogTMDetector with another recent system (called EdgeTrans). The comparison suggests that our solution achieves 10% higher accuracy on motorized modes (94.4%) and distinguishes more fine-grained motorized transport modes (e.g., subway, tram), thanks to the magnetometer sensor readings. FogTMDetector uses low sampling rates for logging: 1 Hz for the accelerometer and magnetometer, and one sample every 10 seconds for GPS, ensuring low battery consumption. FogTMDetector is also generalizable, as it is robust against variations in users and smartphone positions.
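A classifier like the one described typically consumes per-window statistics of the raw sensor streams. Below is a minimal, stdlib-only sketch of such feature extraction over 1 Hz accelerometer samples (field names and feature choices are ours, not FogTMDetector's); the resulting feature vectors would then feed a Random Forest:

```python
import math
import statistics

def window_features(samples):
    """Summary features over one window of (ax, ay, az) accelerometer samples.
    Illustrative feature set, not the paper's."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    return {
        "mag_mean": statistics.mean(mags),   # overall acceleration level
        "mag_std": statistics.pstdev(mags),  # variability: ~0 when stationary
        "mag_max": max(mags),
    }

# Stationary: the magnitude is just the constant gravity vector.
still = [(0.0, 0.0, 9.8)] * 10
# Walking-like: magnitude oscillates step to step.
walk = [(0.0, 0.0, 9.8 + (i % 2)) for i in range(10)]

print(window_features(still)["mag_std"])       # 0.0
print(window_features(walk)["mag_std"] > 0.0)  # True
```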
Cited by: 0
Specification and Operation of Privacy Models for Data Streams on the Edge
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00018
Boris Sedlak, Ilir Murturi, S. Dustdar
The growing number of Internet of Things (IoT) devices generates massive amounts of diverse data, including personal or confidential information (e.g., sensor readings, images) that is not intended for public view. Traditionally, predefined privacy policies are enforced in resource-rich environments such as the cloud to protect sensitive information from being released. However, the massive number of data streams, heterogeneous devices, and networks involved affects latency, and the possibility of data being intercepted grows as it travels away from its source. Therefore, such data streams must be transformed on the IoT device, or within available devices (i.e., edge devices) in its vicinity, to ensure privacy. In this paper, we present a privacy-enforcing framework that transforms data streams on edge networks. We treat privacy close to the data source, using powerful edge devices to perform the required transformations. Whenever an IoT device captures personal or confidential data, an edge gateway in the device’s vicinity analyzes and transforms the data stream according to a predefined set of rules. How and when data is modified is defined precisely by a set of triggers and transformations - a privacy model - that directly represents a stakeholder’s privacy policies. Our work shows how to represent such privacy policies in a model and enforce the corresponding transformations on the edge.
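The trigger/transformation pairing described above can be sketched in a few lines (a minimal illustration of the general idea; the rule predicates, record layout, and function names are ours, not the paper's specification):

```python
def redact_name(record):
    """One possible transformation: mask a personal field."""
    return {**record, "name": "***"}

# A privacy model as (trigger predicate, transformation) pairs.
RULES = [
    (lambda r: r.get("contains_pii", False), redact_name),
]

def enforce(record, rules=RULES):
    """Apply every transformation whose trigger fires for this record."""
    for trigger, transform in rules:
        if trigger(record):
            record = transform(record)
    return record

print(enforce({"name": "Alice", "contains_pii": True}))  # name masked
print(enforce({"name": "Bob"}))                          # passes unchanged
```

An edge gateway would evaluate such rules against each element of an incoming stream before forwarding it upstream.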
Cited by: 4
iSample: Intelligent Client Sampling in Federated Learning
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00015
H. Imani, Jeff Anderson, T. El-Ghazawi
The pervasiveness of AI in society has made machine learning (ML) an invaluable tool for mobile and Internet-of-Things (IoT) devices. While the aggregate amount of data yielded by these devices is sufficient for training an accurate model, the data available to any one device is limited. Therefore, the learning at each device must be augmented with the experience drawn from observations at the other devices. This, however, can dramatically increase bandwidth requirements. Prior work has led to the development of Federated Learning (FL), where instead of exchanging data, client devices share only model weights to learn from one another. However, heterogeneity in device resource availability and network conditions still limits training performance. To improve performance while maintaining good accuracy, we introduce iSample, an intelligent sampling technique that selects clients by jointly considering known network performance and model quality parameters, minimizing training time. We compare iSample with other federated learning approaches and show that it improves the performance of the global model, especially in the earlier stages of training, while decreasing training time for CNN and VGG by 27% and 39%, respectively.
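A joint client-selection criterion of the kind described might look like the following (a sketch under our own assumptions: a simple weighted sum over normalized scores; iSample's actual criterion may differ):

```python
def select_clients(clients, k, alpha=0.5):
    """Pick the k clients with the best joint score.

    clients: {client_id: (network_score, quality_score)}, both normalized
             to [0, 1]; alpha weights network performance vs. model quality.
    Illustrative only - not iSample's published selection rule.
    """
    def score(cid):
        net, qual = clients[cid]
        return alpha * net + (1 - alpha) * qual

    return sorted(clients, key=score, reverse=True)[:k]

clients = {
    "c1": (0.9, 0.2),  # fast link, weak local model
    "c2": (0.4, 0.8),  # slower link, strong local model
    "c3": (0.1, 0.1),  # poor on both counts
}
print(select_clients(clients, k=2))  # ['c2', 'c1']
```

Weighting both axes keeps a straggler with excellent data from being picked when its link would stall the round, and vice versa.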
Cited by: 3
Good Shepherds Care For Their Cattle: Seamless Pod Migration in Geo-Distributed Kubernetes
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00011
P. S. Junior, D. Miorandi, G. Pierre
Container technology has become a very popular choice for easing and managing the deployment of cloud applications and services. Container orchestration systems such as Kubernetes can automate, to a large extent, the deployment, scaling, and operation of containers across clusters of nodes, reducing human error and saving cost and time. Designed with "traditional" cloud environments in mind (i.e., large datacenters with close-by machines connected by high-speed networks), systems like Kubernetes show limitations in geo-distributed environments, where computational workloads move to the edges of the network, close to where data is generated and consumed. In geo-distributed environments, moving containers around, either to follow moving data sources/sinks or due to unpredictable changes in the network substrate, is a rather common operation. We present MyceDrive, a stateful resource migration solution natively integrated with the Kubernetes orchestrator. We show that geo-distributed Kubernetes pod migration is feasible while remaining fully transparent to the migrated application and its clients, and reduces downtime by up to 7x compared to state-of-the-art solutions.
Cited by: 6
SDN-based Service Discovery and Assignment Framework to Preserve Service Availability in Telco-based Multi-Access Edge Computing
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00022
A.H. Ghorab, Mohammed A. Abuibaid, M. St-Hilaire
The ever-growing number of connected User Equipment (UE), e.g., Internet of Things (IoT) devices and Connected Autonomous Vehicles (CAVs), has driven the evolution of Software-Defined Networks (SDN) and Fifth-Generation (5G) networks to push computing resources closer to the UE. Towards that end, Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) are promising solutions that facilitate deploying the user’s required service instances within an approximately one-hop communication range. However, discovering such service instances and assigning them to the UE so as to maintain high service availability is still an open challenge in the 3rd Generation Partnership Project (3GPP) standards, due to UE mobility and Telco network heterogeneity. This paper proposes an SDN-based dynamic service discovery and assignment framework for a distributed MEC infrastructure. The proposed framework considers various decision parameters, such as the UE’s location, the service instance’s demand (i.e., resource utilization), the network link status, and the service instance’s performance requirements (i.e., service profile), to offer a generic solution for discovering service instances and assigning them to the UE. The framework implementation results show an improved packet delivery ratio and lower user-perceived latency.
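One simple way to combine decision parameters like those listed is a weighted-cost assignment (a sketch under our own assumptions: the parameter set is abbreviated and the weights are illustrative, not the paper's actual algorithm):

```python
def assign_instance(instances, w_dist=1.0, w_load=1.0, w_lat=1.0):
    """Assign a UE to the instance minimizing a weighted cost.

    instances: {name: (distance_to_ue, current_load, link_latency)},
    each value normalized to comparable scales. Illustrative only.
    """
    def cost(name):
        dist, load, lat = instances[name]
        return w_dist * dist + w_load * load + w_lat * lat

    return min(instances, key=cost)

mecs = {
    "mec-east": (1.0, 0.9, 5.0),  # closest, but heavily loaded, slow link
    "mec-west": (2.0, 0.1, 3.0),  # farther, but lightly loaded, fast link
}
print(assign_instance(mecs))  # mec-west: 2.0+0.1+3.0 = 5.1 < 6.9
```

The weights let an operator trade proximity against load and link quality; re-evaluating the cost as the UE moves is what makes the assignment dynamic.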
Cited by: 1
SIMORA: SIMulating Open Routing protocols for Application interoperability on edge devices
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00013
Benjamin Warnke, Yuri Cotrado Sehgelmeble, Johannes Mantler, Sven Groppe, S. Fischer
In low-power networks, sending data over the network is one of the largest energy consumers. Therefore, any application running in such an environment must reduce the volume of communication it causes. State-of-the-art network simulators focus on specific parts of the network stack. Due to this specialization, the application interface is simplified to the point that often only fictive, abstract applications can be simulated. However, we want to gain insight into the interplay between routing and real applications in order to reduce communication costs. For this purpose, we propose our new simulator SIMORA. To demonstrate SIMORA's possibilities, we deploy a simple distributed application. We then develop advanced techniques, such as dynamic content multicast, to reduce the number of messages and the volume of data sent. In our experiments, we achieve reductions of up to 94% in the number of messages and 29% in bytes transferred over the network compared to a traditional multicast approach.
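The saving behind multicasting content rather than unicasting it per subscriber can be illustrated with a toy edge-count model (our own illustration, not SIMORA's implementation): on a shared distribution tree each edge carries the payload once, whereas per-subscriber unicast re-sends it over every shared hop.

```python
def unicast_transmissions(paths):
    """paths: per-subscriber lists of edges from the source to that subscriber.
    Unicast sends the message over every edge of every path."""
    return sum(len(p) for p in paths)

def multicast_transmissions(paths):
    """Multicast sends the message once per distinct edge of the union
    of paths (i.e., once per edge of the distribution tree)."""
    return len({edge for p in paths for edge in p})

# Three subscribers behind a shared first hop s-r1; a and b also share r1-r2.
paths = [
    ["s-r1", "r1-r2", "r2-a"],
    ["s-r1", "r1-r2", "r2-b"],
    ["s-r1", "r1-c"],
]
print(unicast_transmissions(paths))    # 8
print(multicast_transmissions(paths))  # 5
```

The gap widens with subscriber count and shared-path depth, which is why message counts drop sharply when content is multicast.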
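The "dynamic content multicast" idea described in the abstract can be illustrated with a minimal sketch: instead of unicasting each subscriber its requested content, group subscribers by identical content and send one multicast message per group. The names and the subscription model below are illustrative assumptions, not SIMORA's actual API.

```python
from collections import defaultdict

def unicast_count(subscriptions):
    """One message per (subscriber, content) pair."""
    return len(subscriptions)

def dynamic_content_multicast_count(subscriptions):
    """One message per distinct content item, shared by all its subscribers."""
    groups = defaultdict(set)
    for subscriber, content in subscriptions:
        groups[content].add(subscriber)
    return len(groups)

# Three nodes subscribe to "temp"; one of them also subscribes to "humidity".
subs = [("n1", "temp"), ("n2", "temp"), ("n3", "temp"), ("n3", "humidity")]
print(unicast_count(subs))                    # 4 messages with plain unicast
print(dynamic_content_multicast_count(subs))  # 2 messages with content grouping
```

The message savings grow with the number of subscribers per content item, which is consistent with the large reductions the paper reports for its workloads.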
Citations: 5
High-Level Metrics for Service Level Objective-aware Autoscaling in Polaris: a Performance Evaluation
Pub Date : 2022-05-01 DOI: 10.1109/icfec54809.2022.00017
Nicolò Bartelucci, P. Bellavista, Thomas W. Pusztai, Andrea Morichetta, S. Dustdar
With the increasing complexity, requirements, and variability of cloud services, it is not always easy to find the right static/dynamic thresholds for the optimal configuration of low-level metrics in autoscaling resource-management decisions. A Service Level Objective (SLO) is a high-level commitment, within a Service Level Agreement (SLA), to maintaining a specific state of a service over a given period: the goal is to respect a given metric, such as uptime or response time, within given time or accuracy constraints. In this paper, we show the advantages and present the progress of an original SLO-aware autoscaler for the Polaris framework. In addition, the paper contributes to the literature by presenting novel experimental results comparing the autoscaling performance of Polaris, based on a high-level latency SLO, with that of a low-level average-CPU-based SLO implemented by the Kubernetes Horizontal Pod Autoscaler.
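The contrast between the two autoscaling styles can be sketched in a few lines. The CPU-based rule below is the documented Kubernetes Horizontal Pod Autoscaler formula (desired = ceil(currentReplicas × currentMetric / targetMetric)); the latency-based rule is only an illustrative assumption of how an SLO-driven scaler might react, not Polaris's actual algorithm.

```python
import math

def hpa_desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct):
    """Kubernetes HPA scaling rule on a low-level metric (average CPU)."""
    return math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)

def slo_desired_replicas(current_replicas, observed_latency_ms, slo_latency_ms):
    """Hypothetical SLO-driven rule: scale out proportionally when the
    observed latency exceeds the high-level latency target."""
    return math.ceil(current_replicas * observed_latency_ms / slo_latency_ms)

print(hpa_desired_replicas(4, 90, 60))    # 4 * 90/60 -> 6 replicas
print(slo_desired_replicas(4, 250, 200))  # 4 * 250/200 -> 5 replicas
```

The key difference the paper evaluates is which signal drives the decision: a proxy resource metric (CPU) versus the user-facing objective itself (latency).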
Citations: 1