Estimating Function Completion Time Distribution in Open Source FaaS
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657119
David Balla, M. Maliosz, Csaba Simon
Function as a Service (FaaS) is the newest stage of application virtualization. Several public cloud providers offer FaaS solutions, and the open source community has also embraced the technology. In this paper we introduce a Python-based function runtime, applicable in open source FaaS platforms for latency-sensitive, compute-intensive applications, that reduces maximum completion times by taking the number of CPU cores into account. We also present a simulator that estimates the distribution of completion times for compute-intensive functions when the proposed function runtime is in use. We present simulator results for two compute-intensive functions, and we also show a scenario in which the user function is not purely compute-intensive.
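A minimal sketch of the simulator's core idea, under assumptions not taken from the paper (exponentially distributed per-invocation work, a batch arriving at once, and an idealized proportional slowdown when invocations outnumber cores):

```python
import random
import statistics

def simulate_batch(n_invocations, n_cores, mean_work_s=1.0, runs=2000):
    # Coarse model: a batch of compute-bound invocations starts together;
    # when more invocations than cores are active, each is slowed down
    # proportionally. Real processor sharing would speed survivors up as
    # others finish, so this is an upper-bound approximation.
    completions = []
    for _ in range(runs):
        work = [random.expovariate(1.0 / mean_work_s) for _ in range(n_invocations)]
        slowdown = max(1.0, n_invocations / n_cores)
        completions.append(max(work) * slowdown)
    return completions

times = sorted(simulate_batch(n_invocations=8, n_cores=4))
print(f"median={statistics.median(times):.2f}s  p99={times[int(0.99 * len(times))]:.2f}s")
```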
{"title":"Estimating Function Completion Time Distribution in Open Source FaaS","authors":"David Balla, M. Maliosz, Csaba Simon","doi":"10.1109/CloudNet53349.2021.9657119","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657119","url":null,"abstract":"Function as a Service (FaaS) is the newest stage of application virtualization. Several public cloud providers offer FaaS solutions, however, the open source community also embraced this technology. In this paper we introduce a Python based function run-time, applicable in open source FaaS platforms for latency sensitive compute intensive applications that reduces the maximum completion times by taking into account the number of CPU cores. We also present our simulator that estimates the distribution of the completion times for compute intensive functions, when our proposed function run-time is in use. We present the results of our simulator by using two compute intensive functions. We also show a scenario when the user function is not purely compute intensive.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114541775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Delay-Sensitive IoT Application by Programmable Local 5G Edge
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657146
Koichiro Amemiya, A. Nakao
IoT services collect sensor and multimedia data from edge devices to capture the status of the physical world. Delay-sensitive traffic, especially traffic for monitoring and controlling the edge devices, should be transferred and processed with priority even when congestion occurs in the network, because system resources are shared with data-intensive, delay-tolerant traffic. A local 5G system is promising for delay-sensitive IoT services because it enables the local 5G operator to control the programmable local 5G system and its service level themselves. However, it is not easy to install novel congestion control protocols on the non-programmable system components outside the local 5G system, such as proprietary IoT devices and wide-area networks operated by network carriers. Our contribution is three-fold. First, we propose a traffic control method for delay-sensitive IoT services that is installed only in the 5G UPF or in edge routers in the data network (DN), without modifying IoT devices or controlling the wide-area network. It controls the latency of delay-sensitive traffic by classifying traffic as delay-sensitive or delay-tolerant, adding delays to the delay-tolerant traffic, and modifying the receive window size in its packets. Second, we propose an implementation architecture for programmable whitebox switches utilizing BPF/XDP functionality. Finally, we evaluate the proposed method. The evaluation results show that it keeps the latency of delay-sensitive traffic within the required latency for single and multiple local 5G locations that share the opaque wide-area network.
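The mechanism can be illustrated with a toy, user-space version of the policing logic (the paper implements it in BPF/XDP on the UPF or DN edge router; the port set, window clamp, and added delay below are illustrative assumptions):

```python
DELAY_SENSITIVE_PORTS = {1883, 5683}  # hypothetical: MQTT and CoAP control traffic

def police_packet(pkt):
    # Pass delay-sensitive traffic untouched; for delay-tolerant TCP
    # traffic, add queueing delay and clamp the advertised receive
    # window so the sender backs off during congestion.
    if pkt["dst_port"] in DELAY_SENSITIVE_PORTS:
        return {"action": "forward", "extra_delay_ms": 0}
    clamped = min(pkt.get("rcv_window", 65535), 4096)  # 4 KiB clamp is an assumed value
    return {"action": "forward",
            "extra_delay_ms": 5,  # assumed added delay
            "rewrite_rcv_window": clamped}

print(police_packet({"dst_port": 443, "rcv_window": 65535}))
```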
{"title":"Enabling Delay-Sensitive IoT Application by Programmable Local 5G Edge","authors":"Koichiro Amemiya, A. Nakao","doi":"10.1109/CloudNet53349.2021.9657146","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657146","url":null,"abstract":"IoT services collect sensor and multimedia data from edge devices for capturing the status of the physical world. Delay-sensitive traffic, especially for monitoring and controlling the edge devices, should be transferred and processed in a priority manner even if congestion in the network occurs because of the system resource sharing with data-intensive and delay-tolerant traffic. A local 5G system is promising for achieving delay-sensitive IoT services because it enables the local 5G operator to control the programmable local 5G system and service level by themselves. But it isn’t easy to install novel congestion control protocols to the non-programmable system components other than the local 5G system, such as proprietary IoT devices and wide-area networks operated by network carriers. Our contribution is three-fold: First, we propose a traffic control method for delay-sensitive IoT services installed only in 5G UPF or edge routers in DN without modifying IoT devices or controlling the wide-area network. It controls the latency of delay-sensitive traffic by classifying the delay-sensitive and delay-tolerant traffic, adding delays to, and modifying the receive window size in the packets of the delay-tolerant traffic. Second, we propose an implementation architecture for the programmable Whitebox switches utilizing BPF/XDP functionality. Finally, we evaluate our proposed method. The evaluation result shows that our proposed method keeps the latency of delay-sensitive traffic within the required latency for single and multiple Local 5G locations that share the obscure wide-area network.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129325711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Profit-aware placement of multi-flavoured VNF chains
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657114
F. Paganelli, P. Cappanera, Antonio Brogi, Riccardo Falco
Network Function Virtualization (NFV) is a promising approach for network operators to cope with the increasing demand for network services in a flexible and cost-efficient way. How to place Virtualized Network Function (VNF) chains across the network infrastructure so as to achieve providers’ goals is a relevant research problem. Several emerging aspects, such as possible resource shortages at edge locations and the demand for accelerated infrastructure resources for high-performance deployments, make this problem even more challenging. In such cases, downgrading a service request to an alternative flavour (with less stringent resource requirements and/or fewer offered features) might help increase the acceptance rate and, to a certain extent, the network service provider’s profit. In this work we formalize the problem of placing network services specified as multi-flavoured VNF chains and present an Integer Linear Programming (ILP) approach for solving it optimally. Simulation results demonstrate the feasibility and potential benefit of the proposed approach in both online and offline placement scenarios, with an improvement in profit of up to 16% and 18%, respectively, with respect to the case where requests are specified in a single flavour.
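A toy flavour-aware acceptance ILP, sketched with the PuLP solver under assumed profits, CPU demands, and a single-node capacity (the paper's full model also covers chaining and placement across the infrastructure):

```python
# pip install pulp -- a generic ILP sketch, not the paper's exact model
import pulp

requests = {"r1": {"full": (10, 8.0), "lite": (4, 5.0)},   # flavour: (CPU, profit), assumed
            "r2": {"full": (8, 6.0), "lite": (3, 3.5)}}
capacity = 14  # assumed node capacity

prob = pulp.LpProblem("multi_flavour_placement", pulp.LpMaximize)
x = {(r, f): pulp.LpVariable(f"x_{r}_{f}", cat="Binary")
     for r, flavours in requests.items() for f in flavours}

# objective: total profit of accepted (request, flavour) pairs
prob += pulp.lpSum(requests[r][f][1] * x[r, f] for (r, f) in x)
# each request is accepted in at most one flavour
for r, flavours in requests.items():
    prob += pulp.lpSum(x[r, f] for f in flavours) <= 1
# accepted flavours must fit the node capacity
prob += pulp.lpSum(requests[r][f][0] * x[r, f] for (r, f) in x) <= capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(r, f) for (r, f) in x if x[r, f].value() == 1])
```

Downgrading shows up here directly: when both "full" flavours do not fit the capacity, the solver accepts a "lite" flavour instead of rejecting the request outright.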
{"title":"Profit-aware placement of multi-flavoured VNF chains","authors":"F. Paganelli, P. Cappanera, Antonio Brogi, Riccardo Falco","doi":"10.1109/CloudNet53349.2021.9657114","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657114","url":null,"abstract":"Network Function Virtualization (NFV) is a promising approach for network operators to cope with the increasing demand for network services in a flexible and cost-efficient way. How to place Virtualized Network Function (VNF) chains across the network infrastructure to achieve providers’ goals is a relevant research problem. Several emerging aspects, such as possible resource shortage at edge locations and the demand for accelerated infrastructural resources for high-performance deployments, make this problem even more challenging. In such cases, downgrading a service request to an alternative flavour (with less stringent resource requirements and/or fewer offered features) might help increasing the acceptance rate and, to a certain extent, the network service provider’s profit. In this work we formalize the problem of placing network services specified as multi-flavoured VNF chains and present an Integer Linear Programming (ILP) approach for optimally solving it. Simulation results demonstrate the feasibility and potential benefit of the proposed approach, both in online and offline placement scenarios, with an improvement in profit of up to 16% and 18%, respectively, with respect to the case where requests are specified in a single flavour.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129890029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Batch Scheduling of Large Numbers of Cloud Benchmarks
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657127
Derek Phanekham, Troy Walker, S. Nair, Mike Truty, Manasa Chalasani, Rick Jones
This paper proposes a framework for efficiently running benchmarks on one or more cloud environments. It is essential that users and businesses utilize benchmarks to understand the performance of their network or of the cloud environment where they host their virtual networks and machines. When performing a large number of benchmarks, we have found it necessary to construct a tool that automatically and efficiently schedules the benchmarks we need to run without exceeding our resource limits or interfering with other benchmarks. We have developed a benchmarking tool called PKB_scheduler that accepts benchmark configuration files and constructs an optimal graph of virtual machine usage. PKB_scheduler uses this graph to dynamically schedule batches of benchmarks, seeking to reduce the total number of batches and thus the overall benchmark execution time.
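A first-fit sketch of the batching idea under an assumed per-batch VM quota (PKB_scheduler itself builds a graph of virtual machine usage; the benchmark names and VM counts below are hypothetical):

```python
def schedule_batches(benchmarks, vm_quota):
    # Greedy first-fit-decreasing: place each benchmark into the first
    # batch with spare VM quota, opening a new batch when none fits.
    batches = []
    for name, vms_needed in sorted(benchmarks.items(), key=lambda kv: -kv[1]):
        for batch in batches:
            if batch["vms"] + vms_needed <= vm_quota:
                batch["jobs"].append(name)
                batch["vms"] += vms_needed
                break
        else:
            batches.append({"jobs": [name], "vms": vms_needed})
    return batches

# hypothetical benchmark -> VM-count map and a quota of 10 VMs per batch
print(schedule_batches({"iperf": 2, "netperf": 2, "hpcc": 6, "fio": 4}, 10))
```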
{"title":"Efficient Batch Scheduling of Large Numbers of Cloud Benchmarks","authors":"Derek Phanekham, Troy Walker, S. Nair, Mike Truty, Manasa Chalasani, Rick Jones","doi":"10.1109/CloudNet53349.2021.9657127","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657127","url":null,"abstract":"This paper proposes a framework for efficiently running benchmarks on one or multiple cloud environments. It is essential that users and businesses utilize benchmarks to understand the performance of their network or the cloud environment where they host their virtual network and machines. When performing a large number of benchmarks, we have found it necessary to construct a tool that allows us to automatically and efficiently schedule the benchmarks we need to run without exceeding our resource limits or interfering with other benchmarks. We have developed a benchmarking tool called PKB_scheduler that accepts benchmark configuration files, and will construct an optimal graph of virtual machine usage. PKB_scheduler will use this graph to dynamically schedule batches of benchmarks to run, seeking to reduce the total number of batches and thus the overall time of benchmark execution.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124271211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mitigation of DNS Water Torture Attacks within the Data Plane via XDP-Based Naive Bayes Classifiers
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657122
Nikos Kostopoulos, Stavros Korentis, D. Kalogeras, B. Maglaris
Water Torture is a DDoS attack vector that exhausts the processing resources of victim Authoritative DNS Servers. By crafting DNS requests for names that appear only once and are unknown to the victim, attackers bypass the DNS caches of intermediary Recursive DNS Servers (Resolvers), so the entire attack traffic is forwarded to the victim. As a countermeasure, machine learning algorithms have been proposed to filter attack traffic on Resolvers. Our proposed scheme uses programmable data plane methods to implement efficient machine learning algorithms that differentiate between legitimate and DDoS attack traffic within cloud infrastructures. Specifically, we leverage XDP to implement data plane Naive Bayes Classifier inference and effectively mitigate Water Torture attacks within data center Resolvers. DNS requests that the Naive Bayes Classifier regards as invalid are dropped within the Linux kernel before any resources are allocated to them, while valid ones are forwarded to user space to be resolved. Our scheme was assessed in a proof-of-concept setup within a virtualized environment, with learning and testing performed on legitimate and malicious DNS data records whose statistical properties are consistent with datasets widely reported in the literature. Our experiments mainly focused on evaluating the filtering throughput of the proposed mitigation scheme given the constraints imposed by XDP. We conclude that our XDP-based Naive Bayes Classifier significantly decreases the volume of attack traffic within the data plane, thus efficiently safeguarding Resolvers.
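A pure-Python sketch of per-name Naive Bayes scoring over character frequencies, roughly the kind of inference the paper compiles into the XDP fast path; the feature choice, smoothing, and toy training names are assumptions:

```python
import math

def train_nb(names, alpha=1.0):
    # Per-class character log-likelihoods with Laplace smoothing;
    # a minimal stand-in for the paper's training step.
    counts, total = {}, 0
    for n in names:
        for ch in n.lower():
            counts[ch] = counts.get(ch, 0) + 1
            total += 1
    vocab = set("abcdefghijklmnopqrstuvwxyz0123456789-.")
    return {ch: math.log((counts.get(ch, 0) + alpha) /
                         (total + alpha * len(vocab))) for ch in vocab}

def log_score(model, name):
    return sum(model.get(ch, math.log(1e-6)) for ch in name.lower())

# toy training data: legitimate names vs random-looking attack labels
legit = train_nb(["www.example.com", "mail.example.com"])
attack = train_nb(["xq7vz.example.com", "k9f2p.example.com"])

def verdict(qname):
    # drop when the attack model explains the name better; in the paper
    # this decision happens in-kernel, before any resources are allocated
    return "DROP" if log_score(attack, qname) > log_score(legit, qname) else "PASS"

print(verdict("a8z3q.example.com"))
```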
{"title":"Mitigation of DNS Water Torture Attacks within the Data Plane via XDP-Based Naive Bayes Classifiers","authors":"Nikos Kostopoulos, Stavros Korentis, D. Kalogeras, B. Maglaris","doi":"10.1109/CloudNet53349.2021.9657122","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657122","url":null,"abstract":"Water Torture is a DDoS attack vector that exhausts the processing resources of victim Authoritative DNS Servers. By crafting DNS requests involving names that appear once and are unknown to the victim, attackers bypass the DNS caches of intermediary Recursive DNS Servers (Resolvers), hence forwarding the entire attack traffic to the victim. As a countermeasure, machine learning algorithms have been proposed to filter attack traffic on Resolvers.Our proposed schema implements via programmable data plane methods efficient machine learning algorithms that differentiate between legitimate and DDoS attack traffic within cloud infrastructures. Specifically, we leverage on XDP to implement data plane Naive Bayes Classifier inference and effectively mitigate Water Torture attacks within data center Resolvers. DNS requests regarded as invalid by the Naive Bayes Classifier are dropped within the Linux kernel before any resources are allocated to them, while valid ones are forwarded to the user space to be resolved.Our schema was assessed via a proof of concept setup within a virtualized environment, with learning and testing performed via legitimate and malicious DNS data records with statistical properties consistent with datasets widely reported in the literature. Our experiments mainly focused on evaluating the filtering throughput of the proposed mitigation schema given the constraints imposed by XDP. We conclude that our XDP-based Naive Bayes Classifier significantly decreases the volume of attack traffic within the data plane, thus efficiently safeguarding Resolvers.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123474532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Partial Model Extractions using Uncertainty Quantification
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657130
Arne Aarts, Wil Michiels, Peter Roelse
Companies deploy deep learning models in the cloud and offer black-box access to them as a pay-as-you-go service. It has been shown that, with enough queries, those models can be extracted. This paper presents a new cloning scheme using uncertainty quantification, enabling an adversary to leverage partial model extractions. First, a relatively small number of queries is spent to extract part of the target’s model. Second, for every query directed at the adversary, the uncertainty of the extracted model’s output is computed; when it is below a given threshold, the adversary returns that output. Otherwise, the query is delegated to the target’s model and its output is returned. In this way the adversary is able to monetize knowledge that has already been extracted successfully. We propose methods to determine thresholds such that the accuracy of the new scheme is close to the target network’s accuracy. The new scheme has been implemented, and experiments were conducted on the Caltech-256 and Indoor datasets using multiple uncertainty quantification methods. The results show that the rate of delegation decreases logarithmically with the initial number of queries spent on extraction. Compared to conventional cloning techniques, the main advantages of the new scheme are that the total cost in terms of queries to the target model can be lower while achieving the same accuracy, and that the accuracy of the new scheme can be made arbitrarily close to the target model’s accuracy by selecting a suitable threshold value.
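A minimal sketch of the delegation rule, using softmax entropy as one possible uncertainty measure (the paper evaluates multiple uncertainty quantification methods; the threshold and the toy models below are placeholders):

```python
import math

def entropy(probs):
    # Shannon entropy of a class-probability vector: low entropy
    # means the (partially extracted) clone is confident.
    return -sum(p * math.log(p) for p in probs if p > 0)

def answer(query, clone_predict, target_predict, threshold=0.5):
    # Serve from the clone when it is confident enough; otherwise
    # spend a (paid) query on the victim model and return its output.
    probs = clone_predict(query)
    if entropy(probs) < threshold:
        return max(range(len(probs)), key=probs.__getitem__), "clone"
    return target_predict(query), "delegated"

# toy stand-ins for the two models
clone = lambda q: [0.9, 0.05, 0.05]   # confident clone output
target = lambda q: 2                  # victim model's label
print(answer("img.png", clone, target))
```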
{"title":"Leveraging Partial Model Extractions using Uncertainty Quantification","authors":"Arne Aarts, Wil Michiels, Peter Roelse","doi":"10.1109/CloudNet53349.2021.9657130","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657130","url":null,"abstract":"Companies deploy deep learning models in the cloud and offer black-box access to them as a pay as you go service. It has been shown that with enough queries those models can be extracted. This paper presents a new cloning scheme using uncertainty quantification, enabling the adversary to leverage partial model extractions. First, a relatively small number of queries is spent to extract part of the target’s model. Second, for every query directed at the adversary, the uncertainty of the output of the extracted model is computed; when below a given threshold, the adversary will return the output. Otherwise, the query is delegated to the target’s model and its output returned. In this way the adversary is able to monetize knowledge that has successfully been extracted. We propose methods to determine thresholds such that the accuracy of the new scheme is close to the target network’s accuracy. The new scheme has been implemented, and experiments were conducted on the Caltech-256 and indoor datasets using multiple uncertainty quantification methods. The results show that the rate of delegation decreases logarithmically with the initial number of queries spent on extraction. Compared to conventional cloning techniques, the main advantages of the new scheme are that the total costs in terms of queries to the target model can be lower while achieving the same accuracy, and that the accuracy of the new scheme can be arbitrarily close to the target model’s accuracy by selecting a suitable value of the threshold.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122189825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Machine Learning and In-band Network Telemetry for Service Metrics Estimation
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657155
L. Almeida, R. Pasquini, F. Verdi
Data plane programmable devices used together with In-band Network Telemetry (INT) enable the collection of data about networks’ operation at a level of granularity never achieved before. Given that Machine Learning (ML) has been widely adopted in networking, the scenario investigated in this paper opens up the opportunity to advance the state of the art by applying such a vast amount of data to the management of networks and of the services offered on top of them. This paper feeds ML algorithms with data piped directly from INT - essentially statistics associated with buffers at network devices’ interfaces - with the objective of estimating service metrics. The service running on our testbed is DASH (Dynamic Adaptive Streaming over HTTP) - the most widely used protocol for video streaming nowadays - which poses great challenges to our investigation, since it automatically adapts video quality to oscillations in network conditions. Using well-established load patterns from the literature - sinusoid, flashcrowd, and a mix of both at the same time - we emulate oscillations in the network, i.e., realistic dynamics at all buffers on the interfaces, which are captured using INT capabilities. When estimating the quality of the video being streamed to our clients, we observed an NMAE (Normalized Mean Absolute Error) below 10% when Random Forest is used, which improves on current related work.
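A self-contained sketch of the estimation step with scikit-learn, using synthetic stand-ins for the INT buffer statistics and the video-quality target (NMAE is computed here as MAE normalized by the mean of the true values, one common convention):

```python
# pip install scikit-learn numpy -- synthetic stand-in for real INT data
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# hypothetical features: per-interface buffer occupancy statistics from INT
X = rng.uniform(0, 1, size=(1000, 4))
# hypothetical target: a video-quality metric degraded by buffer load
y = 10 - 8 * X.mean(axis=1) + rng.normal(0, 0.3, size=1000)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:800], y[:800])
pred = model.predict(X[800:])

# NMAE: mean absolute error normalized by the mean of the true values
nmae = np.mean(np.abs(pred - y[800:])) / np.mean(y[800:])
print(f"NMAE = {nmae:.3f}")
```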
{"title":"Using Machine Learning and In-band Network Telemetry for Service Metrics Estimation","authors":"L. Almeida, R. Pasquini, F. Verdi","doi":"10.1109/CloudNet53349.2021.9657155","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657155","url":null,"abstract":"Data plane programmable devices used together with In-band Network Telemetry (INT) enable the collection of data regarding networks’ operation at a level of granularity never achieved before. Based on the fact that Machine Learning (ML) has been widely adopted in networking, the scenario investigated in this paper opens up the opportunity to advance the state of the art by applying such vast amount of data to the management of networks and the services offered on top of it. This paper feeds ML algorithms with data piped directly from INT - essentially statistics associated to buffers at network devices’ interfaces - with the objective of estimating services’ metrics. The service running on our testbed is DASH (Dynamic Adaptive Streaming over HTTP) - the most used protocol for video streaming nowadays - which brings great challenges to our investigations since it is capable of automatically adapting the quality of the videos due to oscillations in networks’ conditions. By using well established load patterns from the literature - sinusoid, flashcrowd and a mix of both at the same time - we emulate oscillations in the network, i.e., realistic dynamics at all buffers in the interfaces, which are captured by using INT capabilities. While estimating the quality of video being streamed towards our clients, we observed an NMAE (Normalized Mean Absolute Error) below 10% when Random Forest is used, which is better than current related works.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115422318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DFaaS: Decentralized Function-as-a-Service for Federated Edge Computing
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657141
M. Ciavotta, Davide Motterlini, Marco Savi, Alessandro Tundo
Edge Computing pushes cloud capabilities to the edge of the network, closer to the users, to address stringent Quality-of-Service requirements and ensure more efficient bandwidth usage. Function-as-a-Service appears to be the most natural service model to enhance the deployment and responsiveness of Edge Computing applications. Unfortunately, the conventional FaaS model does not fit well in distributed and heterogeneous edge environments, where traffic demands arrive at (and are served by) edge nodes that may become overloaded under certain traffic conditions, or where the access points of the network may frequently change, as with mobile applications. This short paper tries to fill this gap by proposing DFaaS, a novel decentralized FaaS-based architecture designed to autonomously balance the traffic load across edge nodes belonging to federated Edge Computing ecosystems. The DFaaS implementation relies on an overlay peer-to-peer network and a distributed control plane that makes load redistribution decisions. Although preliminary, the results confirm the feasibility of the approach, showing that the system can transparently redistribute load across edge nodes when they become overloaded.
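A threshold-based sketch of the redistribution decision (in DFaaS this decision is made by a distributed control plane over the p2p overlay, not by one central function; the loads, threshold, and proportional forwarding rule are assumptions):

```python
def redistribute(nodes, overload=0.8):
    # For every overloaded node, forward just enough of its incoming
    # function invocations to the least-loaded peer to fall back to
    # the threshold. Assumes at least one peer is below the threshold.
    plan = {}
    for name, load in nodes.items():
        if load > overload:
            peer = min(nodes, key=nodes.get)
            plan[name] = {"forward_to": peer,
                          "fraction": round((load - overload) / load, 2)}
    return plan

print(redistribute({"edge-a": 0.95, "edge-b": 0.40, "edge-c": 0.70}))
```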
{"title":"DFaaS: Decentralized Function-as-a-Service for Federated Edge Computing","authors":"M. Ciavotta, Davide Motterlini, Marco Savi, Alessandro Tundo","doi":"10.1109/CloudNet53349.2021.9657141","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657141","url":null,"abstract":"Edge Computing pushes cloud capabilities to the edge of the network, closer to the users, to address stringent Quality-of-Service requirements and ensure more efficient bandwidth usage. Function-as-a-Service appears to be the most natural service model solution to enhance Edge Computing applications’ deployment and responsiveness. Unfortunately, the conventional FaaS model does not fit well in distributed and heterogeneous edge environments, where traffic demands arrive to (and are served by) edge nodes that may get overloaded under certain traffic conditions or where the access points of the network might frequently change, as for mobile applications. This short paper tries to fill this gap by proposing DFaaS, a novel decentralized FaaS-based architecture designed to autonomously balance the traffic load across edge nodes belonging to federated Edge Computing ecosystems. DFaaS implementation relies on an overlay peer-to-peer network and a distributed control plane that takes decisions on load redistribution. Although preliminary, results confirm the feasibility of the approach, showing that the system can transparently redistribute the load across edge nodes when they become overloaded.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"17 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123208923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Edge Video Analysis Solution For Intelligent Real-Time Video Surveillance Systems
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657113
Alessandro Silva, Michel S. Bonfim, P. Rego
Video Analytics has played an essential role in the most varied public safety sectors, mainly when applied to Intelligent Video Surveillance Systems. In this scenario, Edge Video Analytics seeks to migrate part of the workload of the video analysis process to devices close to the data source, to reduce transmission overhead on the network and overall latency. This work therefore proposes an Edge Video Analytics architecture for real-time video monitoring systems. The architecture divides the analysis process into functional, independent modules and is flexible enough to support analytics or network functions. We developed a proof of concept to validate the proposed architecture, focusing on detecting and recognizing license plate characters at the edge. In this scenario, between the detection and recognition modules, we used Deep Learning to implement a module responsible for discarding plates with distorted identification text, in order to reduce network utilization. The conducted experiments demonstrate that the architecture meets its objectives, reducing network traffic by an average of 25.64% in the frame flow, due to resolution quality control, and by 56.65% in the license plate flow, due to the proposed filtering step.
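The role of the discard module can be sketched with simple stand-in heuristics (the paper uses a Deep Learning model to judge whether a plate's text is too distorted to recognize; the thresholds and fields below are hypothetical):

```python
def plate_filter(detections, min_confidence=0.75, min_width_px=80):
    # Sit between detection and recognition: drop plate crops that are
    # unlikely to yield readable text, so they are never sent over the
    # network to the recognition module.
    return [d for d in detections
            if d["confidence"] >= min_confidence and d["width_px"] >= min_width_px]

detections = [{"plate_id": 1, "confidence": 0.92, "width_px": 120},
              {"plate_id": 2, "confidence": 0.55, "width_px": 60}]  # distorted crop
print(plate_filter(detections))  # only plate 1 proceeds to recognition
```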
{"title":"An Edge Video Analysis Solution For Intelligent Real-Time Video Surveillance Systems","authors":"Alessandro Silva, Michel S. Bonfim, P. Rego","doi":"10.1109/CloudNet53349.2021.9657113","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657113","url":null,"abstract":"Video Analytics has played an essential role in the most varied public safety sectors, mainly when applied to Intelligent Video Surveillance Systems. In this scenario, Edge Video Analytics seeks to migrate part of the workload of the Video Analysis process to devices close to the data source to reduce transmission overhead on the network and overall latency. Therefore, this work proposes an Edge Video Analytics architecture for real-time video monitoring systems. Such architecture divides the analysis process into functional and independent modules, being flexible to support analytics or network functions. We developed a proof of concept to validate the proposed architecture, focusing on detecting and recognizing license plate characters in the edge. In this scenario, between the detection and recognition modules, we used Deep Learning to implement a module responsible for discard plates with distorted identification text to reduce network utilization. Conducted experiments demonstrate that the architecture meets its objectives by reducing an average of 25.64% of the network traffic in its frames flow due to a resolution quality control and 56.65% in its license plate flow due to the filtering step proposed.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121681473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Global Orchestration Matching Framework for Energy-Efficient Multi-Access Edge Computing
Pub Date: 2021-11-08 | DOI: 10.1109/CloudNet53349.2021.9657120
Tobias Mahn, Anja Klein
Multi-access edge computing (MEC) enables mobile units (MUs) to offload computation tasks to nearby edge servers. This translates into energy savings for the MUs, but creates a joint problem of offloading decision making and allocation of the shared communication and computation resources. In a MEC scenario with multiple MUs, multiple access points, and multiple cloudlets, the complexity of this joint problem grows rapidly with the number of entities in the network. The complexity increases even further when some MUs have a higher incentive to offload tasks due to a low battery level and are willing to pay in exchange for more resources. Our proposed energy-minimization approach with a flexible maximum offloading time constraint is based on matching theory. A global orchestrator (GO) collects all the system state information and coordinates the offloading preferences of the MUs. An MU can lower its maximum time constraint through a payment, and the GO allocates the shared communication and computation resources accordingly to satisfy the time constraint. The computation load of the algorithm at each MU is reduced to a minimum, as each MU only has to make a simple offloading decision based on its task properties and payment willingness. In numerical simulations, the proposed matching approach and flexible resource allocation scheme are shown to converge quickly and reliably, even in large networks with hundreds of MUs. Furthermore, the matching algorithm, tested with different resource allocation strategies, shows a significant improvement in energy efficiency over the considered reference schemes.
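A per-MU decision sketch under a textbook energy model (the paper's matching-based orchestration and pricing are richer; the cycle counts, energy figures, and deadline below are assumed values):

```python
def offload_decision(task_cycles, local_f_hz, local_j_per_cycle,
                     tx_energy_j, deadline_s, remote_f_hz):
    # Offload when transmission costs less energy than local execution
    # and the resources allocated by the GO still meet the (possibly
    # payment-reduced) completion deadline.
    local_energy = task_cycles * local_j_per_cycle
    local_time = task_cycles / local_f_hz
    remote_time = task_cycles / remote_f_hz  # a full model adds transfer time
    if tx_energy_j < local_energy and remote_time <= deadline_s:
        return "offload"
    if local_time <= deadline_s:
        return "local"
    return "offload"  # local execution cannot meet the deadline

# 2 Gcycles, 1 GHz / 1 nJ per cycle locally, 0.8 J to transmit,
# 1 s deadline, 4 GHz allocated remotely -> "offload"
print(offload_decision(2e9, 1e9, 1e-9, 0.8, 1.0, 4e9))
```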
{"title":"A Global Orchestration Matching Framework for Energy-Efficient Multi-Access Edge Computing","authors":"Tobias Mahn, Anja Klein","doi":"10.1109/CloudNet53349.2021.9657120","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657120","url":null,"abstract":"Multi-access edge computing (MEC) enables mobile units (MUs) to offload computation tasks to edge servers nearby. This translates in energy savings for the MUs, but creates a joint problem of offloading decision making and allocation of the shared communication and computation resources. In a MEC scenario with multiple MUs, multiple access points and multiple cloudlets the complexity of this joint problem grows rapidly with the number of entities in the network. The complexity increases even further when some MUs have a higher incentive to offload tasks due to a low battery level and are willing to pay in exchange for more resources. Our proposed energy-minimization approach with a flexible maximum offloading time constraint is based on matching theory. A global orchestrator (GO) collects all the system state information and coordinates the offloading preferences of the MUs. A MU can lower the maximum time constraint by a payment. The GO allocates the shared communication and computation resources accordingly to satisfy the time constraint. The computation load of the algorithm at each MU is reduced to a minimum as each MU only has to take a simple offloading decision based on its task properties and payment willingness. In numerical simulations, the proposed matching approach and flexible resource allocation scheme is tested for fast and reliable convergence, even in large networks with hundreds of MUs. Furthermore, the matching algorithm, tested with different resource allocation strategies, shows a significant improvement in terms of energy-efficiency over the considered reference schemes.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129311672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}