
Latest publications from the 2021 IEEE 10th International Conference on Cloud Networking (CloudNet)

Estimating Function Completion Time Distribution in Open Source FaaS
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657119
David Balla, M. Maliosz, Csaba Simon
Function as a Service (FaaS) is the newest stage of application virtualization. Several public cloud providers offer FaaS solutions; the open source community has also embraced the technology. In this paper we introduce a Python-based function run-time, applicable in open source FaaS platforms for latency-sensitive, compute-intensive applications, that reduces maximum completion times by taking the number of CPU cores into account. We also present a simulator that estimates the distribution of completion times for compute-intensive functions when our proposed function run-time is in use. We present simulator results for two compute-intensive functions, and we also show a scenario in which the user function is not purely compute intensive.
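The core idea, that completion times of compute-intensive functions stretch once concurrent invocations outnumber the available CPU cores, can be sketched with a small Monte Carlo estimator. This is a hypothetical processor-sharing model for illustration, not the authors' simulator; all parameters below are invented.

```python
import random

def simulate_completion_times(n_invocations, n_cores, cpu_time_s, n_runs=1000, seed=42):
    """Estimate the completion-time distribution of compute-intensive
    functions under processor sharing: when more invocations than cores
    are active, every invocation slows down by n_invocations / n_cores."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_runs):
        # per-invocation CPU demand, jittered by +/-10% around the nominal work
        demands = [cpu_time_s * rng.uniform(0.9, 1.1) for _ in range(n_invocations)]
        slowdown = max(1.0, n_invocations / n_cores)
        samples.extend(d * slowdown for d in demands)
    return samples

# e.g. 8 concurrent invocations on 4 cores, each needing ~100 ms of CPU work
times = simulate_completion_times(8, 4, 0.100)
print(f"min={min(times):.3f}s max={max(times):.3f}s")
```

With twice as many invocations as cores, every sampled completion time lands at roughly double the raw CPU demand, which is exactly the effect a core-aware run-time tries to bound.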
Citations: 3
Enabling Delay-Sensitive IoT Application by Programmable Local 5G Edge
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657146
Koichiro Amemiya, A. Nakao
IoT services collect sensor and multimedia data from edge devices to capture the status of the physical world. Delay-sensitive traffic, especially traffic for monitoring and controlling the edge devices, should be transferred and processed with priority even if the network becomes congested because system resources are shared with data-intensive, delay-tolerant traffic. A local 5G system is promising for delay-sensitive IoT services because it enables the local 5G operator to control the programmable local 5G system and its service level directly. However, it is not easy to install novel congestion control protocols on the non-programmable system components outside the local 5G system, such as proprietary IoT devices and wide-area networks operated by network carriers. Our contribution is three-fold. First, we propose a traffic control method for delay-sensitive IoT services that is installed only in the 5G UPF or in edge routers in the data network (DN), without modifying IoT devices or controlling the wide-area network. It controls the latency of delay-sensitive traffic by classifying traffic as delay-sensitive or delay-tolerant, adding delays to delay-tolerant packets, and shrinking the receive window size carried in those packets. Second, we propose an implementation architecture for programmable whitebox switches utilizing BPF/XDP functionality. Finally, we evaluate our proposed method. The evaluation shows that it keeps the latency of delay-sensitive traffic within the required bound for single and multiple local 5G locations that share the opaque wide-area network.
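The per-packet policy of the first contribution (classify, add delay, clamp the receive window) can be illustrated with a toy user-space sketch. The port set and parameter values are invented for illustration; the paper implements this logic in the 5G UPF or edge-router data plane, not in Python.

```python
from dataclasses import dataclass

# hypothetical destination ports carrying delay-sensitive monitoring/control traffic
DELAY_SENSITIVE_PORTS = {5683, 8883}

@dataclass
class Packet:
    dst_port: int
    recv_window: int
    delay_ms: float = 0.0

def shape(packet, added_delay_ms=20.0, window_cap=4096):
    """Delay-sensitive traffic passes untouched; delay-tolerant traffic is
    delayed and its advertised TCP receive window is clamped so that the
    remote sender backs off without any protocol changes on the devices."""
    if packet.dst_port in DELAY_SENSITIVE_PORTS:
        return packet
    packet.delay_ms += added_delay_ms
    packet.recv_window = min(packet.recv_window, window_cap)
    return packet

p = shape(Packet(dst_port=80, recv_window=65535))
print(p.delay_ms, p.recv_window)  # 20.0 4096
```

Clamping the receive window throttles delay-tolerant senders end-to-end even though only the edge router is programmable, which is the key trick that avoids touching the IoT devices or the carrier network.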
Citations: 0
Profit-aware placement of multi-flavoured VNF chains
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657114
F. Paganelli, P. Cappanera, Antonio Brogi, Riccardo Falco
Network Function Virtualization (NFV) is a promising approach for network operators to cope with the increasing demand for network services in a flexible and cost-efficient way. How to place Virtualized Network Function (VNF) chains across the network infrastructure so as to achieve providers’ goals is a relevant research problem. Several emerging aspects, such as possible resource shortages at edge locations and the demand for accelerated infrastructure resources for high-performance deployments, make this problem even more challenging. In such cases, downgrading a service request to an alternative flavour (with less stringent resource requirements and/or fewer offered features) might help increase the acceptance rate and, to a certain extent, the network service provider’s profit. In this work we formalize the problem of placing network services specified as multi-flavoured VNF chains and present an Integer Linear Programming (ILP) approach for solving it optimally. Simulation results demonstrate the feasibility and potential benefit of the proposed approach in both online and offline placement scenarios, with profit improvements of up to 16% and 18%, respectively, with respect to the case where requests are specified in a single flavour.
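The flavour-downgrading trade-off can be made concrete with a tiny exhaustive version of the decision. The paper formulates the full chain-placement problem as an ILP; this sketch only picks one flavour (or rejection) per request on a single capacity-limited node, with invented numbers.

```python
from itertools import product

def best_assignment(requests, capacity):
    """Pick one (cpu, profit) flavour per request, or reject it, so that
    total CPU fits `capacity` and total profit is maximised. Exhaustive
    search; illustrative stand-in for the paper's ILP formulation."""
    best = (0, None)
    options = [req + [(0, 0)] for req in requests]  # (0, 0) means "reject"
    for choice in product(*options):
        cpu = sum(c for c, _ in choice)
        profit = sum(p for _, p in choice)
        if cpu <= capacity and profit > best[0]:
            best = (profit, choice)
    return best

# two requests, each offering a full flavour (4 CPUs, profit 10) and a
# downgraded flavour (2 CPUs, profit 6), placed on a node with 6 CPUs
profit, choice = best_assignment([[(4, 10), (2, 6)], [(4, 10), (2, 6)]], capacity=6)
print(profit)  # 16: one full plus one downgraded flavour beats one full alone
```

With only full flavours, the node could accept a single request (profit 10); allowing a downgrade raises both acceptance and profit, which is the effect the abstract quantifies.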
Citations: 1
Efficient Batch Scheduling of Large Numbers of Cloud Benchmarks
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657127
Derek Phanekham, Troy Walker, S. Nair, Mike Truty, Manasa Chalasani, Rick Jones
This paper proposes a framework for efficiently running benchmarks on one or multiple cloud environments. It is essential that users and businesses utilize benchmarks to understand the performance of their network or the cloud environment where they host their virtual network and machines. When performing a large number of benchmarks, we have found it necessary to construct a tool that allows us to automatically and efficiently schedule the benchmarks we need to run without exceeding our resource limits or interfering with other benchmarks. We have developed a benchmarking tool called PKB_scheduler that accepts benchmark configuration files, and will construct an optimal graph of virtual machine usage. PKB_scheduler will use this graph to dynamically schedule batches of benchmarks to run, seeking to reduce the total number of batches and thus the overall time of benchmark execution.
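A minimal stand-in for the scheduling problem is packing benchmarks into batches under a VM quota. The abstract does not describe PKB_scheduler's graph algorithm in detail, so the first-fit-decreasing sketch below is only illustrative, with invented benchmark names and VM counts.

```python
def schedule_batches(benchmarks, vm_quota):
    """First-fit-decreasing batching: each benchmark needs some number of
    VMs; pack benchmarks into as few sequential batches as possible so that
    no batch exceeds the cloud project's VM quota."""
    batches = []
    for name, vms in sorted(benchmarks.items(), key=lambda kv: -kv[1]):
        for batch in batches:
            if batch["vms"] + vms <= vm_quota:   # reuse an open batch if it fits
                batch["vms"] += vms
                batch["names"].append(name)
                break
        else:                                    # otherwise start a new batch
            batches.append({"vms": vms, "names": [name]})
    return batches

batches = schedule_batches({"iperf": 2, "netperf": 2, "ping": 1, "nccl": 8}, vm_quota=10)
print(len(batches))  # 2 batches instead of 4 sequential runs
```

Fewer batches means fewer rounds of VM provisioning and teardown, which is where the overall execution-time saving described above comes from.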
Citations: 2
Mitigation of DNS Water Torture Attacks within the Data Plane via XDP-Based Naive Bayes Classifiers
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657122
Nikos Kostopoulos, Stavros Korentis, D. Kalogeras, B. Maglaris
Water Torture is a DDoS attack vector that exhausts the processing resources of victim Authoritative DNS Servers. By crafting DNS requests for names that appear only once and are unknown to the victim, attackers bypass the DNS caches of intermediary Recursive DNS Servers (Resolvers), so the entire attack traffic is forwarded to the victim. As a countermeasure, machine learning algorithms have been proposed to filter attack traffic on Resolvers. Our proposed scheme uses programmable data plane methods to implement efficient machine learning algorithms that differentiate between legitimate and DDoS attack traffic within cloud infrastructures. Specifically, we leverage XDP to implement data plane Naive Bayes Classifier inference and effectively mitigate Water Torture attacks within data center Resolvers. DNS requests that the Naive Bayes Classifier regards as invalid are dropped within the Linux kernel before any resources are allocated to them, while valid ones are forwarded to user space to be resolved. Our scheme was assessed via a proof-of-concept setup in a virtualized environment, with learning and testing performed on legitimate and malicious DNS data records whose statistical properties are consistent with datasets widely reported in the literature. Our experiments mainly focused on evaluating the filtering throughput of the proposed mitigation scheme given the constraints imposed by XDP. We conclude that our XDP-based Naive Bayes Classifier significantly decreases the volume of attack traffic within the data plane, thus efficiently safeguarding Resolvers.
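The classification idea can be sketched in user space with a character-level Naive Bayes model over queried names: randomly generated attack labels have a very different character distribution from human-chosen ones. The training names, alphabet, and smoothing below are invented; the paper's classifier runs in-kernel via XDP.

```python
import math
from collections import Counter

ALPHABET = set("abcdefghijklmnopqrstuvwxyz0123456789-.")

def train(names):
    """Per-character likelihoods with Laplace smoothing."""
    counts = Counter(c for name in names for c in name)
    total = sum(counts.values())
    return {c: (counts[c] + 1) / (total + len(ALPHABET)) for c in ALPHABET}

def log_likelihood(model, name):
    return sum(math.log(model.get(c, 1e-9)) for c in name)

def is_valid(legit_model, attack_model, qname, prior_legit=0.5):
    """Naive Bayes decision: True if the queried label looks legitimate,
    False if it looks like random-subdomain (Water Torture) traffic."""
    s_legit = math.log(prior_legit) + log_likelihood(legit_model, qname)
    s_attack = math.log(1.0 - prior_legit) + log_likelihood(attack_model, qname)
    return s_legit >= s_attack

legit = train(["www", "mail", "shop", "login", "static", "images"])
attack = train(["xq7g2kz", "9f3jw1p", "zz8k4qv", "h2x9c7m"])
print(is_valid(legit, attack, "mail"), is_valid(legit, attack, "k9x2q7z"))
```

Because the decision reduces to summing precomputed per-character log scores, it maps naturally onto the fixed-loop, no-float-division constraints of an XDP program (e.g. via integer-scaled lookup tables).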
Citations: 1
Leveraging Partial Model Extractions using Uncertainty Quantification
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657130
Arne Aarts, Wil Michiels, Peter Roelse
Companies deploy deep learning models in the cloud and offer black-box access to them as a pay-as-you-go service. It has been shown that, with enough queries, those models can be extracted. This paper presents a new cloning scheme using uncertainty quantification that enables an adversary to leverage partial model extractions. First, a relatively small number of queries is spent to extract part of the target’s model. Second, for every query directed at the adversary, the uncertainty of the extracted model’s output is computed; when it is below a given threshold, the adversary returns that output. Otherwise, the query is delegated to the target’s model and its output returned. In this way the adversary is able to monetize knowledge that has already been extracted. We propose methods to determine thresholds such that the accuracy of the new scheme is close to the target network’s accuracy. The new scheme has been implemented, and experiments were conducted on the Caltech-256 and Indoor datasets using multiple uncertainty quantification methods. The results show that the rate of delegation decreases logarithmically with the initial number of queries spent on extraction. Compared to conventional cloning techniques, the main advantages of the new scheme are that the total cost in terms of queries to the target model can be lower while achieving the same accuracy, and that the accuracy of the new scheme can be brought arbitrarily close to the target model’s by selecting a suitable threshold value.
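The serve-or-delegate decision can be sketched with predictive entropy as the uncertainty measure. Entropy is only one of several uncertainty quantification methods the paper evaluates; the threshold and probability vectors below are illustrative.

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def answer(clone_probs, query_target, threshold=0.5):
    """Serve from the partially extracted clone when its prediction is
    confident (low entropy); otherwise delegate the query to the target
    model. `query_target` stands in for a (paid) call to the victim API."""
    if entropy(clone_probs) < threshold:
        return max(range(len(clone_probs)), key=clone_probs.__getitem__)
    return query_target()

confident = [0.95, 0.03, 0.02]   # entropy ~0.23 -> clone answers itself
uncertain = [0.40, 0.35, 0.25]   # entropy ~1.08 -> query is delegated
print(answer(confident, lambda: "delegated"))
print(answer(uncertain, lambda: "delegated"))
```

Every confidently answered query is one fewer billable call to the target, which is how a partial extraction starts paying for itself before the clone is complete.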
Citations: 1
Using Machine Learning and In-band Network Telemetry for Service Metrics Estimation
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657155
L. Almeida, R. Pasquini, F. Verdi
Data plane programmable devices used together with In-band Network Telemetry (INT) enable the collection of data about networks’ operation at a level of granularity never achieved before. Given that Machine Learning (ML) has been widely adopted in networking, the scenario investigated in this paper opens up the opportunity to advance the state of the art by applying this vast amount of data to the management of networks and of the services offered on top of them. This paper feeds ML algorithms with data piped directly from INT - essentially statistics associated with buffers at network devices’ interfaces - with the objective of estimating service metrics. The service running on our testbed is DASH (Dynamic Adaptive Streaming over HTTP), the most widely used protocol for video streaming today, which poses great challenges to our investigation since it automatically adapts video quality to oscillations in network conditions. By using well-established load patterns from the literature - sinusoid, flashcrowd, and a mix of both - we emulate oscillations in the network, i.e., realistic dynamics at all interface buffers, which are captured using INT capabilities. While estimating the quality of the video being streamed to our clients, we observed an NMAE (Normalized Mean Absolute Error) below 10% when Random Forest is used, which improves on current related work.
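The estimation task, mapping INT buffer statistics to a service-level metric, can be sketched with a stdlib k-nearest-neighbour regressor. The paper uses Random Forest; the feature tuples and bitrates below are made up to show the shape of the data, not measured values.

```python
def knn_estimate(train_rows, features, k=3):
    """Estimate a service metric (e.g. delivered DASH bitrate) from INT
    telemetry features by averaging the k most similar past observations.
    A stdlib stand-in for the Random Forest regressor used in the paper."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train_rows, key=lambda row: sqdist(row[0], features))[:k]
    return sum(metric for _, metric in nearest) / k

# (queue depth in packets, link utilisation) -> observed video bitrate in kbit/s
telemetry = [((5, 0.20), 4300), ((8, 0.30), 4300), ((40, 0.80), 1200),
             ((55, 0.90), 700), ((12, 0.40), 3000), ((48, 0.85), 900)]
print(knn_estimate(telemetry, (50, 0.87)))  # deep queues -> low predicted bitrate
```

The point of the pipeline is that the label (video quality) lives at the application layer while the features come straight from switch buffers, so the operator can estimate service quality without instrumenting the clients.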
Citations: 3
DFaaS: Decentralized Function-as-a-Service for Federated Edge Computing
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657141
M. Ciavotta, Davide Motterlini, Marco Savi, Alessandro Tundo
Edge Computing pushes cloud capabilities to the edge of the network, closer to the users, to address stringent Quality-of-Service requirements and ensure more efficient bandwidth usage. Function-as-a-Service appears to be the most natural service model for enhancing the deployment and responsiveness of Edge Computing applications. Unfortunately, the conventional FaaS model does not fit well in distributed and heterogeneous edge environments, where traffic demands arrive at (and are served by) edge nodes that may become overloaded under certain traffic conditions, or where the access points of the network may change frequently, as with mobile applications. This short paper tries to fill this gap by proposing DFaaS, a novel decentralized FaaS-based architecture designed to autonomously balance traffic load across edge nodes belonging to federated Edge Computing ecosystems. The DFaaS implementation relies on an overlay peer-to-peer network and a distributed control plane that takes decisions on load redistribution. Although preliminary, results confirm the feasibility of the approach, showing that the system can transparently redistribute load across edge nodes when they become overloaded.
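One simple load-redistribution decision an overloaded node could take is to spread its excess invocations over federated peers in proportion to their spare capacity. This is a toy policy with invented node names and loads; the abstract does not specify DFaaS's actual control-plane algorithm.

```python
def forwarding_weights(peer_loads, capacity):
    """Compute the fraction of excess requests to forward to each peer,
    proportional to that peer's spare capacity. Peers at or above
    capacity receive nothing; if no peer has headroom, forward nothing."""
    spare = {node: max(0, capacity - load) for node, load in peer_loads.items()}
    total = sum(spare.values())
    if total == 0:
        return {node: 0.0 for node in peer_loads}
    return {node: s / total for node, s in spare.items()}

peers = {"edge-b": 30, "edge-c": 70, "edge-d": 90}  # current load per peer
print(forwarding_weights(peers, capacity=100))       # edge-b absorbs most traffic
```

Because each node computes this from peer state gossiped over the overlay, no central scheduler is needed, which matches the decentralized design described above.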
Citations: 15
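The load-balancing idea in the DFaaS abstract above — a federated edge node serving what it can locally and forwarding the excess to a less-loaded peer — can be illustrated with a minimal toy sketch. Everything below (the `EdgeNode` class, the capacity model, the least-loaded-peer rule) is an assumed simplification for illustration, not the paper's implementation, which relies on a peer-to-peer overlay and a distributed control plane.

```python
# Toy sketch of DFaaS-style load redistribution across federated edge nodes.
# All names and the balancing rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    name: str
    capacity: int                 # max invocations this node can serve itself
    load: int = 0                 # invocations currently assigned
    peers: list = field(default_factory=list)

    def receive(self, requests: int) -> dict:
        """Serve locally up to capacity; forward the excess to the
        least-loaded federated peer. Returns {node_name: served_count}."""
        served = {}
        local = min(requests, self.capacity - self.load)
        self.load += local
        served[self.name] = local
        excess = requests - local
        if excess > 0 and self.peers:
            # Pick the peer with the lowest relative load.
            target = min(self.peers, key=lambda p: p.load / p.capacity)
            target.load += excess
            served[target.name] = excess
        return served

a = EdgeNode("edge-a", capacity=10)
b = EdgeNode("edge-b", capacity=10)
a.peers = [b]
print(a.receive(14))   # edge-a serves 10 locally, forwards 4 to edge-b
```

A real system would of course propagate load information asynchronously over the overlay rather than reading peer state directly; the sketch only shows the redistribution decision itself.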
An Edge Video Analysis Solution For Intelligent Real-Time Video Surveillance Systems
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657113
Alessandro Silva, Michel S. Bonfim, P. Rego
Video Analytics has played an essential role in a wide range of public safety sectors, mainly when applied to Intelligent Video Surveillance Systems. In this scenario, Edge Video Analytics seeks to migrate part of the workload of the video analysis process to devices close to the data source, reducing transmission overhead on the network and overall latency. Therefore, this work proposes an Edge Video Analytics architecture for real-time video monitoring systems. The architecture divides the analysis process into functional, independent modules and is flexible enough to support analytics or network functions. We developed a proof of concept to validate the proposed architecture, focusing on detecting and recognizing license plate characters at the edge. In this scenario, between the detection and recognition modules, we used deep learning to implement a module responsible for discarding plates with distorted identification text to reduce network utilization. The experiments conducted demonstrate that the architecture meets its objectives, reducing network traffic by an average of 25.64% in the frame flow thanks to resolution quality control, and by 56.65% in the license plate flow thanks to the proposed filtering step.
{"title":"An Edge Video Analysis Solution For Intelligent Real-Time Video Surveillance Systems","authors":"Alessandro Silva, Michel S. Bonfim, P. Rego","doi":"10.1109/CloudNet53349.2021.9657113","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657113","url":null,"abstract":"Video Analytics has played an essential role in the most varied public safety sectors, mainly when applied to Intelligent Video Surveillance Systems. In this scenario, Edge Video Analytics seeks to migrate part of the workload of the Video Analysis process to devices close to the data source to reduce transmission overhead on the network and overall latency. Therefore, this work proposes an Edge Video Analytics architecture for real-time video monitoring systems. Such architecture divides the analysis process into functional and independent modules, being flexible to support analytics or network functions. We developed a proof of concept to validate the proposed architecture, focusing on detecting and recognizing license plate characters in the edge. In this scenario, between the detection and recognition modules, we used Deep Learning to implement a module responsible for discard plates with distorted identification text to reduce network utilization. 
Conducted experiments demonstrate that the architecture meets its objectives by reducing an average of 25.64% of the network traffic in its frames flow due to a resolution quality control and 56.65% in its license plate flow due to the filtering step proposed.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121681473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
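The edge-side filtering step described in the abstract above — discarding plates whose identification text is too distorted to recognize before they are uploaded — can be sketched with a toy readability score standing in for the paper's deep-learning classifier. All names (`readability_score`, `filter_plates`), the variance-based score, and the threshold are illustrative assumptions, not the paper's method.

```python
# Toy sketch of edge-side plate filtering: drop crops judged unreadable so
# only recognizable plates are sent upstream, saving network traffic.

def readability_score(pixels):
    """Crude stand-in for the paper's deep-learning filter: pixel variance
    of the cropped plate. Low variance ~ washed-out / distorted text."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def filter_plates(detections, threshold=100.0):
    """Keep readable detections; also report the fraction dropped,
    i.e. the share of plate-crop traffic saved at the edge."""
    kept = [d for d in detections if readability_score(d["crop"]) >= threshold]
    saved = 1 - len(kept) / len(detections) if detections else 0.0
    return kept, saved

detections = [
    {"plate": "ABC1234", "crop": [10, 200, 30, 220, 15, 210]},    # high contrast
    {"plate": "??????",  "crop": [120, 122, 121, 119, 120, 121]}, # distorted
]
kept, saved = filter_plates(detections)
print(len(kept), f"{saved:.0%} of plate crops dropped before upload")
```

The paper reports a 56.65% reduction in the license plate flow from its real filter; the sketch only shows where in the pipeline such a filter sits and how the saving would be measured.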
A Global Orchestration Matching Framework for Energy-Efficient Multi-Access Edge Computing
Pub Date : 2021-11-08 DOI: 10.1109/CloudNet53349.2021.9657120
Tobias Mahn, Anja Klein
Multi-access edge computing (MEC) enables mobile units (MUs) to offload computation tasks to nearby edge servers. This translates into energy savings for the MUs, but creates a joint problem of offloading decision making and allocation of the shared communication and computation resources. In a MEC scenario with multiple MUs, multiple access points, and multiple cloudlets, the complexity of this joint problem grows rapidly with the number of entities in the network. The complexity increases even further when some MUs have a higher incentive to offload tasks due to a low battery level and are willing to pay in exchange for more resources. Our proposed energy-minimization approach with a flexible maximum offloading time constraint is based on matching theory. A global orchestrator (GO) collects all the system state information and coordinates the offloading preferences of the MUs. An MU can lower its maximum time constraint by a payment, and the GO allocates the shared communication and computation resources accordingly to satisfy the time constraint. The computation load of the algorithm at each MU is reduced to a minimum, as each MU only has to take a simple offloading decision based on its task properties and payment willingness. In numerical simulations, the proposed matching approach and flexible resource allocation scheme show fast and reliable convergence, even in large networks with hundreds of MUs.
{"title":"A Global Orchestration Matching Framework for Energy-Efficient Multi-Access Edge Computing","authors":"Tobias Mahn, Anja Klein","doi":"10.1109/CloudNet53349.2021.9657120","DOIUrl":"https://doi.org/10.1109/CloudNet53349.2021.9657120","url":null,"abstract":"Multi-access edge computing (MEC) enables mobile units (MUs) to offload computation tasks to edge servers nearby. This translates in energy savings for the MUs, but creates a joint problem of offloading decision making and allocation of the shared communication and computation resources. In a MEC scenario with multiple MUs, multiple access points and multiple cloudlets the complexity of this joint problem grows rapidly with the number of entities in the network. The complexity increases even further when some MUs have a higher incentive to offload tasks due to a low battery level and are willing to pay in exchange for more resources. Our proposed energy-minimization approach with a flexible maximum offloading time constraint is based on matching theory. A global orchestrator (GO) collects all the system state information and coordinates the offloading preferences of the MUs. A MU can lower the maximum time constraint by a payment. The GO allocates the shared communication and computation resources accordingly to satisfy the time constraint. The computation load of the algorithm at each MU is reduced to a minimum as each MU only has to take a simple offloading decision based on its task properties and payment willingness. In numerical simulations, the proposed matching approach and flexible resource allocation scheme is tested for fast and reliable convergence, even in large networks with hundreds of MUs. 
Furthermore, the matching algorithm, tested with different resource allocation strategies, shows a significant improvement in terms of energy-efficiency over the considered reference schemes.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129311672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
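The coordination the GO performs in the abstract above can be illustrated with a simplified greedy assignment: MUs that pay for a tighter deadline are served first, and each MU is placed on its lowest-energy cloudlet that still has capacity. This is an illustrative stand-in, not the paper's matching-theoretic algorithm or system model; all names, the priority rule, and the numbers are assumptions.

```python
# Toy sketch of GO-style offloading coordination: paying MUs get priority,
# each MU is assigned to its cheapest (lowest-energy) cloudlet with room.

def match_offloading(mus, cloudlets):
    """mus: {name: {"energy": {cloudlet: joules}, "cycles": c, "payment": p}}
    cloudlets: {name: capacity_in_cycles}. Returns {mu: cloudlet}."""
    free = dict(cloudlets)
    assignment = {}
    # Higher payment ~ tighter deadline ~ matched first.
    for mu in sorted(mus, key=lambda m: -mus[m]["payment"]):
        # Try cloudlets in order of the MU's offloading energy cost.
        for cl in sorted(mus[mu]["energy"], key=mus[mu]["energy"].get):
            if free[cl] >= mus[mu]["cycles"]:
                free[cl] -= mus[mu]["cycles"]
                assignment[mu] = cl
                break
    return assignment

mus = {
    "mu1": {"energy": {"c1": 1.0, "c2": 2.0}, "cycles": 6, "payment": 5},
    "mu2": {"energy": {"c1": 1.2, "c2": 1.5}, "cycles": 6, "payment": 0},
}
print(match_offloading(mus, {"c1": 6, "c2": 6}))  # mu1 takes c1, mu2 falls back to c2
```

Unlike this one-shot greedy pass, a matching-theoretic scheme would iterate proposals and rejections until a stable matching is reached; the sketch only conveys the priority-by-payment and energy-preference structure.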
Journal
2021 IEEE 10th International Conference on Cloud Networking (CloudNet)