
Latest publications from the 2023 IEEE 9th International Conference on Network Softwarization (NetSoft)

Towards Digital Network Twins: Can we Machine Learn Network Function Behaviors?
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175422
Razvan-Mihai Ursu, Johannes Zerwas, Patrick Krämer, Navidreza Asadi, Phil Rodgers, Leon Wong, W. Kellerer
Cluster orchestrators such as Kubernetes (K8s) provide many knobs that cloud administrators can tune to configure their system. However, different configurations lead to different levels of performance, which additionally depend on the application. Hence, finding the best configuration for a given system can be a difficult task. A particularly innovative approach to evaluating configurations and optimizing desired performance metrics is the use of Digital Twins (DT). To achieve good results in a short time, the models of the cloud network functions underlying the DT must be minimally complex yet highly accurate. Developing such models requires detailed knowledge of the system components and their interactions. We believe that a data-driven paradigm can capture the actual behavior of a network function (NF) deployed in the cluster while decoupling it from internal feedback loops. In this paper, we analyze the HTTP load-balancing function as an example of an NF and explore the data-driven paradigm to learn its behavior in a K8s cluster deployment. We develop, implement, and evaluate two approaches to learning the behavior of a state-of-the-art load balancer and show that machine learning has the potential to enhance the way we model NF behaviors.
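The paper's models and features are not reproduced here, but the data-driven idea can be illustrated with a minimal sketch: log per-request features alongside the backend the load balancer actually chose, then fit a supervised model to imitate that choice. The synthetic features, labels, and random-forest model below are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's method): learn a load balancer's
# backend choice from logged per-request features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical logged features: per-backend load metrics plus request size.
X = rng.random((n, 4))
# Stand-in label: index of the backend the real load balancer selected
# (here synthesized as "least loaded of the first three columns").
y = np.argmin(X[:, :3], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")
```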
Citations: 0
A Multi-Hop-Aware User To Edge-Server Association Game
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175406
Youcef Kardjadja, Alan Tsang, M. Ibnkahla, Y. Ghamri-Doudane
Nowadays, services and applications are becoming more latency-sensitive and resource-hungry. Due to their high computational complexity, they cannot always be processed locally on user equipment and have to be offloaded to a distant, powerful server. Instead of resorting to remote cloud servers with high latency and traffic bottlenecks, service providers could map their users to Multi-Access Edge Computing (MEC) servers that can run computation-intensive tasks nearby. This mapping of users to distributed MEC servers is known as the Edge User Allocation (EUA) problem and has been widely studied in the literature from the perspective of service providers. However, in previous works a user can only be allocated to a server whose coverage they are in. In reality, it may be optimal to allocate a user to a more distant server (e.g., two hops away from the user) as long as the latency threshold and system cost are both respected. This work presents the first attempt to tackle the multi-hop-aware EUA problem. We consider the static EUA problem, where users arrive in a simultaneous batch, and detail the added complexity compared to the original EUA setting. We then propose a game theory-based distributed approach for allocating users to edge servers. Finally, we conduct a series of experiments to evaluate the performance of our approach against other baseline approaches. The results illustrate the potential benefit of allowing multi-hop allocations: a better overall system cost for service providers.
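The paper's exact game formulation is not given in the abstract; as a rough illustration of a distributed, game-theoretic allocation, the sketch below runs best-response dynamics in which each user repeatedly switches to the feasible server (within a latency threshold over multi-hop paths) that minimizes an assumed latency-plus-congestion cost, until no user wants to deviate. All parameters and the cost function are assumptions.

```python
# Illustrative best-response sketch for multi-hop-aware user-to-edge-server
# allocation (not the paper's exact game).
import numpy as np

rng = np.random.default_rng(1)
n_users, n_servers = 20, 4
hop_latency = 5.0 * rng.integers(1, 4, size=(n_users, n_servers))  # 1-3 hops
capacity = np.array([6, 6, 6, 6])
LATENCY_MAX = 15.0            # allocations beyond this latency are infeasible

alloc = np.full(n_users, -1)  # -1 = not yet allocated

def cost(u, s, load, current):
    # Joining another server adds this user to its load; staying does not.
    eff = load[s] + (0 if s == current else 1)
    return hop_latency[u, s] + 2.0 * eff

changed = True
while changed:                # best-response dynamics until no user deviates
    changed = False
    for u in range(n_users):
        load = np.bincount(alloc[alloc >= 0], minlength=n_servers)
        feasible = [s for s in range(n_servers)
                    if hop_latency[u, s] <= LATENCY_MAX
                    and (load[s] < capacity[s] or s == alloc[u])]
        if not feasible:
            continue
        best = min(feasible, key=lambda s: cost(u, s, load, alloc[u]))
        if best != alloc[u]:
            alloc[u], changed = best, True

print("user -> server:", alloc)
```

Because the assumed cost is a common congestion term plus a player-specific latency constant, each improving move strictly decreases an exact potential, so the loop terminates at a pure Nash equilibrium.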
Citations: 0
Distributed Timeslot Allocation in mMTC Network by Magnitude-Sensitive Bayesian Attractor Model
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175490
Tatsuya Otoshi, Masayuki Murata, H. Shimonishi, T. Shimokawa
In 5G, flexible resource management, mainly by base stations, will enable support for a variety of use cases. However, when a large number of devices exist, as in mMTC, the devices need to allocate resources appropriately in an autonomous, decentralized manner. In this paper, autonomous decentralized timeslot allocation is achieved by equipping each device with a decision model. As the decision model, we propose an extension of the Bayesian Attractor Model (BAM) using Bayesian estimation. The proposed model incorporates a feature of human decision-making called magnitude sensitivity, whereby the time to decision varies with the sum of the values of all alternatives. This naturally yields the desired behavior: deciding quickly when a timeslot is available and waiting otherwise. Simulation-based evaluations show that the proposed method avoids timeslot conflicts during congestion more effectively than conventional Q-learning-based timeslot selection.
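The BAM extension itself is not given in the abstract; the sketch below only illustrates the magnitude-sensitive idea with a much simpler Bayesian evidence-accumulation loop, in which a device commits to a slot once its posterior confidence crosses a threshold that shrinks as the total value of the alternatives grows. Every constant and the observation model are assumptions.

```python
# Loose illustration of magnitude-sensitive Bayesian decision-making for
# timeslot selection (far simpler than the paper's Bayesian Attractor Model).
import numpy as np

rng = np.random.default_rng(2)
true_quality = np.array([0.2, 0.5, 0.9])  # hidden value of each timeslot
templates = np.eye(3)                      # hypothesis i: "slot i is best"
sigma = 0.7                                # observation noise
log_post = np.zeros(3)                     # uniform prior over hypotheses

# Magnitude sensitivity: the larger the summed value of the alternatives,
# the lower the confidence needed to commit, hence the faster the decision.
conf_needed = 0.99 / (1.0 + 0.1 * true_quality.sum())

step = 0
for step in range(1, 201):
    obs = true_quality + rng.normal(0.0, sigma, size=3)
    # Gaussian log-likelihood of the observation under each hypothesis.
    log_post += -0.5 * np.sum((obs - templates) ** 2, axis=1) / sigma**2
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    if post.max() > conf_needed:           # commit to a slot...
        break                              # ...otherwise keep waiting

print(f"chose slot {post.argmax()} with p={post.max():.2f} after {step} steps")
```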
Citations: 0
Flow classification for network security using P4-based Programmable Data Plane switches
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175420
Aniswar S. Krishnan, K. Sivalingam, Gauravdeep Shami, M. Lyonnais, Rodney G. Wilson
This paper deals with programmable data-plane switches that perform flow classification using machine learning (ML) algorithms. It describes an implementation-based study of an existing ML-based packet-marking scheme called FlowLens. The core algorithm, written in the P4 language, generates features, called flow markers, while processing packets. These flow markers are a compact encoding of the packet-length distribution of a particular flow. In addition, a controller responsible for configuring the switch, periodically extracting the features, and applying machine learning algorithms for flow classification is implemented in Python. The generation of flow markers is evaluated using flows in a tree-based topology emulated in Mininet with the P4-enabled BMv2 software switch. Classification is performed to detect two different types of network attacks: Active Wiretap and Mirai Botnet. In both cases, we obtain a 30-fold reduction in memory footprint with no loss in accuracy, demonstrating the potential of running P4-based ML algorithms in packet switches.
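FlowLens's P4 code is not shown in the abstract; as a controller-side illustration of what a flow marker computes, the sketch below builds a coarsely quantized packet-length histogram per flow. The bin width, bin count, and flow key are assumptions for this example.

```python
# Sketch of a FlowLens-style "flow marker": a coarsely quantized packet-length
# histogram per flow, cheap enough for a data-plane/controller split.
from collections import defaultdict

BIN_WIDTH = 64        # quantize packet lengths into 64-byte buckets
N_BINS = 24           # lengths >= 1536 bytes fall into the last bucket

def update_marker(markers, flow_id, pkt_len):
    """Controller-side equivalent of the per-packet P4 update."""
    b = min(pkt_len // BIN_WIDTH, N_BINS - 1)
    markers[flow_id][b] += 1

markers = defaultdict(lambda: [0] * N_BINS)
trace = [("10.0.0.1:443->10.0.0.2:51000", l) for l in (60, 1500, 590, 60, 60)]
for flow, length in trace:
    update_marker(markers, flow, length)

# The resulting fixed-size vectors feed a flow classifier (attack vs. benign)
# just as raw features would, at a fraction of the memory cost.
print(markers[trace[0][0]])
```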
Citations: 0
Dynamic Machine Learning Algorithm Selection For Network Slicing in Beyond 5G Networks
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175443
Abdelmounaim Bouroudi, A. Outtagarts, Y. H. Aoul
The advanced 5G and 6G mobile network generations offer new capabilities that enable the creation of multiple virtual network instances with distinct and stringent requirements. However, the coexistence of multiple network functions on top of a shared substrate network poses a resource allocation challenge known as the Virtual Network Embedding (VNE) problem. In recent years, this NP-hard problem has received increasing attention in the literature due to the growing need to optimize resources at the edge of the network, where computational and storage capabilities are limited. In this demo paper, we propose a solution to this problem that utilizes the Algorithm Selection (AS) paradigm: it selects the best-performing Deep Reinforcement Learning (DRL) algorithm from a portfolio of agents, in an offline manner, based on past performance. To evaluate our solution, we developed a simulation platform using the OMNeT++ framework, with an orchestration module containerized using Docker. The proposed solution shows good performance and outperforms standalone algorithms.
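The selection logic is described only at a high level; a minimal offline algorithm-selection sketch could look like the following, where the agent with the best recorded score on the most similar past context is chosen. The context features, agent names, and scores are all invented for illustration.

```python
# Minimal sketch of offline algorithm selection over a portfolio of DRL
# agents: pick the agent that scored best on the most similar past context.
import numpy as np

# (context features, per-agent scores) gathered from past slicing episodes.
history = [
    (np.array([0.2, 0.8]), {"dqn": 0.61, "ppo": 0.74, "a2c": 0.58}),
    (np.array([0.9, 0.1]), {"dqn": 0.82, "ppo": 0.65, "a2c": 0.70}),
    (np.array([0.5, 0.5]), {"dqn": 0.69, "ppo": 0.71, "a2c": 0.73}),
]

def select_agent(context):
    # Nearest-neighbour lookup over recorded contexts, then argmax on score.
    nearest = min(history, key=lambda h: np.linalg.norm(h[0] - context))
    scores = nearest[1]
    return max(scores, key=scores.get)

print(select_agent(np.array([0.8, 0.2])))  # -> "dqn"
```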
Citations: 0
Precise Turbo Frequency Tuning and Shared Resource Optimisation for Energy-Efficient Cloud Native Workloads
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175455
P. Veitch, Chris MacNamara, John J. Browne
As an increasing number of software-oriented telecoms workloads run as Containerised Network Functions (CNFs) on cloud-native virtualised infrastructure, performance tuning is vital. When compute infrastructure is distributed towards the edge of the network, efficient use of scarce resources is key: the available resources must be fine-tuned to achieve deterministic performance. Another vital factor is the energy consumption of such compute, which must be carefully managed. The latest generation of Intel x86 servers offers a new capability called Speed Select Technology Turbo Frequency (SST-TF), enabling more targeted allocation of turbo frequency settings to specific CPU cores. This has significant potential in the multi-tenant edge compute environments increasingly seen in 5G deployments and is likely to be a key building block for 6G. This paper evaluates the application of SST-TF to competing CNFs, a mix of high- and low-priority workloads, in a multi-tenant edge compute scenario. The targeted application of SST-TF is shown to yield performance benefits of up to 35% compared to the legacy turbo frequency capability of earlier processor generations and, when combined with other intelligent resource-management tooling, can also achieve a net reduction in server power consumption of 1.7%.
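The paper's tuning workflow is not reproduced here; the sketch below only illustrates the planning step of splitting CNF cores into high- and low-priority turbo groups. The CNF names and the latency-sensitivity rule are assumptions; on real hardware the resulting groups would be applied through platform tooling such as the Linux intel-speed-select utility, whose exact invocation varies by platform and kernel version.

```python
# Conceptual sketch only: decide which CNF cores should be enrolled in the
# high-priority turbo-frequency group. The CNF specs below are hypothetical.
HIGH, LOW = "high-priority", "low-priority"

cnfs = {
    "upf-dataplane": {"cores": [2, 3], "latency_sensitive": True},
    "amf-control":   {"cores": [4],    "latency_sensitive": False},
    "logging":       {"cores": [5],    "latency_sensitive": False},
}

def sst_tf_plan(cnfs):
    """Return the core sets for the turbo-frequency high/low groups."""
    plan = {HIGH: [], LOW: []}
    for name, spec in cnfs.items():
        bucket = HIGH if spec["latency_sensitive"] else LOW
        plan[bucket].extend(spec["cores"])
    return plan

print(sst_tf_plan(cnfs))
# {'high-priority': [2, 3], 'low-priority': [4, 5]}
```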
Citations: 0
Intelligent Service Provisioning in Fog Computing
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175416
Gaetano Francesco Pittalà, W. Cerroni
Fog computing is a distributed paradigm that extends cloud computing towards the edge of the network, and even beyond it. By employing local resources, it enables quicker and more effective data processing and analysis. Applying machine learning to fog computing orchestration makes it possible to optimize and automate resource allocation, data processing, and job scheduling in the fog environment. When working with network computing models, it is also important to consider the XaaS paradigm, as it promotes the flexibility and scalability of fog services, bringing the concept of "service" into the foreground. Hence the need for a fog orchestrator with these characteristics, one that leverages AI and a service-centric approach to improve how users consume services. The design and development of such an orchestrator is the objective of the early-stage PhD project presented in this paper.
Citations: 0
DRL-based Service Migration for MEC Cloud-Native 5G and beyond Networks
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175417
Theodoros Tsourdinis, N. Makris, S. Fdida, T. Korakis
Multi-access Edge Computing (MEC) is considered one of the most prominent enablers of low-latency access to services provided over the telecommunications network. Nevertheless, client mobility, as well as external factors that affect the communication channel, can severely degrade user-perceived latency. Such degradation can be averted by migrating services to other edges as the end user's base-station association changes while moving within the serviced region. In this work, we start from an entirely virtualized, cloud-native 5G network based on the OpenAirInterface platform and develop an architecture that provides seamless live migration of edge services. On top of this infrastructure, we employ a Deep Reinforcement Learning (DRL) approach that proactively relocates services to new edges, based on the user's multi-cell latency measurements and the workload status of the servers. We evaluate our scheme in a testbed setup by emulating mobility using realistic mobility patterns and workloads from real-world clusters. Our results show that the scheme can sustain low latency for end users based on their mobility within the serviced region.
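The paper's DRL design is not detailed in the abstract; the toy sketch below casts the stay-or-migrate decision as tabular Q-learning over a minimal MDP in which the state is the serving edge plus the user's nearest edge, and the reward trades latency against a migration cost. All numbers are assumptions.

```python
# Toy Q-learning sketch of the service-migration decision (illustrative,
# not the paper's DRL agent or reward design).
import numpy as np

rng = np.random.default_rng(3)
N_EDGES, MIGRATION_COST = 3, 2.0
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = np.zeros((N_EDGES, N_EDGES, 2))  # Q[serving, nearest, action]; 1 = migrate

def latency(serving, nearest):
    return 1.0 if serving == nearest else 5.0  # remote edges cost extra hops

serving, nearest = 0, 0
for _ in range(20000):
    a = rng.integers(2) if rng.random() < EPS else int(Q[serving, nearest].argmax())
    nxt_serving = nearest if a == 1 else serving
    reward = -latency(nxt_serving, nearest) - (MIGRATION_COST if a == 1 else 0.0)
    nxt_nearest = int(rng.integers(N_EDGES))   # user moves between cells
    Q[serving, nearest, a] += ALPHA * (
        reward + GAMMA * Q[nxt_serving, nxt_nearest].max()
        - Q[serving, nearest, a])
    serving, nearest = nxt_serving, nxt_nearest

print("learned policy (1 = migrate):")
print(Q.argmax(axis=2))
```

With these numbers the agent learns to migrate whenever the serving edge differs from the user's nearest edge, since the recurring latency penalty outweighs the one-off migration cost.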
Citations: 0
Chatbot-based Feedback for Dynamically Generated Workflows in Docker Networks
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175429
Andrzej Jasinski, Yuansong Qiao, Enda Fallon, R. Flynn
This paper presents an implementation of a feedback mechanism for a workflow management framework. A chatbot that uses natural language processing (NLP) is central to the proposed feedback mechanism. NLP is used to transform text-based plain language input, both human-written and machine-generated, into a form that the framework can use to generate a workflow for execution in an environment of interest. The example environment described here is containerized network management, in which the workflow management framework, using feedback, can detect anomalies and mitigate potential incidents.
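As a rough illustration of the text-to-workflow step (the framework's actual NLP pipeline is not shown in the abstract), the sketch below maps plain-language messages to container-management actions with simple keyword rules. The patterns, the command templates, and the mgmt-net network name are assumptions invented for this example.

```python
# Minimal sketch: turn a plain-language message (human- or machine-generated)
# into a concrete container-management action. Keyword rules stand in for
# the framework's real NLP pipeline.
import re

ACTIONS = {
    r"\b(restart|bounce)\b.*\bcontainer\b\s+(\S+)": "docker restart {0}",
    r"\b(show|get)\b.*\blogs?\b.*\bcontainer\b\s+(\S+)": "docker logs --tail 50 {0}",
    r"\bisolate\b.*\bcontainer\b\s+(\S+)": "docker network disconnect mgmt-net {0}",
}

def to_workflow_step(message):
    for pattern, template in ACTIONS.items():
        m = re.search(pattern, message, flags=re.IGNORECASE)
        if m:
            return template.format(m.group(m.lastindex))
    return None  # no match: fall back to asking the chatbot for clarification

print(to_workflow_step("Please restart container web-frontend-1"))
# -> docker restart web-frontend-1
```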
Citations: 0
AppleSeed: Intent-Based Multi-Domain Infrastructure Management via Few-Shot Learning
Pub Date : 2023-06-19 DOI: 10.1109/NetSoft57336.2023.10175410
Jieyu Lin, Kristina Dzeparoska, A. Tizghadam, A. Leon-Garcia
Managing complex infrastructures in multi-domain settings is time-consuming and error-prone. Intent-based infrastructure management simplifies this by allowing users to specify intents, i.e., high-level statements in natural language, that are automatically realized by the system. However, providing intent-based multi-domain infrastructure management poses a number of challenges: 1) intent translation; 2) plan execution and parallelization; 3) incompatible cross-domain abstractions. To tackle these challenges, we propose AppleSeed, an intent-based infrastructure management system that enables an end-to-end intent-to-deployment pipeline. AppleSeed uses few-shot learning to teach a Large Language Model (LLM) to translate intents into intermediate programs, which are processed by a just-in-time compiler and a materialization module to automatically generate parallelizable, domain-specific executable programs. We evaluate the system in two use cases: Deep Packet Inspection (DPI), and machine learning training and inference. Our system achieves efficient intent translation into an execution plan, generating on average 22.3 lines of code per intent word. With our JIT compilation and parallelized execution, it also speeds up execution of the management plan by 1.7-2.6x compared to sequential execution.
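AppleSeed's prompt format and intermediate language are not given in the abstract; the sketch below illustrates the general few-shot pattern of prepending curated (intent, program) pairs so that an LLM completes the translation for a new intent. The example pairs and the pseudo-program syntax are invented for illustration, and the LLM call itself is left abstract.

```python
# Sketch of few-shot intent-to-program prompting. The (intent, program)
# pairs and the program syntax are hypothetical placeholders.
FEW_SHOT = [
    ("deploy a DPI service on domain A and mirror traffic to it",
     "svc = domainA.deploy('dpi'); domainA.mirror(dst=svc)"),
    ("train the model on domain B GPUs, then serve it on domain A",
     "m = domainB.train(gpus=True); domainA.serve(m)"),
]

def build_prompt(intent):
    parts = ["Translate operator intents into intermediate programs.\n"]
    for text, program in FEW_SHOT:
        parts.append(f"Intent: {text}\nProgram: {program}\n")
    parts.append(f"Intent: {intent}\nProgram:")
    return "\n".join(parts)

prompt = build_prompt("run inference for tenant-1 at the nearest edge domain")
print(prompt)  # send this to the LLM; its completion is the candidate program
```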
Citations: 0