Towards Digital Network Twins: Can we Machine Learn Network Function Behaviors?
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175422
Razvan-Mihai Ursu, Johannes Zerwas, Patrick Krämer, Navidreza Asadi, Phil Rodgers, Leon Wong, W. Kellerer
Cluster orchestrators such as Kubernetes (K8s) provide many knobs that cloud administrators can tune to configure their system. However, different configurations lead to different levels of performance, which additionally depend on the application. Hence, finding the best configuration for a given system can be a difficult task. A particularly innovative approach to evaluating configurations and optimizing desired performance metrics is the use of Digital Twins (DT). To achieve good results in a short time, the models of the cloud network functions underlying the DT must be minimally complex but highly accurate. Developing such models requires detailed knowledge about the system components and their interactions. We believe that a data-driven paradigm can capture the actual behavior of a network function (NF) deployed in the cluster while decoupling it from internal feedback loops. In this paper, we analyze the HTTP load balancing function as an example of an NF and explore the data-driven paradigm to learn its behavior in a K8s cluster deployment. We develop, implement, and evaluate two approaches to learn the behavior of a state-of-the-art load balancer and show that machine learning has the potential to enhance the way we model NF behaviors.
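As a rough illustration of the data-driven paradigm described above, the sketch below fits a regression model that maps observed traffic features of a load balancer to a behavioral metric; the features, target, and synthetic data are purely hypothetical stand-ins, not the paper's actual pipeline or dataset.

```python
# Minimal sketch (not the paper's actual model): learn a load balancer's
# observable behavior as a regression from traffic features to a per-backend
# metric, using scikit-learn. Feature and target definitions are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features collected from a K8s cluster: offered request rate,
# number of backend pods, and mean response size.
X = np.column_stack([
    rng.uniform(100, 5000, n),   # requests per second
    rng.integers(2, 10, n),      # backend pod count
    rng.uniform(1, 64, n),       # mean response size (KB)
])
# Hypothetical target: share of traffic observed at one backend pod.
y = 1.0 / X[:, 1] + rng.normal(0, 0.01, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```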
{"title":"Towards Digital Network Twins: Can we Machine Learn Network Function Behaviors?","authors":"Razvan-Mihai Ursu, Johannes Zerwas, Patrick Krämer, Navidreza Asadi, Phil Rodgers, Leon Wong, W. Kellerer","doi":"10.1109/NetSoft57336.2023.10175422","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175422","url":null,"abstract":"Cluster orchestrators such as Kubernetes (K8s) provide many knobs that cloud administrators can tune to conFigure their system. However, different configurations lead to different levels of performance, which additionally depend on the application. Hence, finding exactly the best configuration for a given system can be a difficult task. A particularly innovative approach to evaluate configurations and optimize desired performance metrics is the use of Digital Twins (DT). To achieve good results in short time, the models of the cloud network functions underlying the DT must be minimally complex but highly accurate. Developing such models requires detailed knowledge about the system components and their interactions. We believe that a data-driven paradigm can capture the actual behavior of a network function (NF) deployed in the cluster, while decoupling it from internal feedback loops. In this paper, we analyze the HTTP load balancing function as an example of an NF and explore the data-driven paradigm to learn its behavior in a K8s cluster deployment. We develop, implement, and evaluate two approaches to learn the behavior of a state-of-the-art load balancer and show that Machine Learning has the potential to enhance the way we model NF behaviors.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131068867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multi-Hop-Aware User To Edge-Server Association Game
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175406
Youcef Kardjadja, Alan Tsang, M. Ibnkahla, Y. Ghamri-Doudane
Nowadays, services and applications are becoming more latency-sensitive and resource-hungry. Due to their high computational complexity, they cannot always be processed locally on user equipment and have to be offloaded to a distant, more powerful server. Instead of resorting to remote cloud servers with high latency and traffic bottlenecks, service providers could map their users to Multi-Access Edge Computing (MEC) servers that can run computation-intensive tasks nearby. This mapping of users to distributed MEC servers is known as the Edge User Allocation (EUA) problem and has been widely studied in the literature from the perspective of service providers. However, in previous works, users can only be allocated to a server if they are within its coverage. In reality, it may be optimal to allocate a user to a more distant server (e.g., two hops away from the user) if the latency threshold and system cost are both respected. This work presents the first attempt to tackle the multi-hop-aware EUA problem. We consider the static EUA problem, where users arrive in a single simultaneous batch, and detail the added complexity compared to the original EUA setting. We then propose a game-theory-based distributed approach for allocating users to edge servers. Finally, we conduct a series of experiments to evaluate the performance of our approach against baseline approaches. The results illustrate the potential benefit of allowing multi-hop allocations in lowering the overall system cost for service providers.
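To make the game-theoretic allocation idea concrete, here is a minimal best-response sketch in which each user may pick any server within a hop budget; the per-hop latency, congestion term, capacities, and topology are toy assumptions, not the paper's formulation.

```python
# Illustrative best-response dynamics for a multi-hop-aware user-to-server
# allocation game. The cost model (per-hop latency plus a congestion term)
# and all numbers are toy placeholders.
import random

random.seed(0)
users = range(6)
servers = {0: 3, 1: 2, 2: 2}                 # server -> capacity
hops = {(u, s): random.randint(1, 3) for u in users for s in servers}
LATENCY_PER_HOP, THRESHOLD = 5.0, 15.0       # ms per hop, latency budget

def cost(u, s, load):
    latency = hops[(u, s)] * LATENCY_PER_HOP
    if latency > THRESHOLD or load[s] >= servers[s]:
        return float("inf")                  # infeasible: too far or server full
    return latency + 2.0 * load[s]           # latency plus a congestion penalty

alloc = {u: None for u in users}
for _ in range(100):                         # bounded best-response iterations
    changed = False
    for u in users:
        load = {s: sum(1 for v in users if v != u and alloc[v] == s) for s in servers}
        best = min(servers, key=lambda s: cost(u, s, load))
        if cost(u, best, load) < float("inf") and alloc[u] != best:
            alloc[u], changed = best, True
    if not changed:
        break
print(alloc)                                 # user -> chosen edge server (or None)
```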
{"title":"A Multi-Hop-Aware User To Edge-Server Association Game","authors":"Youcef Kardjadja, Alan Tsang, M. Ibnkahla, Y. Ghamri-Doudane","doi":"10.1109/NetSoft57336.2023.10175406","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175406","url":null,"abstract":"Nowadays, services and applications are becoming more latency-sensitive and resource-hungry. Due to their high computational complexity, they can not always be processed locally in user equipment, and have to be offloaded to a distant powerful server. Instead of resorting to remote Cloud servers with high latency and traffic bottlenecks, service providers could map their users to Multi-Access Edge Computing (MEC) servers that can run computation-intensive tasks nearby. This mapping of users to MEC distributed servers is known as the Edge User Allocation (EUA) problem, and has been widely studied in the literature from the perspective of service providers. However, users in previous works can only be allocated to a server if they are in its coverage. In reality, it may be optimal to allocate a user to a distant server (e.g., two hops away from the user) if the latency threshold and system cost are both respected. This work presents the first attempt to tackle the multi-hop aware EUA problem. We consider the static EUA problem where users have a simultaneous-batch arrival pattern, and detail the added complexity compared to the original EUA setting. Afterwards, we propose a game theory-based distributed approach for allocating users to edge servers. We finally conduct a series of experiments to evaluate the performance of our approach against other baseline approaches. The results illustrate the potential benefits of allowing multi-hop allocations in providing better overall system cost to service providers.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128620674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Timeslot Allocation in mMTC Network by Magnitude-Sensitive Bayesian Attractor Model
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175490
Tatsuya Otoshi, Masayuki Murata, H. Shimonishi, T. Shimokawa
In 5G, flexible resource management, mainly by base stations, will enable support for a variety of use cases. However, when a large number of devices coexist, as in mMTC, the devices need to allocate resources appropriately in an autonomous, decentralized manner. In this paper, autonomous decentralized timeslot allocation is achieved by equipping each device with a decision model. As the decision model, we propose an extension of the Bayesian Attractor Model (BAM) using Bayesian estimation. The proposed model incorporates a feature of human decision-making called magnitude sensitivity, where the time to decision varies with the sum of the values of all alternatives. This naturally introduces the behavior of deciding quickly when a time slot is available and waiting otherwise. Simulation-based evaluations show that the proposed method can avoid time slot conflicts during congestion more effectively than conventional Q-learning-based time slot selection.
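As a loose intuition for magnitude sensitivity, the toy sketch below uses a drift-diffusion-style accumulator in which the total value of the alternatives speeds up the decision; it is only an illustration and does not implement the paper's Bayesian Attractor Model.

```python
# Toy evidence-accumulation sketch of a magnitude-sensitive choice between
# "transmit in this slot" and "wait". A simplification for illustration only.
import numpy as np

def decide(value_transmit, value_wait, threshold=5.0, noise=0.5, seed=1):
    rng = np.random.default_rng(seed)
    evidence = 0.0
    for step in range(1, 100_000):
        drift = value_transmit - value_wait       # relative preference
        magnitude = value_transmit + value_wait   # total value speeds the decision
        evidence += (1.0 + magnitude) * drift + rng.normal(0.0, noise)
        if abs(evidence) >= threshold:
            return ("transmit" if evidence > 0 else "wait"), step
    return "wait", step

# Same preference gap in both calls, but a larger total value decides faster.
print(decide(value_transmit=1.0, value_wait=0.8))    # large magnitude: quick
print(decide(value_transmit=0.25, value_wait=0.05))  # small magnitude: slower
```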
{"title":"Distributed Timeslot Allocation in mMTC Network by Magnitude-Sensitive Bayesian Attractor Model","authors":"Tatsuya Otoshi, Masayuki Murata, H. Shimonishi, T. Shimokawa","doi":"10.1109/NetSoft57336.2023.10175490","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175490","url":null,"abstract":"In 5G, flexible resource management, mainly by base stations, will enable support for a variety of use cases. However, in a situation where a large number of devices exist, such as in mMTC, devices need to allocate resources appropriately in an autonomous decentralized manner. In this paper, autonomous decentralized timeslot allocation is achieved by using a decision model for each device. As a decision model, we propose an extension of the Bayesian Attractor Model (BAM) using Bayesian estimation. The proposed model incorporates a feature of human decision-making called magnitude sensitivity, where the time to decision varies with the sum of the values of all alternatives. This allows the natural introduction of the behavior of making a decision quickly when a time slot is available and waiting otherwise. Simulation-based evaluations show that the proposed method can avoid time slot conflicts during congestion more effectively than conventional Q-learning based time slot selection.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125009782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flow classification for network security using P4-based Programmable Data Plane switches
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175420
Aniswar S. Krishnan, K. Sivalingam, Gauravdeep Shami, M. Lyonnais, Rodney G. Wilson
This paper deals with programmable data plane switches that perform flow classification using machine learning (ML) algorithms. It describes an implementation-based study of an existing ML-based packet marking scheme called FlowLens. The core algorithm, written in the P4 language, generates features, called flow markers, while processing packets. These flow markers are an efficient representation of the packet length distribution of a particular flow. In addition, a controller responsible for configuring the switch, periodically extracting the features, and applying machine learning algorithms for flow classification is implemented in Python. The generation of flow markers is evaluated on flows in a tree-based topology emulated in Mininet using the P4-enabled BMv2 software switch. Classification is performed to detect two different types of network attacks: Active Wiretap and Mirai Botnet. In both cases, we obtain a 30-fold reduction in memory footprint with no loss in accuracy, demonstrating the potential of running P4-based ML algorithms in packet switches.
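For intuition, a flow marker of this kind can be sketched in a few lines of Python as a coarsely binned packet-length histogram per flow; the bin width, vector size, and sample packets below are illustrative and do not reproduce FlowLens's exact marker layout or its P4 implementation.

```python
# Sketch of a "flow marker": a coarsely binned packet-length histogram per
# flow, mirroring in Python the kind of feature FlowLens builds in P4.
from collections import Counter, defaultdict

BIN_WIDTH = 32          # bytes per histogram bin (quantization level, illustrative)
NUM_BINS = 47           # truncate the distribution to a fixed-size vector

def flow_marker(packet_lengths, bin_width=BIN_WIDTH, num_bins=NUM_BINS):
    hist = Counter(min(length // bin_width, num_bins - 1) for length in packet_lengths)
    return [hist.get(b, 0) for b in range(num_bins)]  # compact feature vector

# Group observed packets by flow 5-tuple, then compute one marker per flow.
flows = defaultdict(list)
packets = [(("10.0.0.1", "10.0.0.2", 6, 1234, 80), 60),
           (("10.0.0.1", "10.0.0.2", 6, 1234, 80), 1500),
           (("10.0.0.3", "10.0.0.2", 6, 4321, 80), 576)]
for five_tuple, length in packets:
    flows[five_tuple].append(length)

markers = {ft: flow_marker(lengths) for ft, lengths in flows.items()}
# Each marker can then be fed to a classifier (e.g., attack vs. benign flow).
print(markers)
```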
{"title":"Flow classification for network security using P4-based Programmable Data Plane switches","authors":"Aniswar S. Krishnan, K. Sivalingam, Gauravdeep Shami, M. Lyonnais, Rodney G. Wilson","doi":"10.1109/NetSoft57336.2023.10175420","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175420","url":null,"abstract":"This paper deals with programmable data plane switches that perform flow classification using machine learning (ML) algorithms. This paper describes the implementation-based study of an existing ML-based packet marking scheme called FlowLens. The core algorithm, written in the P4 language, generates features, called flow markers, while processing packets. These flow markers are an efficient formulation of the packet length distribution of a particular flow. Secondly, a controller responsible for configuring the switch, extracting the features periodically, and applying machine learning algorithms for flow classification, is implemented in Python. The generation of flow markers is evaluated using flows in a tree-based topology in Mininet using the P4-enab1ed BMv2 packet switch on the mininet emulator. Classification is performed for the detection of two different types of network attacks: Active Wiretap and Mirai Botnet. In both cases, we obtain a 30-fold reduction in memory footprint with no loss in accuracy demonstrating the potential of running P4-based ML algorithms in packet switches.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127876865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Machine Learning Algorithm Selection For Network Slicing in Beyond 5G Networks
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175443
Abdelmounaim Bouroudi, A. Outtagarts, Y. H. Aoul
The advanced 5G and 6G mobile network generations offer new capabilities that enable the creation of multiple virtual network instances with distinct and stringent requirements. However, the coexistence of multiple network functions on top of a shared substrate network poses a resource allocation challenge known as the Virtual Network Embedding (VNE) problem. In recent years, this NP-hard problem has received increasing attention in the literature due to the growing need to optimize resources at the edge of the network, where computational and storage capabilities are limited. In this demo paper, we propose a solution to this problem that utilizes the Algorithm Selection (AS) paradigm, which selects the best-performing Deep Reinforcement Learning (DRL) algorithm from a portfolio of agents in an offline manner, based on past performance. To evaluate our solution, we developed a simulation platform using the OMNeT++ framework, with an orchestration module containerized using Docker. The proposed solution shows good performance and outperforms standalone algorithms.
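A minimal sketch of offline Algorithm Selection might look as follows: pick the portfolio agent with the best historical score for the scenario class at hand. The agent names and scores are invented placeholders, not results from the paper.

```python
# Minimal offline Algorithm Selection sketch: choose the DRL agent with the
# best historical performance for the current scenario class.
from statistics import mean

history = {  # agent -> past acceptance ratios on edge VNE scenarios (made up)
    "dqn": [0.71, 0.68, 0.74],
    "ppo": [0.79, 0.81, 0.77],
    "a2c": [0.66, 0.70, 0.69],
}

def select_agent(history):
    # Offline selection: rank agents by their mean past score.
    return max(history, key=lambda agent: mean(history[agent]))

chosen = select_agent(history)
print(f"Selected agent for deployment: {chosen}")  # -> "ppo" on this toy data
```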
{"title":"Dynamic Machine Learning Algorithm Selection For Network Slicing in Beyond 5G Networks","authors":"Abdelmounaim Bouroudi, A. Outtagarts, Y. H. Aoul","doi":"10.1109/NetSoft57336.2023.10175443","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175443","url":null,"abstract":"The advanced 5G and 6G mobile network generations offer new capabilities that enable the creation of multiple virtual network instances with distinct and stringent requirements. However, the coexistence of multiple network functions on top of a shared substrate network poses a resource allocation challenge known as the Virtual Network Embedding (VNE) problem. In recent years, this NP-hard problem has received increasing attention in the literature due to the growing need to optimize resources at the edge of the network, where computational and storage capabilities are limited. In this demo paper, we propose a solution to this problem, utilizing the Algorithm Selection (AS) paradigm. This selects the most optimal Deep Reinforcement Learning (DRL) algorithm from a portfolio of agents, in an offline manner, based on past performance. To evaluate our solution, we developed a simulation platform using the OMNeT++ framework, with an orchestration module containerized using Docker. The proposed solution shows good performance and outperforms standalone algorithms.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129243239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precise Turbo Frequency Tuning and Shared Resource Optimisation for Energy-Efficient Cloud Native Workloads
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175455
P. Veitch, Chris MacNamara, John J. Browne
As an increasing number of software-oriented telecoms workloads run as Containerised Network Functions (CNFs) on cloud-native virtualised infrastructure, performance tuning is vital. When compute infrastructure is distributed towards the edge of networks, efficient use of scarce resources is key, meaning the available resources must be fine-tuned to achieve deterministic performance; another vital factor is the energy consumption of such compute, which should be carefully managed. In the latest generation of Intel x86 servers, a new capability called Speed Select Technology Turbo Frequency (SST-TF) is available, enabling more targeted allocation of turbo frequency settings to specific CPU cores. This has significant potential in the multi-tenant edge compute environments increasingly seen in 5G deployments and is likely to be a key building block for 6G. This paper evaluates the potential application of SST-TF for competing CNFs, a mix of high- and low-priority workloads, in a multi-tenant edge compute scenario. The targeted application of SST-TF is shown to yield performance benefits of up to 35% compared to the legacy turbo frequency capability in earlier processor generations, and, when combined with other intelligent resource management tooling, can also achieve a net reduction in server power consumption of 1.7%.
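As a simplified illustration of targeted core assignment (not the paper's SST-TF configuration workflow), the sketch below ranks cores by their advertised maximum frequency via the standard Linux cpufreq sysfs interface and pins a high-priority process to the fastest ones; the actual SST-TF settings are managed at the platform level and are not shown here.

```python
# Illustrative, Linux-specific core-pinning sketch (not SST-TF itself):
# rank CPU cores by advertised maximum frequency from cpufreq sysfs and
# pin a high-priority CNF process to the fastest ones.
import glob
import os
import re

def core_max_freqs():
    freqs = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq"):
        core = int(re.search(r"cpu(\d+)", path).group(1))
        with open(path) as f:
            freqs[core] = int(f.read().strip())  # kHz
    return freqs

def pin_to_fastest(pid, n_cores):
    freqs = core_max_freqs()
    fastest = sorted(freqs, key=freqs.get, reverse=True)[:n_cores]
    os.sched_setaffinity(pid, fastest)           # restrict pid to these cores
    return fastest

if __name__ == "__main__":
    # Pin the current process (a stand-in for a high-priority CNF) to 2 cores.
    print("Pinned to cores:", pin_to_fastest(os.getpid(), 2))
```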
{"title":"Precise Turbo Frequency Tuning and Shared Resource Optimisation for Energy-Efficient Cloud Native Workloads","authors":"P. Veitch, Chris MacNamara, John J. Browne","doi":"10.1109/NetSoft57336.2023.10175455","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175455","url":null,"abstract":"As an increasing number of software-oriented telecoms workloads are run as Containerised Network Functions (CNFs) on cloud native virtualised infrastructure, performance tuning is vital. When compute infrastructure is distributed towards the edge of networks, efficient use of scarce resources is key meaning the available resources must be fine-tuned to achieve deterministic performance; another vital factor is the energy consumption of such compute which should be carefully managed. In the latest generation of Intel x86 servers, a new capability called Speed Select Technology Turbo Frequency (SST-TF) is available, enabling more targeted allocation of turbo frequency settings to specific CPU cores. This has significant potential in multi-tenant edge compute environments increasingly seen in 5G deployments and is likely to be a key building block for 6G. This paper evaluates the potential application of SST-TF for competing CNFs – a mix of high and low priority workloads - in a multi-tenant edge compute scenario. The targeted application of SST-TF is shown to yield performance benefits compared to the legacy turbo frequency capability in earlier generations of processor (by up to 35%), and when combined with other intelligent resource management tooling can also achieve a net reduction in server power consumption (of 1.7%).","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115896218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent Service Provisioning in Fog Computing
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175416
Gaetano Francesco Pittalà, W. Cerroni
Fog computing is a distributed paradigm that extends cloud computing closer to the edge of the network, and even beyond it. By employing local resources, it enables quicker and more effective data processing and analysis. The optimization and automation of resource allocation, data processing, and job scheduling in the fog environment are made possible by applying machine learning to fog computing orchestration. When working with network computing models, it is also important to consider the XaaS paradigm, as it promotes the flexibility and scalability of fog services, bringing the concept of “service” into the foreground. Therefore, there is a need for a fog orchestrator that enables such characteristics, leveraging AI and a “service-centric” approach to enhance the way users consume services. The design and development of such an orchestrator is the objective of the early-stage PhD project presented in this paper.
{"title":"Intelligent Service Provisioning in Fog Computing","authors":"Gaetano Francesco Pittalà, W. Cerroni","doi":"10.1109/NetSoft57336.2023.10175416","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175416","url":null,"abstract":"Fog computing is a distributed paradigm that extends cloud computing closer to the edge of the network, and even beyond that. By employing local resources, it enables quicker and more effective data processing and analysis. The optimization and automation of resource allocation, data processing, and job scheduling in the fog environment are made possible by the application of machine learning to Fog Computing Orchestration. It is also important, when working with the network computing models, to consider the XaaS paradigm, as it promotes the flexibility and scalability of fog services, bringing the concept of “service” into the foreground. Therefore, the need for a fog orchestrator enabling such characteristics arises, leveraging AI and the “service-centric” approach to enhance users’ service fruition. The design and development of such an orchestrator will be the objective of the early-stage PhD project presented in this paper.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115988137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DRL-based Service Migration for MEC Cloud-Native 5G and beyond Networks
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175417
Theodoros Tsourdinis, N. Makris, S. Fdida, T. Korakis
Multi-access Edge Computing (MEC) has been considered one of the most prominent enablers of low-latency access to services provided over the telecommunications network. Nevertheless, client mobility, as well as external factors that impact the communication channel, can severely degrade the user-perceived latency. Such degradation can be averted by migrating the provided services to other edges as the end-user changes their base station association while moving within the serviced region. In this work, we start from an entirely virtualized, cloud-native 5G network based on the OpenAirInterface platform and develop our architecture for seamless live migration of edge services. On top of this infrastructure, we employ a Deep Reinforcement Learning (DRL) approach that proactively relocates services to new edges, based on the user's multi-cell latency measurements and the workload status of the servers. We evaluate our scheme in a testbed setup by emulating mobility with realistic mobility patterns and workloads from real-world clusters. Our results show that our scheme can sustain low latency for end users as they move within the serviced region.
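A heavily simplified stand-in for such a migration policy is sketched below: the decision uses per-edge latency measurements and server load, with a reward that penalizes both latency and migrations. The scoring weights, edge names, and epsilon-greedy rule are assumptions for illustration, not the paper's trained DRL agent.

```python
# Toy epsilon-greedy migration policy, standing in for a learned DRL agent:
# the state is the user's measured latency to each candidate edge plus server
# load, and the reward trades latency against a migration penalty.
import random

EDGES = ["edge-A", "edge-B", "edge-C"]
MIGRATION_COST = 5.0        # illustrative penalty (ms-equivalent) for moving a service

def reward(latency_ms, migrated):
    return -latency_ms - (MIGRATION_COST if migrated else 0.0)

def choose_edge(current, latencies, loads, eps=0.1):
    if random.random() < eps:                            # occasional exploration
        return random.choice(EDGES)
    # Greedy score: measured latency plus a load term plus the migration penalty.
    return min(EDGES, key=lambda e: latencies[e] + 10.0 * loads[e]
               + (0.0 if e == current else MIGRATION_COST))

# One decision step with made-up measurements.
latencies = {"edge-A": 18.0, "edge-B": 7.0, "edge-C": 11.0}  # ms, per edge
loads     = {"edge-A": 0.2,  "edge-B": 0.6, "edge-C": 0.3}   # server utilization
target = choose_edge("edge-A", latencies, loads)
print(target, reward(latencies[target], migrated=(target != "edge-A")))
```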
{"title":"DRL-based Service Migration for MEC Cloud-Native 5G and beyond Networks","authors":"Theodoros Tsourdinis, N. Makris, S. Fdida, T. Korakis","doi":"10.1109/NetSoft57336.2023.10175417","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175417","url":null,"abstract":"Multi-access Edge Computing (MEC) has been considered one of the most prominent enablers for low-latency access to services provided over the telecommunications network. Nevertheless, client mobility, as well as external factors which impact the communication channel can severely deteriorate the eventual user-perceived latency times. Such processes can be averted by migrating the provided services to other edges, while the end-user changes their base station association as they move within the serviced region. In this work, we start from an entirely virtualized cloud-native 5G network based on the OpenAirInterface platform and develop our architecture for providing seamless live migration of edge services. On top of this infrastructure, we employ a Deep Reinforcement Learning (DRL) approach that is able to proactively relocate services to new edges, subject to the user’s multi-cell latency measurements and the workload status of the servers. We evaluate our scheme in a testbed setup by emulating mobility using realistic mobility patterns and workloads from real-world clusters. Our results denote that our scheme is capable sustain low-latency values for the end users, based on their mobility within the serviced region.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114723901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chatbot-based Feedback for Dynamically Generated Workflows in Docker Networks
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175429
Andrzej Jasinski, Yuansong Qiao, Enda Fallon, R. Flynn
This paper presents an implementation of a feedback mechanism for a workflow management framework. A chatbot that uses natural language processing (NLP) is central to the proposed feedback mechanism. NLP is used to transform text-based plain language input, both human-written and machine-generated, into a form that the framework can use to generate a workflow for execution in an environment of interest. The example environment described here is containerized network management, in which the workflow management framework, using feedback, can detect anomalies and mitigate potential incidents.
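As a toy stand-in for the chatbot's NLP step, the sketch below maps a plain-language report to a workflow action with simple keyword rules; the intents and actions are hypothetical and far simpler than the NLP pipeline the framework uses.

```python
# Simplified rule-based stand-in for the chatbot's NLP step: map a plain-
# language report to a workflow action for a containerized network.
import re

INTENT_RULES = [
    (r"(high|spike in) (cpu|memory)", "scale_out_service"),
    (r"(unreachable|down|no response)", "restart_container"),
    (r"(latency|slow)", "reroute_traffic"),
]

def text_to_workflow_step(message: str) -> str:
    for pattern, action in INTENT_RULES:
        if re.search(pattern, message.lower()):
            return action
    return "open_incident_ticket"   # fallback when no intent matches

print(text_to_workflow_step("Container web-frontend is unreachable from the gateway"))
# -> restart_container
```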
{"title":"Chatbot-based Feedback for Dynamically Generated Workflows in Docker Networks","authors":"Andrzej Jasinski, Yuansong Qiao, Enda Fallon, R. Flynn","doi":"10.1109/NetSoft57336.2023.10175429","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175429","url":null,"abstract":"This paper presents an implementation of a feedback mechanism for a workflow management framework. A chatbot that uses natural language processing (NLP) is central to the proposed feedback mechanism. NLP is used to transform text-based plain language input, both human-written and machine-generated, into a form that the framework can use to generate a workflow for execution in an environment of interest. The example environment described here is containerized network management, in which the workflow management framework, using feedback, can detect anomalies and mitigate potential incidents.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127171612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AppleSeed: Intent-Based Multi-Domain Infrastructure Management via Few-Shot Learning
Pub Date: 2023-06-19. DOI: 10.1109/NetSoft57336.2023.10175410
Jieyu Lin, Kristina Dzeparoska, A. Tizghadam, A. Leon-Garcia
Managing complex infrastructures in multi-domain settings is time-consuming and error-prone. Intent-based infrastructure management is a means to simplify management by allowing users to specify intents, i.e., high-level statements in natural language, that are automatically realized by the system. However, providing intent-based multi-domain infrastructure management poses a number of challenges: 1) intent translation; 2) plan execution and parallelization; 3) incompatible cross-domain abstractions. To tackle these challenges, we propose AppleSeed, an intent-based infrastructure management system that enables an end-to-end intent-to-deployment pipeline. AppleSeed uses few-shot learning to train a Large Language Model (LLM) to translate intents into intermediate programs, which are processed by a just-in-time compiler and a materialization module to automatically generate parallelizable, domain-specific executable programs. We evaluate the system in two use cases: Deep Packet Inspection (DPI), and machine learning training and inference. Our system translates intents into execution plans efficiently, generating on average 22.3 lines of code per word of intent. With our JIT compilation for parallelized execution, it also speeds up the execution of the management plan by 1.7-2.6 times compared to sequential execution.
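A rough sketch of the few-shot translation step is shown below: example (intent, program) pairs are packed into a prompt and a language model is asked to translate a new intent. The `llm_complete` call, the example intents, and the intermediate-program syntax are hypothetical placeholders, not AppleSeed's actual model interface or language.

```python
# Sketch of few-shot intent translation: build a prompt from example
# (intent, intermediate program) pairs and ask a language model to translate
# a new intent. `llm_complete` is a stub, not a real model API.
FEW_SHOT_EXAMPLES = [
    ("Deploy a DPI service on the edge cluster and mirror port 80 traffic to it",
     'deploy(service="dpi", target="edge-cluster"); mirror(port=80, to="dpi")'),
    ("Train the traffic classifier in domain A and serve it in domain B",
     'train(model="classifier", domain="A"); deploy(model="classifier", domain="B")'),
]

def llm_complete(prompt: str) -> str:
    # Stub so the sketch runs without a model backend.
    return "<generated intermediate program>"

def build_prompt(intent: str) -> str:
    shots = "\n".join(f"Intent: {i}\nProgram: {p}" for i, p in FEW_SHOT_EXAMPLES)
    return f"{shots}\nIntent: {intent}\nProgram:"

def translate(intent: str) -> str:
    return llm_complete(build_prompt(intent))   # placeholder completion call

print(translate("Inspect HTTP traffic in domain C and alert on anomalies"))
```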
{"title":"AppleSeed: Intent-Based Multi-Domain Infrastructure Management via Few-Shot Learning","authors":"Jieyu Lin, Kristina Dzeparoska, A. Tizghadam, A. Leon-Garcia","doi":"10.1109/NetSoft57336.2023.10175410","DOIUrl":"https://doi.org/10.1109/NetSoft57336.2023.10175410","url":null,"abstract":"Managing complex infrastructures in multi-domain settings is time-consuming and error-prone. Intent-based infrastructure management is a means to simplify management by allowing users to specify intents, i.e., high-level statements in natural language, that are automatically realized by the system. However, providing intent-based multi-domain infrastructure management poses a number of challenges: 1) intent translation; 2) plan execution and parallelization; 3) incompatible cross-domain abstractions. To tackle these challenges, we propose AppleSeed, an intent-based infrastructure management system that enables an end-to-end intent-to-deployment pipeline. AppleSeed uses few-shot learning for training a Large Language Model (LLM) to translate intents into intermediate programs, which are processed by a just-in-time compiler and a materialization module to automatically generate parallelizable, domain-specific executable programs. We evaluate the system in two use cases: Deep Packet Inspection (DPI); and machine learning training and inferencing. Our system achieves efficient intent translation into an execution plan with an average 22.3x lines of code to intent word ratio. It also speeds up the execution of the management plan by 1.7-2.6 times with our JIT compilation for parallelized execution compared to sequential execution.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"173 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125794317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}