A Multi-Hop-Aware User To Edge-Server Association Game
Youcef Kardjadja, Alan Tsang, M. Ibnkahla, Y. Ghamri-Doudane
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175406
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
Nowadays, services and applications are becoming more latency-sensitive and resource-hungry. Due to their high computational complexity, they cannot always be processed locally on user equipment and must be offloaded to a powerful remote server. Instead of resorting to remote Cloud servers with high latency and traffic bottlenecks, service providers can map their users to Multi-Access Edge Computing (MEC) servers that run computation-intensive tasks nearby. This mapping of users to distributed MEC servers is known as the Edge User Allocation (EUA) problem and has been widely studied in the literature from the perspective of service providers. However, in previous works users can only be allocated to a server if they are within its coverage. In reality, it may be optimal to allocate a user to a distant server (e.g., two hops away) if both the latency threshold and the system cost are respected. This work presents the first attempt to tackle the multi-hop-aware EUA problem. We consider the static EUA problem, where users arrive in a simultaneous batch, and detail the added complexity compared to the original EUA setting. We then propose a game-theory-based distributed approach for allocating users to edge servers. Finally, we conduct a series of experiments to evaluate the performance of our approach against other baseline approaches. The results illustrate the potential benefit of allowing multi-hop allocations in reducing the overall system cost for service providers.
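To make the game-theoretic idea concrete, here is a minimal sketch of best-response dynamics for a user-to-server allocation game with a hop-based latency constraint. The cost model (server load plus hop distance), the function names, and the parameters are illustrative assumptions, not the paper's actual mechanism.

```python
# Hypothetical sketch: best-response dynamics for multi-hop-aware user
# allocation. Each user repeatedly switches to its cheapest feasible
# server; the loop stops when no user wants to move (a stable allocation).

def best_response_allocation(users, servers, hops, capacity,
                             latency_per_hop, latency_threshold,
                             max_rounds=100):
    alloc = {u: None for u in users}   # user -> assigned server
    load = {s: 0 for s in servers}     # server -> number of users

    def cost(u, s):
        # Illustrative cost: congestion on the server plus hop distance.
        return load[s] + hops[(u, s)]

    for _ in range(max_rounds):
        changed = False
        for u in users:
            # A server is feasible if it meets the latency budget and
            # either has spare capacity or already hosts this user.
            feasible = [s for s in servers
                        if hops[(u, s)] * latency_per_hop <= latency_threshold
                        and (load[s] < capacity[s] or alloc[u] == s)]
            if not feasible:
                continue
            best = min(feasible, key=lambda s: cost(u, s))
            if best != alloc[u]:
                if alloc[u] is not None:
                    load[alloc[u]] -= 1
                load[best] += 1
                alloc[u] = best
                changed = True
        if not changed:  # no user can improve unilaterally
            break
    return alloc
```

Note how a user may end up on a server two hops away when nearer servers are saturated, which is exactly the multi-hop flexibility the paper argues for.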
Enabling Intelligence Inclusiveness in Edge to Cloud Continuum: Challenges and Opportunities
Javier Palomares, Estefanía Coronado, C. Cervelló-Pastor, S. Siddiqui
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175414
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
The Edge to Cloud Continuum is a concept that integrates cloud computing and cellular networks; it has been gaining popularity due to its potential to provide a seamless user experience and to address the challenges of managing complex multi-domain networks involving massive IoT devices. Enabling intelligence in the Edge to Cloud Continuum can further enhance its capabilities, offering benefits such as reduced latency, improved scalability, enhanced resource utilization, and increased context awareness. This paper provides insights into the opportunities and challenges of enabling intelligence in the Edge to Cloud Continuum, highlighting the potential of this technology. The study presents a comprehensive review of the existing literature on the topic, from which the research questions that will structure the PhD are derived. Various tools and technologies that can be used to integrate intelligence into the Edge to Cloud Continuum are explored and analyzed. In addition, the study provides a detailed work plan for the upcoming months of the project.
State4: State-preserving Reconfiguration of P4-programmable Switches
Chenxing Ji, F. Kuipers
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175468
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
To cater to constantly changing network needs, enabling stateful reconfiguration of Network Functions (NFs) is crucial. Recently, there has been growing interest in offloading NFs to programmable network devices. Unfortunately, it is currently not possible to maintain the full state of NFs during a switch reconfiguration without consuming network resources to and from neighboring switches. In this paper, we present State4, a framework that maintains the state of P4 programs during the reconfiguration of a P4-programmable network device, using only a small amount of local resources on the switch undergoing reconfiguration. State4 acts on both the in-switch control plane and the data plane. By utilizing the in-switch local controller, State4 requires no external network resources to achieve reconfiguration while preserving state. As such, State4 enables on-the-fly reconfiguration of stateful NFs with minimal traffic disruption, where previously traffic had to be re-routed.
Flow classification for network security using P4-based Programmable Data Plane switches
Aniswar S. Krishnan, K. Sivalingam, Gauravdeep Shami, M. Lyonnais, Rodney G. Wilson
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175420
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
This paper deals with programmable data plane switches that perform flow classification using machine learning (ML) algorithms. It describes an implementation-based study of an existing ML-based packet marking scheme called FlowLens. The core algorithm, written in the P4 language, generates features, called flow markers, while processing packets. These flow markers are a compact representation of the packet-length distribution of a particular flow. In addition, a controller, implemented in Python, is responsible for configuring the switch, extracting the features periodically, and applying machine learning algorithms for flow classification. Flow-marker generation is evaluated on flows in a tree-based topology using the P4-enabled BMv2 software switch in the Mininet emulator. Classification is performed to detect two different types of network attacks: Active Wiretap and Mirai Botnet. In both cases, we obtain a 30-fold reduction in memory footprint with no loss in accuracy, demonstrating the potential of running P4-based ML algorithms in packet switches.
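The memory saving comes from bucketising packet lengths before counting them. A minimal sketch of such a FlowLens-style flow marker follows; the bucket width and cap are illustrative parameters of ours, mimicking in Python what the P4 data plane would accumulate in registers.

```python
# Illustrative sketch: a compact, quantised packet-length histogram
# ("flow marker"). Coarser buckets trade memory footprint for feature
# resolution, which is the knob FlowLens tunes per classification task.
from collections import Counter

def flow_marker(packet_lengths, bucket_width=64, max_buckets=32):
    marker = Counter()
    for length in packet_lengths:
        # Map the length to a coarse bucket; clamp to the last bucket
        # so oversized packets cannot overflow the fixed register array.
        bucket = min(length // bucket_width, max_buckets - 1)
        marker[bucket] += 1
    return dict(marker)
```

With 32 buckets of one counter each, a flow's marker fits in a few dozen registers regardless of how many packets it carries, which is where the reported 30-fold footprint reduction plausibly comes from.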
Dynamic Machine Learning Algorithm Selection For Network Slicing in Beyond 5G Networks
Abdelmounaim Bouroudi, A. Outtagarts, Y. H. Aoul
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175443
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
The advanced 5G and 6G mobile network generations offer new capabilities that enable the creation of multiple virtual network instances with distinct and stringent requirements. However, the coexistence of multiple network functions on top of a shared substrate network poses a resource allocation challenge known as the Virtual Network Embedding (VNE) problem. In recent years, this NP-hard problem has received increasing attention in the literature due to the growing need to optimize resources at the edge of the network, where computational and storage capabilities are limited. In this demo paper, we propose a solution to this problem utilizing the Algorithm Selection (AS) paradigm, which selects the best-performing Deep Reinforcement Learning (DRL) algorithm from a portfolio of agents, in an offline manner, based on past performance. To evaluate our solution, we developed a simulation platform using the OMNeT++ framework, with an orchestration module containerized using Docker. The proposed solution shows good performance and outperforms standalone algorithms.
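The offline Algorithm Selection step reduces to ranking a portfolio of agents by their recorded performance. A minimal sketch, assuming a simple mean-reward criterion (the paper's actual selection metric and agent set are not specified here):

```python
# Illustrative sketch: offline Algorithm Selection over a portfolio of
# DRL agents. `history` maps agent name -> list of past episode rewards;
# the agent with the best historical average is chosen for deployment.

def select_agent(history):
    return max(history, key=lambda a: sum(history[a]) / len(history[a]))
```

In practice the criterion could weight recent episodes more heavily or condition on the slice's traffic profile; the point is that selection happens before deployment, so no exploration cost is paid online.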
Precise Turbo Frequency Tuning and Shared Resource Optimisation for Energy-Efficient Cloud Native Workloads
P. Veitch, Chris MacNamara, John J. Browne
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175455
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
As an increasing number of software-oriented telecoms workloads run as Containerised Network Functions (CNFs) on cloud-native virtualised infrastructure, performance tuning is vital. When compute infrastructure is distributed towards the edge of the network, efficient use of scarce resources is key: the available resources must be fine-tuned to achieve deterministic performance. Another vital factor is the energy consumption of such compute, which should be carefully managed. The latest generation of Intel x86 servers provides a new capability called Speed Select Technology Turbo Frequency (SST-TF), enabling more targeted allocation of turbo frequency settings to specific CPU cores. This has significant potential in the multi-tenant edge compute environments increasingly seen in 5G deployments, and it is likely to be a key building block for 6G. This paper evaluates the application of SST-TF to competing CNFs, a mix of high- and low-priority workloads, in a multi-tenant edge compute scenario. The targeted application of SST-TF is shown to yield performance benefits of up to 35% over the legacy turbo frequency capability of earlier processor generations and, when combined with other intelligent resource management tooling, a net reduction in server power consumption of 1.7%.
Intelligent Service Provisioning in Fog Computing
Gaetano Francesco Pittalà, W. Cerroni
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175416
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
Fog computing is a distributed paradigm that extends cloud computing closer to the edge of the network, and even beyond it. By employing local resources, it enables quicker and more effective data processing and analysis. Applying machine learning to Fog Computing Orchestration makes it possible to optimize and automate resource allocation, data processing, and job scheduling in the fog environment. When working with network computing models, it is also important to consider the XaaS paradigm, as it promotes the flexibility and scalability of fog services, bringing the concept of "service" into the foreground. This motivates the need for a fog orchestrator with such characteristics, leveraging AI and a service-centric approach to improve how users consume services. The design and development of such an orchestrator is the objective of the early-stage PhD project presented in this paper.
DRL-based Service Migration for MEC Cloud-Native 5G and beyond Networks
Theodoros Tsourdinis, N. Makris, S. Fdida, T. Korakis
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175417
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
Multi-access Edge Computing (MEC) is considered one of the most prominent enablers of low-latency access to services provided over the telecommunications network. Nevertheless, client mobility, as well as external factors affecting the communication channel, can severely deteriorate user-perceived latency. Such degradation can be averted by migrating the provided services to other edges as the end-user changes base station association while moving within the serviced region. In this work, we start from an entirely virtualized cloud-native 5G network based on the OpenAirInterface platform and develop our architecture for seamless live migration of edge services. On top of this infrastructure, we employ a Deep Reinforcement Learning (DRL) approach that proactively relocates services to new edges, based on the user's multi-cell latency measurements and the workload status of the servers. We evaluate our scheme in a testbed setup, emulating mobility with realistic mobility patterns and workloads from real-world clusters. Our results show that our scheme can sustain low latency for end users as they move within the serviced region.
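For intuition, a hand-tuned threshold rule is the kind of baseline a learned migration policy would be compared against. The sketch below is such a baseline, not the paper's DRL policy; the threshold, inputs, and names are illustrative assumptions.

```python
# Illustrative baseline: migrate a service only when the current edge
# violates the latency budget, then pick the least-loaded in-budget edge.
# A DRL policy would instead learn when and where to migrate from the
# user's multi-cell latency measurements and server workload state.

def choose_edge(current_edge, latencies, loads, threshold_ms=20.0):
    if latencies[current_edge] <= threshold_ms:
        return current_edge  # stay put: avoid migration churn
    candidates = [e for e, lat in latencies.items() if lat <= threshold_ms]
    if not candidates:
        # No edge meets the budget; fall back to the lowest-latency one.
        return min(latencies, key=latencies.get)
    return min(candidates, key=lambda e: loads[e])
```

A rule like this reacts only after latency has already degraded, which is precisely the reactive behaviour the proactive DRL approach is meant to improve on.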
Chatbot-based Feedback for Dynamically Generated Workflows in Docker Networks
Andrzej Jasinski, Yuansong Qiao, Enda Fallon, R. Flynn
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175429
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
This paper presents an implementation of a feedback mechanism for a workflow management framework. A chatbot that uses natural language processing (NLP) is central to the proposed feedback mechanism. NLP is used to transform text-based plain-language input, both human-written and machine-generated, into a form that the framework can use to generate a workflow for execution in an environment of interest. The example environment described here is containerized network management, in which the workflow management framework, using feedback, can detect anomalies and mitigate potential incidents.
AppleSeed: Intent-Based Multi-Domain Infrastructure Management via Few-Shot Learning
Jieyu Lin, Kristina Dzeparoska, A. Tizghadam, A. Leon-Garcia
Pub Date: 2023-06-19 | DOI: 10.1109/NetSoft57336.2023.10175410
2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
Managing complex infrastructures in multi-domain settings is time-consuming and error-prone. Intent-based infrastructure management simplifies this by allowing users to specify intents, i.e., high-level statements in natural language, that are automatically realized by the system. However, providing intent-based multi-domain infrastructure management poses a number of challenges: 1) intent translation; 2) plan execution and parallelization; 3) incompatible cross-domain abstractions. To tackle these challenges, we propose AppleSeed, an intent-based infrastructure management system that enables an end-to-end intent-to-deployment pipeline. AppleSeed uses few-shot learning to train a Large Language Model (LLM) to translate intents into intermediate programs, which a just-in-time compiler and a materialization module process to automatically generate parallelizable, domain-specific executable programs. We evaluate the system in two use cases: Deep Packet Inspection (DPI), and machine learning training and inference. The system achieves efficient intent translation, generating on average 22.3 lines of code per word of intent. Our JIT compilation for parallelized execution also speeds up execution of the management plan by 1.7-2.6 times compared to sequential execution.
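Few-shot intent translation typically works by prepending worked (intent, program) exemplars to the new intent before querying the LLM. The sketch below shows that prompt-assembly step only; the exemplar format and names are our illustration, not AppleSeed's actual prompt or intermediate-program syntax.

```python
# Illustrative sketch: assembling a few-shot prompt for intent-to-program
# translation. Each exemplar pairs a natural-language intent with its
# target intermediate program; the new intent is appended for completion.

def build_prompt(examples, intent):
    parts = []
    for ex_intent, ex_program in examples:
        parts.append(f"Intent: {ex_intent}\nProgram: {ex_program}")
    # Leave the final Program field empty for the model to fill in.
    parts.append(f"Intent: {intent}\nProgram:")
    return "\n\n".join(parts)
```

The quality of the translation then hinges on choosing exemplars that cover the target domain's operations, which is why a small curated set can substitute for full fine-tuning.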