Build your own closed loop: Graph-based proof of concept in closed loop for autonomous networks
Jaime Fúster de la Fuente, Álvaro Pendás Recondo, Paul Harvey, Tarek Mohamed, Chandan Singh, Vipul Sanap, Ayush Kumar, Sathish Venkateswaran, Sarvasuddi Balaganesh, Rajat Duggal, Sree Ganesh Lalitaditya Divakarla, Vaibhava Krishna Devulapali, Ebeledike Frank Chukwubuikem, Emmanuel Othniel Eggah, Abel Oche Moses, Nuhu Kontagora Bello, James Agajo, Wael Alron, Fathi Abdeldayem, Melanie Espinoza Hernández, Abigail Morales Retana, Jackeline García Alvarado, Nicolle Gamboa Mena, Juliana Morales Alvarado, Ericka Pérez Chinchilla, Amanda Calderón Campos, Derek Rodríguez Villalobos, Oscar Castillo Brenes, Kodandram Ranganath, Ayushi Khandal, Rakshesh P Bhatt, Kunal Mahajan, Prikshit CS, Ashok Kamaraj, Srinwaynti Samaddar, Sivaramakrishnan Swaminathan, M Sri Bhuvan, Nagaswaroop S N, Blessed Guda, Ibrahim Aliyu, Kim Jinsul, Vishnu Ram
DOI: 10.52953/opdk5666 | Published: 2023-09-14

Next Generation Networks (NGNs) are expected to handle heterogeneous technologies, services, verticals and devices of increasing complexity. An innovative approach is needed to manage NGNs automatically and efficiently, delivering an adequate end-to-end Quality of Experience (QoE) while reducing operational expenses. An Autonomous Network (AN) using a closed loop can self-monitor, self-evaluate and self-heal, making it a potential solution for managing the NGN dynamically. This study describes the major results of building a closed-loop Proof of Concept (PoC) for various AN use cases, organized by the International Telecommunication Union Focus Group on Autonomous Networks (ITU FG-AN). The scope of the PoC includes the representation of closed-loop use cases in a graph format, the development of evolution/exploration mechanisms to create new closed loops from the graph representations, and the implementation of a reference orchestrator that demonstrates parsing and validation of the closed loops. The main conclusions and future directions are summarized here, including observations and limitations of the PoC.
Designing graph neural networks training data with limited samples and small network sizes
Junior Momo Ziazet, Charles Boudreau, Oscar Delgado, Brigitte Jaumard
DOI: 10.52953/afyw5455 | Published: 2023-09-12

Machine learning is a data-driven domain, which means a learning model's performance depends on the availability of large volumes of data to train it. However, by improving data quality, we can train effective machine learning models with little data. This paper demonstrates this possibility by proposing a methodology to generate high-quality data in the networking domain. We designed a dataset to train a given Graph Neural Network (GNN) that not only contains a small number of samples, but whose samples also feature network graphs of a reduced size (10-node networks). Our evaluations indicate that the dataset generated by the proposed pipeline can train a GNN model that scales well to larger networks of 50 to 300 nodes. The trained model compares favorably to the baseline, achieving a mean absolute percentage error of 5-6%, while being significantly smaller at 90 samples total (vs. thousands of samples for the baseline).
Oracle-based data generation for highly efficient digital twin network training
Eliyahu Sason, Yackov Lubarsky, Alexei Gaissinski, Eli Kravchik, Pavel Kisilev
DOI: 10.52953/aweu6345 | Published: 2023-09-08

Recent advances in Graph Neural Networks (GNNs) have opened new capabilities to analyze complex communication systems. However, little work has been done to study the effects of limited data samples on the performance of GNN-based systems. In this paper, we present a novel solution to the problem of finding an optimal training set for efficient training of a RouteNet-Fermi GNN model. The proposed solution ensures good model generalization to large, previously unseen networks under strict limitations on the training data budget and training topology sizes. Specifically, we generate an initial data set by emulating the flow distribution of large networks while using small networks. We then deploy a new clustering method that efficiently samples the generated data set by analyzing data embeddings from different Oracle models. This procedure provides a very small but information-rich training set. The data embedding method translates highly heterogeneous network samples into a common embedding space, wherein the samples can be easily related to each other. The proposed method outperforms state-of-the-art approaches, including the winning solutions of the 2022 Graph Neural Networking challenge.
Data-efficient GNN models of communication networks using beta-distribution-based sample ranking
Max Helm, Benedikt Jaeger, Georg Carle
DOI: 10.52953/fuqe7013 | Published: 2023-09-08

Machine learning models for tasks in communication networks often require large datasets for training. This training is cost-intensive, and solutions to reduce these costs are required; it is not clear what the best approach to this problem is. Here we show an approach that creates a minimally sized training dataset while maintaining the model's high predictive power. We apply our approach to a state-of-the-art graph neural network model for performance prediction in communication networks. Limited to a dataset of 100 samples of reduced network size, our approach achieves a Mean Absolute Percentage Error (MAPE) of 9.79% on a test dataset containing significantly larger problem sizes, compared to 37.82% for a baseline approach. We think this approach can be useful for creating high-quality datasets of communication networks and for decreasing the time needed to train graph neural network models on performance prediction tasks.
AI-driven container security approaches for 5G and beyond: A survey
Ilter Taha Aktolga, Elif Sena Kuru, Yigit Sever, Pelin Angin
DOI: 10.52953/zrck3746 | Published: 2023-06-23

The rise of microservice-based software deployment in the cloud has led to extensive use of containerized software. The security of applications running inside containers, as well as of the container environment itself, is critical for cloud and 5G infrastructure. To address these concerns, research efforts have focused on container security, with subfields such as intrusion detection, malware detection and container placement strategies. These efforts fall roughly into two categories: rule-based approaches and machine-learning approaches, the latter of which can respond to novel threats. In this study, we survey the container security literature, focusing on approaches that leverage machine learning to address security challenges.
ANALYTIC MODELS FOR BISTATIC SCATTERING FROM A RANDOMLY ROUGH SURFACE WITH COMPLEX RELATIVE PERMITTIVITY
Mostafa A Karam, Ryan S McDonough
Published: 2019-11-19 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7323588/pdf/nihms-1585761.pdf

This study provides explicit mathematical formulations for the bistatic scattering coefficient from a randomly rough surface with a complex relative permittivity based on the following analytic models: small perturbation model (SPM), physical optics model (PO), and Kirchhoff approximation model (KA). It then addresses the two shortcomings associated with each of the three models: i) a limited applicability domain, and ii) null predicted values for the cross-polarized bistatic scattering coefficients within the plane of incidence. The plane of incidence contains both the backscattering direction and the forward (specular reflection) direction, which are of interest to the spectrum community.