On Feature Selection Algorithms for Effective Botnet Detection
Pub Date: 2024-04-14 | DOI: 10.1007/s10922-024-09817-9
Meher Afroz, Muntaka Ibnath, Ashikur Rahman, Jakia Sultana, Raqeebir Rab
Botnet threats are a growing concern, infecting more computers every day. Although botnets can be detected from their behavioral patterns, differentiating malicious traffic from legitimate traffic is becoming more challenging: as technology advances, malicious traffic increasingly follows the behavioral patterns of benign traffic. The detection of malicious traffic largely depends on the traffic features fed into the detection process. Selecting the best features for effective botnet detection is the main contribution of this paper. We first show the impact of different features on the botnet detection process. Then we propose several heuristics to select the best features from the set of candidate features. Some proposed heuristics are purely feature-based and some are group-based, thus yielding different accuracy levels. We also analyze the time complexity of each heuristic and provide a detailed performance analysis. As working with all combinations of a large number of features is not feasible, some heuristics group the features based on similarity in their patterns and check all combinations only within these small groups, which improves the time complexity by a large margin. Through experiments we show the efficacy of the proposed feature selection heuristics. The results show that some heuristics outperform state-of-the-art feature selection algorithms.
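As an illustration of the group-based idea, here is a minimal Python sketch that greedily groups features by pairwise correlation and exhaustively searches subsets only within each small group. The grouping threshold, the synthetic data, and the placeholder scoring function are illustrative assumptions, not the paper's actual heuristics:

```python
import itertools
import random

random.seed(0)

# Synthetic stand-ins: 8 features, each a column of 100 samples.
features = {f"f{i}": [random.random() for _ in range(100)] for i in range(8)}

def correlation(xs, ys):
    # Plain Pearson correlation between two feature columns.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def group_by_similarity(feats, threshold=0.2):
    """Greedily place each feature into the first group whose
    representative it correlates with above the threshold."""
    groups = []
    for name in feats:
        for g in groups:
            if abs(correlation(feats[name], feats[g[0]])) >= threshold:
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

def detector_score(subset):
    # Placeholder for training/evaluating a botnet detector on `subset`.
    return random.random()

def best_subset_per_group(groups):
    """Exhaustive search inside each (small) group instead of over
    all 2^n subsets of the full feature set."""
    selected = []
    for g in groups:
        combos = (c for r in range(1, len(g) + 1)
                  for c in itertools.combinations(g, r))
        selected.extend(max(combos, key=detector_score))
    return selected

print(best_subset_per_group(group_by_similarity(features)))
```

With k groups of at most m features each, the inner search costs O(k * 2^m) detector evaluations instead of O(2^n) over all n features, which is where the claimed complexity gain comes from.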
{"title":"On Feature Selection Algorithms for Effective Botnet Detection","authors":"Meher Afroz, Muntaka Ibnath, Ashikur Rahman, Jakia Sultana, Raqeebir Rab","doi":"10.1007/s10922-024-09817-9","DOIUrl":"https://doi.org/10.1007/s10922-024-09817-9","url":null,"abstract":"<p>The threats of botnets are becoming a growing concern infecting more and more computers every day. Although botnets can be detected from their behavioral patterns, it is becoming more challenging to differentiate the behavior between the malicious traffic and the legitimate traffic as with the advancement of the technologies the malicious traffics are following the similar behavioral patterns of benign traffics. The detection of malicious traffic largely depends on the traffic features that are being used to feed in the detection process. Selecting the best features for effective botnet detection is the main contribution of this paper. At the very beginning, we show the impact of different features on botnet detection process. Then we propose several heuristics to select the best features from a handful of possible features. Some proposed heuristics are truly feature-based and some are group-based, thus generating different accuracy levels. We also analyze time complexity of each heuristic and provide a detailed performance analysis. As working with all combinations of a large number of features is not feasible, some heuristics work by grouping the features based on their similarity in patterns and checking all combinations within the groups of small number of features which improves the time complexity by a large margin. Through experiments we show the efficacy of the proposed feature selection heuristics. The result shows that some heuristics outperform state-of-the-art feature selection algorithms.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"25 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140563554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards NICE-by-Design Cybersecurity Learning Environments: A Cyber Range for SOC Teams
Pub Date: 2024-04-09 | DOI: 10.1007/s10922-024-09816-w
Stylianos Karagiannis, Emmanouil Magkos, Eleftherios Karavaras, Antonios Karnavas, Maria Nefeli Nikiforos, Christoforos Ntantogian
Cybersecurity has become an increasingly important field as cyber threats continue to grow in number and complexity. The NICE framework, developed by NIST, provides a structured approach to cybersecurity education. Despite the publication of cybersecurity frameworks, scenario design in cybersecurity is not yet governed by structured design principles, leading to ambiguous learning outcomes. This research uses the NICE framework to structure the design and development of a cyber range and its scenarios. The proposed methodology and results can assist scenario design in cybersecurity and serve as a methodological procedure for evaluation. Finally, the research provides a better understanding of the NICE framework and demonstrates how it can assist in creating practical cybersecurity scenarios.
{"title":"Towards NICE-by-Design Cybersecurity Learning Environments: A Cyber Range for SOC Teams","authors":"Stylianos Karagiannis, Emmanouil Magkos, Eleftherios Karavaras, Antonios Karnavas, Maria Nefeli Nikiforos, Christoforos Ntantogian","doi":"10.1007/s10922-024-09816-w","DOIUrl":"https://doi.org/10.1007/s10922-024-09816-w","url":null,"abstract":"<p>Cybersecurity has become an increasingly important field as cyber threats continue to grow in number and complexity. The NICE framework, developed by NIST, provides a structured approach to cybersecurity education. Despite the publication of cybersecurity frameworks, scenario design in cybersecurity is not yet governed by structured design principles, leading to ambiguous learning outcomes. This research uses the NICE framework to provide structure design and development of a cyber range and the relevant scenarios. The proposed methodology and research results can assist the scenario design in cybersecurity and as a methodological procedure for evaluation. Finally, the research provides a better understanding of the NICE framework and demonstrates how it can assist in creating practical cybersecurity scenarios.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"48 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140563469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assuring Efficient Path Selection in an Intent-Based Networking System: A Graph Neural Networks and Deep Reinforcement Learning Approach
Pub Date: 2024-04-03 | DOI: 10.1007/s10922-024-09814-y
Sajid Alam, Javier Jose Diaz Rivera, Mir Muhammad Suleman Sarwar, Afaq Muhammad, Wang-Cheol Song
Recent advancements in network systems, including Software-Defined Networking (SDN), Network Functions Virtualization (NFV), and cloud networking, have significantly enhanced network management. These technologies increase efficiency, reduce manual effort, and improve agility in deploying new services. They also enable scalable network resources, facilitate handling demand surges, and provide efficient access to innovative solutions. Despite these advancements, the performance of interconnected nodes is still influenced by the heterogeneity of network infrastructure and the capabilities of physical links. This work introduces a comprehensive solution addressing these challenges through Intent-Based Networking (IBN). Our approach utilizes IBN for defining high-level service requirements (QoS) tailored to individual node specifications. Further, we integrate a Graph Neural Network (GNN) to model the network’s overlay topology and understand the behavior of nodes and links. This integration enables the translation of defined intents into optimal paths between end-to-end nodes, ensuring efficient path selection. Additionally, our system incorporates Deep Deterministic Policy Gradients (DDPG) for dynamic weight calculation of QoS metrics, adjusting the link cost assigned to network paths based on performance metrics so that the network adapts to the specified QoS intents. The proposed solution has been implemented as an IBN system design comprising an intent definition manager, a GNN model for optimal path selection, an Off-Platform Application (OPA) for policy creation, an assurance module consisting of the DDPG mechanism, and a real-time monitoring system. This design provides continuous assurance of efficient path selection, dynamically adapting to changing conditions and maintaining optimal service levels per the defined intents.
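To make the weighted path-selection step concrete, the following minimal sketch computes a composite link cost from QoS metrics and runs plain Dijkstra over the resulting graph. The topology, metric values, and the fixed weight tuple are invented for illustration; in the described system the weights would be produced dynamically by the DDPG agent, and the GNN component is not modeled here:

```python
import heapq

# Per-link QoS metrics (latency ms, jitter ms, loss %): illustrative values.
links = {
    ("a", "b"): (10.0, 2.0, 0.1), ("b", "d"): (12.0, 1.0, 0.2),
    ("a", "c"): (25.0, 0.5, 0.0), ("c", "d"): (8.0, 0.8, 0.1),
}

def link_cost(metrics, weights):
    """Weighted sum of QoS metrics; the weights stand in for the
    dynamically computed DDPG output."""
    return sum(w * m for w, m in zip(weights, metrics))

def best_path(links, src, dst, weights):
    """Plain Dijkstra over the weighted-cost graph, a stand-in for the
    GNN-assisted selection described in the abstract."""
    adj = {}
    for (u, v), m in links.items():
        adj.setdefault(u, []).append((v, link_cost(m, weights)))
        adj.setdefault(v, []).append((u, link_cost(m, weights)))
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

# Weights favoring latency; an RL policy would adapt these over time.
print(best_path(links, "a", "d", weights=(1.0, 0.5, 2.0)))
```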
{"title":"Assuring Efficient Path Selection in an Intent-Based Networking System: A Graph Neural Networks and Deep Reinforcement Learning Approach","authors":"Sajid Alam, Javier Jose Diaz Rivera, Mir Muhammad Suleman Sarwar, Afaq Muhammad, Wang-Cheol Song","doi":"10.1007/s10922-024-09814-y","DOIUrl":"https://doi.org/10.1007/s10922-024-09814-y","url":null,"abstract":"<p>The recent advancements in network systems, including Software-Defined Networking (SDN), Network Functions Virtualization (NFV), and cloud networking, have significantly enhanced network management. These technologies increase efficiency, reduce manual efforts, and improve agility in deploying new services. They also enable scalable network resources, facilitate handling demand surges, and provide efficient access to innovative solutions. Despite these advancements, the performance of interconnected nodes is still influenced by the heterogeneity of network infrastructure and the capabilities of physical links. This work introduces a comprehensive solution addressing these challenges through Intent-Based Networking (IBN). Our approach utilizes IBN for defining high-level service requirements (QoS) tailored to individual node specifications. Further, we integrate a Graph Neural Network (GNN) to model the network’s overlay topology and understand the behavior of nodes and links. This integration enables the translation of defined intents into optimal paths between end-to-end nodes, ensuring efficient path selection. Additionally, our system incorporates Deep Deterministic Policy Gradients (DDPG) for dynamic weight calculation of QoS metrics to adjust the link cost assigned to network paths based on performance metrics, ensuring the network adapts to the specified QoS intents. The proposed solution has been implemented as an IBN system design comprising an intent definition manager, a GNN model for optimal path selection, an Off-Platform Application (OPA) for policy creation, an assurance module consisting of the DDPG mechanism, and a real-time monitoring system. This design ensures continuous efficient path selection assurance, dynamically adapting to changing conditions and maintaining optimal service levels per the defined intents.\u0000</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"94 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140597357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Edge Cloud Based Coordination Platform for Multi-user AR Applications
Pub Date: 2024-04-01 | DOI: 10.1007/s10922-024-09809-9
Balázs Sonkoly, Bálint György Nagy, János Dóka, Zsófia Kecskés-Solymosi, János Czentye, Bence Formanek, Dávid Jocha, Balázs Péter Gerő
Augmented Reality (AR) applications can reshape our society enabling novel ways of interactions and immersive experiences in many fields. However, multi-user and collaborative AR applications pose several challenges. The expected user experience requires accurate position and orientation information for each device and precise synchronization of the respective coordinate systems in real-time. Unlike mobile phones or AR glasses running on battery with constrained resource capacity, cloud and edge platforms can provide the computing power for the core functions under the hood. In this paper, we propose a novel edge cloud based platform for multi-user AR applications realizing an essential coordination service among the users. The latency critical, computation intensive Simultaneous Localization And Mapping (SLAM) function is offloaded from the device to the edge cloud infrastructure. Our solution is built on open-source SLAM libraries and the Robot Operating System (ROS). Our contribution is threefold. First, we propose an extensible, edge cloud based AR architecture. Second, we develop a proof-of-concept prototype supporting multiple devices and building on an AI-based SLAM selection component. Third, a dedicated measurement methodology is described, including energy consumption aspects as well, and the overall performance of the system is evaluated via real experiments.
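The device-to-edge exchange at the heart of the coordination service can be sketched as a simple message schema and round trip. The field names and the stub SLAM function below are hypothetical, standing in for the ROS-based, open-source SLAM pipeline the paper actually builds on:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FrameMsg:
    """Camera frame metadata a device would ship to the edge SLAM service
    (image payload omitted; field names are illustrative, not the paper's)."""
    device_id: str
    seq: int
    timestamp_ns: int

@dataclass
class PoseMsg:
    """Pose estimate returned by the offloaded SLAM function."""
    device_id: str
    seq: int
    position: tuple      # (x, y, z) in the shared map frame
    orientation: tuple   # quaternion (x, y, z, w)

def edge_slam_service(frame: FrameMsg) -> PoseMsg:
    # Placeholder for the offloaded SLAM pipeline; a real deployment would
    # run an open-source SLAM library behind a ROS interface at the edge.
    return PoseMsg(frame.device_id, frame.seq, (0.0, 0.0, 0.0), (0, 0, 0, 1))

# Local JSON round trip standing in for the device <-> edge exchange.
frame = FrameMsg("glasses-1", seq=42, timestamp_ns=1_700_000_000_000)
pose = edge_slam_service(FrameMsg(**json.loads(json.dumps(asdict(frame)))))
print(pose)
```

The key design point is that the battery-constrained device only serializes lightweight frames and consumes poses, while the latency-critical, compute-heavy SLAM work runs at the edge.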
{"title":"An Edge Cloud Based Coordination Platform for Multi-user AR Applications","authors":"Balázs Sonkoly, Bálint György Nagy, János Dóka, Zsófia Kecskés-Solymosi, János Czentye, Bence Formanek, Dávid Jocha, Balázs Péter Gerő","doi":"10.1007/s10922-024-09809-9","DOIUrl":"https://doi.org/10.1007/s10922-024-09809-9","url":null,"abstract":"<p>Augmented Reality (AR) applications can reshape our society enabling novel ways of interactions and immersive experiences in many fields. However, multi-user and collaborative AR applications pose several challenges. The expected user experience requires accurate position and orientation information for each device and precise synchronization of the respective coordinate systems in real-time. Unlike mobile phones or AR glasses running on battery with constrained resource capacity, cloud and edge platforms can provide the computing power for the core functions under the hood. In this paper, we propose a novel edge cloud based platform for multi-user AR applications realizing an essential coordination service among the users. The latency critical, computation intensive Simultaneous Localization And Mapping (SLAM) function is offloaded from the device to the edge cloud infrastructure. Our solution is built on open-source SLAM libraries and the Robot Operating System (ROS). Our contribution is threefold. First, we propose an extensible, edge cloud based AR architecture. Second, we develop a proof-of-concept prototype supporting multiple devices and building on an AI-based SLAM selection component. Third, a dedicated measurement methodology is described, including energy consumption aspects as well, and the overall performance of the system is evaluated via real experiments.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"96 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140597473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Placement Optimization of Virtual Network Functions in a Cloud Computing Environment
Pub Date: 2024-03-29 | DOI: 10.1007/s10922-024-09812-0
Imad Eddine Said, Lamri Sayad, Djamil Aissani
The use of Network Function Virtualization is constantly increasing in cloud environments, especially for next-generation networks such as 5G. In this context, a deployment scheme must be defined that assigns each Virtual Network Function (VNF) to an appropriate server in order to meet quality-of-service requirements. This problem is known in the literature as virtual network function placement. Careful deployment of VNFs on servers can minimize the number of servers used, but may increase service latency. In this article, we propose a multi-objective integer linear programming model for the network function placement problem. The objective is to find the best compromise between minimizing end-to-end latency for users and reducing the number of servers used, while ensuring that the maximum number of VNFs is connected in the network. To solve this NP-hard problem, we develop an algorithm based on the Particle Swarm Optimization metaheuristic that runs in polynomial time. Tests on a simple VNF deployment problem validate the relevance of our optimization model and demonstrate the effectiveness of our algorithm. The results show that our method provides feasible solutions very close to the exact optimal solutions.
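A minimal discrete PSO sketch for this kind of placement problem is shown below. The instance sizes, latency matrix, and cost weights are illustrative assumptions, and the single-objective weighted sum is a simplification of the paper's multi-objective model:

```python
import random

random.seed(1)
N_VNF, N_SRV = 6, 4
# Illustrative latency contribution of placing VNF v on server s.
latency = [[random.uniform(1.0, 10.0) for _ in range(N_SRV)] for _ in range(N_VNF)]

def cost(placement, alpha=1.0, beta=5.0):
    """Weighted compromise from the abstract: total latency vs. number of
    servers used (alpha and beta are illustrative weights)."""
    total_latency = sum(latency[v][s] for v, s in enumerate(placement))
    servers_used = len(set(placement))
    return alpha * total_latency + beta * servers_used

def discrete_pso(iters=200, swarm=20, w=0.2, c1=0.4, c2=0.4):
    """Minimal discrete PSO: a particle is a VNF -> server assignment and
    'velocity' becomes a per-dimension probability of copying a best value."""
    parts = [[random.randrange(N_SRV) for _ in range(N_VNF)] for _ in range(swarm)]
    pbest = [p[:] for p in parts]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(N_VNF):
                r = random.random()
                if r < w:
                    p[d] = random.randrange(N_SRV)   # random exploration
                elif r < w + c1:
                    p[d] = pbest[i][d]               # pull toward personal best
                elif r < w + c1 + c2:
                    p[d] = gbest[d]                  # pull toward global best
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=cost)[:]
    return gbest, cost(gbest)

placement, best_cost = discrete_pso()
print("placement:", placement, "cost:", round(best_cost, 2))
```

Tuning alpha and beta shifts the compromise between latency and server consolidation, which is exactly the trade-off the multi-objective model formalizes.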
{"title":"Placement Optimization of Virtual Network Functions in a Cloud Computing Environment","authors":"Imad Eddine Said, Lamri Sayad, Djamil Aissani","doi":"10.1007/s10922-024-09812-0","DOIUrl":"https://doi.org/10.1007/s10922-024-09812-0","url":null,"abstract":"<p>The use of Network Function Virtualization is constantly increasing in Cloud environments, especially for next-generation networks such as 5G. In this context, the definition of a deployment scheme defining for each Virtual Network Function (VNF) the appropriate server in order to meet the quality of service requirements. This problem is known in the literature as virtual fetwork function placement. However, proper deployment of VNFs on servers can minimize the number of servers used, but may increase service latency. In this article, we propose a multi-objective integer linear programming model to solve the problem of network function placement. The objective is to find the best compromise between minimizing end-to-end total latency for users and reducing the number of servers used, while ensuring that the maximum number of VNFs is connected in the network. Our proposal to solve the NP-hard problem involves developing an algorithm based on the Particle Swarm Optimization metaheuristic to obtain a polynomial time resolution. By performing tests on a simple VNF deployment problem, we validated the relevance of our optimization model and demonstrated the effectiveness of our algorithm. The results obtained showed that our method provides feasible solutions very close to the exact optimal solutions.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"34 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140323464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Emergency Response: An In-Depth Analysis of an ITS-G5 Messaging Strategy for Bus Blockage Emergencies at Level Crossings
Pub Date: 2024-03-28 | DOI: 10.1007/s10922-024-09811-1
Mayssa Dardour, Mohamed Mosbah, Toufik Ahmed
The Intelligent Transportation System protocol stack, with the incorporation of the 802.11p standard in the 5.9 GHz band, has revolutionized traffic efficiency and road safety applications in vehicular environments. This study introduces a novel architecture for vehicular communications that employs Internet Protocol version 4 multicast over the 5.9 GHz band allocated to Intelligent Transportation Systems by the European Telecommunications Standards Institute. Our proposed architecture addresses the challenges of a specific urban use case: a Level Crossing in Bordeaux. It focuses on the broadcast of Cooperative Awareness Messages (CAM) and Decentralized Environmental Notification Messages (DENM) to enhance road-user safety. To prevent accidents, we present an algorithm for CAM and DENM dissemination that ensures timely alerts for sudden vehicle blockage emergencies. Moreover, we introduce a comprehensive and optimized train braking strategy to further minimize accident risks. This strategy aims to provide efficient and timely train deceleration, allowing sufficient time for road users to clear the Level Crossing and mitigating the potential risk of collisions. We analyze End-to-End delay and Packet Reception Ratio to gauge our system’s performance. We also compare our edge-server-based architecture with cloud-based alternatives, showcasing improved latency and Packet Loss Rate in our approach. The results illustrate the effectiveness of our edge-server-based architecture in the context of Intelligent Transportation Systems, particularly when utilizing the 5.9 GHz band. The findings of this study provide a foundation for future deployments and improvements in urban environments, fostering safer and more reliable transportation systems.
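Any such braking strategy rests on basic stopping kinematics. The short sketch below computes a stopping distance and the minimum constant deceleration needed to halt before the crossing; the speed and distance values are illustrative, and the paper's optimized strategy is more elaborate than this:

```python
def stopping_distance(speed_ms: float, decel_ms2: float) -> float:
    """Classic kinematics: d = v^2 / (2a)."""
    return speed_ms ** 2 / (2 * decel_ms2)

def required_decel(speed_ms: float, dist_to_crossing_m: float) -> float:
    """Minimum constant deceleration that stops the train before the
    crossing: a = v^2 / (2d)."""
    return speed_ms ** 2 / (2 * dist_to_crossing_m)

# Illustrative numbers: a train at 100 km/h whose driver receives the
# DENM-triggered alert 1.5 km before the crossing.
v = 100 / 3.6                       # m/s
print(stopping_distance(v, 0.7))    # ~551 m at a 0.7 m/s^2 service brake
print(required_decel(v, 1500))      # ~0.26 m/s^2 needed: gentle braking suffices
```

The earlier the DENM reaches the train, the larger d is and the smaller the required deceleration, which is why end-to-end delay is a central performance metric here.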
{"title":"Improving Emergency Response: An In-Depth Analysis of an ITS-G5 Messaging Strategy for Bus Blockage Emergencies at Level Crossings","authors":"Mayssa Dardour, Mohamed Mosbah, Toufik Ahmed","doi":"10.1007/s10922-024-09811-1","DOIUrl":"https://doi.org/10.1007/s10922-024-09811-1","url":null,"abstract":"<p>The Intelligent Transportation System protocol stack has revolutionized traffic efficiency and road safety applications in vehicular environments. This is due to the incorporation of the 802.11p standard in the 5.9 GHz band. This study introduces a novel architecture for vehicular communications. It employs Internet Protocol version 4 multicast over the 5.9 GHz band allocated for Intelligent Transportation Systems. The frequency band was designated by the European Telecommunications Standards Institute. Our proposed architecture addresses challenges in a specific urban use case. The use case involves a Level Crossing in Bordeaux. It focuses on the broadcast of Cooperative Awareness Message (CAM) and Decentralized Environmental Notification Message (DENM) to enhance road-user safety. To prevent accidents, we present an algorithm for CAM and DENM dissemination that ensures timely alerts for sudden vehicle blockage emergencies. Moreover, we introduce a comprehensive and optimized train braking strategy to further minimize accident risks. This strategy aims to provide efficient and timely train deceleration, allowing sufficient time for road users to clear the Level Crossing and mitigating the potential risk for collisions. We analyze End-to-End delay and Packet Reception Ratio to gauge our system’s performance. We also compare our edge-server-based architecture with cloud-based alternatives, showcasing improved latency and Packet Loss Rate in our approach. The obtained results illustrate the effectiveness of our edge-server-based architecture in the context of Intelligent Transportation Systems, particularly utilizing the 5.9 GHz band technology. The findings of this study provide a foundation for future deployments and improvements in urban environments, fostering safer and more reliable transportation systems.\u0000</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"15 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140323477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dependability of Network Services in the Context of NFV: A Taxonomy and State of the Art Classification
Pub Date: 2024-03-26 | DOI: 10.1007/s10922-024-09810-2
Siamak Azadiabad, Ferhat Khendek
In the context of Network Function Virtualization (NFV), Network Services (NSs) are realized by interconnecting Virtual Network Functions (VNFs) using Virtual Links (VLs). Service dependability is an important characteristic of NSs and includes the attributes of service availability, reliability, and continuity. The NFV framework brings new challenges to NS dependability because of resource sharing, NS and VNF elasticity, and the dynamicity of the NFV environment. There has been considerable work on the dependability of NSs in the context of NFV. Existing works address different dependability attributes and use different methods and mechanisms in their proposed solutions, which makes it difficult to evaluate and compare these solutions, determine the well-investigated areas, and identify gaps. In this paper, we investigate the state of the art of NS dependability in the context of NFV and survey existing solutions. We define a taxonomy that represents the different aspects of NS dependability and their relations in this context. We analyze 102 papers published in the past 5 years and map them to the proposed taxonomy to understand what each solution addresses and how. We identify the gaps and discuss potential future work directions.
{"title":"Dependability of Network Services in the Context of NFV: A Taxonomy and State of the Art Classification","authors":"Siamak Azadiabad, Ferhat Khendek","doi":"10.1007/s10922-024-09810-2","DOIUrl":"https://doi.org/10.1007/s10922-024-09810-2","url":null,"abstract":"<p>In the context of Network Function Virtualization (NFV), Network Services (NS) are realized by interconnecting Virtual Network Functions (VNF) using Virtual Links (VL). Service dependability is an important characteristic of NSs. Service dependability includes service availability, reliability, and continuity attributes. The NFV framework brings new challenges to NS dependability because of the resource-sharing possibility, NS and VNF elasticity, and the dynamicity in the NFV environment. There has been a lot of work on the dependability of NSs in the context of NFV. Existing works address different dependability attributes and use different methods and mechanisms in the proposed solutions. Thus, it is difficult to evaluate and compare these solutions, determine the well-investigated areas, and identify gaps. In this paper, we investigate the state of the art for NS dependability in the context of NFV and survey existing solutions. We define a taxonomy that represents the different aspects of NS dependability and their relations in this context. We analyze 102 papers published in the past 5 years and map them to our proposed taxonomy to understand what is addressed in the proposed solution and how. We identify the gaps and discuss potential future work directions.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"358 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140303051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Impact of Data Preprocessing Techniques on the Performance of Intrusion Detection Systems
Pub Date: 2024-03-22 | DOI: 10.1007/s10922-024-09813-z
Kelson Carvalho Santos, Rodrigo Sanches Miani, Flávio de Oliveira Silva
The development of Intrusion Detection Systems using Machine Learning techniques (ML-based IDS) has emerged as an important research topic in the cybersecurity field. However, there is a noticeable absence of systematic studies on the usability of such systems in real-world applications. This paper analyzes the impact of data preprocessing techniques on the performance of ML-based IDS using two public datasets, UNSW-NB15 and CIC-IDS2017. Specifically, we evaluate the effects of data cleaning, encoding, and normalization techniques on the performance of binary and multiclass intrusion detection models, and how different ML-based IDS are affected by these techniques. To this end, we implemented a machine learning pipeline that applies the data preprocessing techniques in different scenarios. The findings, analyzed using the Friedman statistical test and the Nemenyi post-hoc test, reveal significant differences among groups of data preprocessing techniques and among ML-based IDS according to the evaluation metrics; for data preprocessing techniques, however, these differences were not observed in multiclass scenarios. Additionally, ML-based IDS exhibited varying performance in binary and multiclass classification. Our investigation thus presents insights into the efficacy of different data preprocessing techniques for building robust and accurate intrusion detection models.
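A minimal version of such a preprocessing-plus-classifier pipeline, followed by a Friedman test over per-fold scores, might look as follows. The synthetic flow records, the chosen transformers, and the classifier are illustrative, not the paper's exact configurations:

```python
import pandas as pd
from scipy.stats import friedmanchisquare
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny synthetic stand-in for UNSW-NB15 / CIC-IDS2017 flow records.
df = pd.DataFrame({
    "duration": [0.1, 2.3, 0.5, 4.2, 0.2, 3.1, 0.4, 2.8] * 10,
    "bytes": [100, 9000, 300, 12000, 150, 8000, 250, 11000] * 10,
    "proto": ["tcp", "udp", "tcp", "tcp", "udp", "tcp", "udp", "tcp"] * 10,
    "label": [0, 1, 0, 1, 0, 1, 0, 1] * 10,
})
X, y = df.drop(columns="label"), df["label"]

# Encoding + normalization steps as in the evaluation (cleaning elided).
prep = ColumnTransformer([
    ("num", StandardScaler(), ["duration", "bytes"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["proto"]),
])
pipe = Pipeline([("prep", prep),
                 ("clf", RandomForestClassifier(random_state=0))])
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
print("F1 per fold:", scores.round(3))

# Friedman test across >= 3 competing configurations' per-fold scores
# (the two extra score vectors are fabricated purely to show the call).
print(friedmanchisquare(scores, scores * 0.98, scores * 0.95))
```

Swapping the transformers in `prep` (e.g., a different scaler or encoder) and re-running the cross-validation is exactly the kind of scenario comparison the statistical tests then rank.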
{"title":"Evaluating the Impact of Data Preprocessing Techniques on the Performance of Intrusion Detection Systems","authors":"Kelson Carvalho Santos, Rodrigo Sanches Miani, Flávio de Oliveira Silva","doi":"10.1007/s10922-024-09813-z","DOIUrl":"https://doi.org/10.1007/s10922-024-09813-z","url":null,"abstract":"<p>The development of Intrusion Detection Systems using Machine Learning techniques (ML-based IDS) has emerged as an important research topic in the cybersecurity field. However, there is a noticeable absence of systematic studies to comprehend the usability of such systems in real-world applications. This paper analyzes the impact of data preprocessing techniques on the performance of ML-based IDS using two public datasets, UNSW-NB15 and CIC-IDS2017. Specifically, we evaluated the effects of data cleaning, encoding, and normalization techniques on the performance of binary and multiclass intrusion detection models. This work investigates the impact of data preprocessing techniques on the performance of ML-based IDS and how the performance of different ML-based IDS is affected by data preprocessing techniques. To this end, we implemented a machine learning pipeline to apply the data preprocessing techniques in different scenarios to answer such questions. The findings analyzed using the Friedman statistical test and Nemenyi post-hoc test revealed significant differences in groups of data preprocessing techniques and ML-based IDS, according to the evaluation metrics. However, these differences were not observed in multiclass scenarios for data preprocessing techniques. Additionally, ML-based IDS exhibited varying performances in binary and multiclass classifications. Therefore, our investigation presents insights into the efficacy of different data preprocessing techniques for building robust and accurate intrusion detection models.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"12 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140202089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DDT: A Reinforcement Learning Approach to Dynamic Flow Timeout Assignment in Software Defined Networks
Pub Date: 2024-03-21 | DOI: 10.1007/s10922-024-09815-x
Nathan Harris, Sajad Khorsandroo
OpenFlow-compliant commodity switches face challenges in efficiently managing flow rules due to the limited capacity of expensive high-speed memories used to store them. The accumulation of inactive flows can disrupt ongoing communication, necessitating an optimized approach to flow rule timeouts. This paper proposes Delayed Dynamic Timeout (DDT), a Reinforcement Learning-based approach to dynamically adjust flow rule timeouts and enhance the utilization of a switch’s flow table(s) for improved efficiency. Despite the dynamic nature of network traffic, our DDT algorithm leverages advancements in Reinforcement Learning algorithms to adapt and achieve flow-specific optimization objectives. The evaluation results demonstrate that DDT outperforms static timeout values in terms of both flow rule match rate and flow rule activity. By continuously adapting to changing network conditions, DDT showcases the potential of Reinforcement Learning algorithms to effectively optimize flow rule management. This research contributes to the advancement of flow rule optimization techniques and highlights the feasibility of applying Reinforcement Learning in the context of SDN.
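Since the abstract does not spell out DDT's exact formulation, the sketch below uses plain tabular Q-learning over a small set of candidate idle timeouts, with a toy reward that trades match rate against flow-table occupancy, purely to illustrate the RL framing:

```python
import random

random.seed(0)
TIMEOUTS = [1, 2, 5, 10, 30]   # candidate idle timeouts in seconds
ACTIONS = [-1, 0, +1]          # step the timeout down / keep / step up
Q = {(s, a): 0.0 for s in range(len(TIMEOUTS)) for a in ACTIONS}

def reward(timeout):
    # Toy environment: longer timeouts catch more re-arriving packets
    # (higher match rate) but occupy scarce TCAM entries (penalty).
    match_gain = 1.0 - 1.0 / (1 + timeout)
    table_cost = 0.03 * timeout
    return match_gain - table_cost + random.gauss(0, 0.01)

def step(state, alpha=0.1, gamma=0.9, eps=0.1):
    # Epsilon-greedy action choice, then a standard Q-learning update.
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(state, x)])
    nxt = min(max(state + a, 0), len(TIMEOUTS) - 1)
    r = reward(TIMEOUTS[nxt])
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                              - Q[(state, a)])
    return nxt

s = 2
for _ in range(5000):
    s = step(s)
print("learned timeout:", TIMEOUTS[s], "s")
```

Under this toy reward the agent settles near the middle timeout, mirroring the idea that a dynamic policy can sit between "too short, flows expire mid-conversation" and "too long, dead entries hog the table".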
{"title":"DDT: A Reinforcement Learning Approach to Dynamic Flow Timeout Assignment in Software Defined Networks","authors":"Nathan Harris, Sajad Khorsandroo","doi":"10.1007/s10922-024-09815-x","DOIUrl":"https://doi.org/10.1007/s10922-024-09815-x","url":null,"abstract":"<p>OpenFlow-compliant commodity switches face challenges in efficiently managing flow rules due to the limited capacity of expensive high-speed memories used to store them. The accumulation of inactive flows can disrupt ongoing communication, necessitating an optimized approach to flow rule timeouts. This paper proposes Delayed Dynamic Timeout (DDT), a Reinforcement Learning-based approach to dynamically adjust flow rule timeouts and enhance the utilization of a switch’s flow table(s) for improved efficiency. Despite the dynamic nature of network traffic, our DDT algorithm leverages advancements in Reinforcement Learning algorithms to adapt and achieve flow-specific optimization objectives. The evaluation results demonstrate that DDT outperforms static timeout values in terms of both flow rule match rate and flow rule activity. By continuously adapting to changing network conditions, DDT showcases the potential of Reinforcement Learning algorithms to effectively optimize flow rule management. This research contributes to the advancement of flow rule optimization techniques and highlights the feasibility of applying Reinforcement Learning in the context of SDN.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"163 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140202088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Discrete Adaptive Lion Optimization Algorithm for QoS-Driven IoT Service Composition with Global Constraints
Pub Date: 2024-03-13 | DOI: 10.1007/s10922-024-09808-w
Souhila Ait Hacène Ouhadda, Samia Chibani Sadouki, Achour Achroufene, Abdelkamel Tari
In an Internet of Things (IoT) environment, multiple objects usually interact with one another to meet a user's complex request. This involves the composition of several atomic IoT services. Given a large number of functionally equivalent services with different Quality of Service (QoS) values, the service composition problem remains one of the main challenges in IoT environments. This paper presents a Discrete Adaptive Lion Optimization Algorithm (DALOA) to select IoT services in a composition process while considering global user QoS constraints. DALOA is based on the Lion Optimization Algorithm (LOA) and combines several LOA operators, such as roaming, mating, and migration. First, DALOA divides the initial population into two sub-populations, pride and nomad, each with its own search strategies. Second, the roaming nomad process follows a random search mode (strong exploration) to avoid being trapped in local optima. Third, the roaming pride search mode performs intensive local search, ensuring more efficient exploitation. Fourth, mating (within the pride and among nomads) allows information sharing between members of the same population. Finally, the migration operator ensures population diversity by allowing information sharing between the pride and the nomads. Simulation results show that DALOA obtains the best compositional optimality and finds a near-optimal composition of IoT services in reasonable execution time compared to other approaches. Indeed, the combination of these operators provides a good trade-off between exploration and exploitation.
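A greatly simplified sketch of an LOA-style search for QoS-constrained composition follows. The QoS values, latency budget, operator probabilities, and the soft-penalty constraint handling are illustrative assumptions; mating and DALOA's adaptive mechanisms are omitted:

```python
import random

random.seed(3)
N_TASKS, N_CANDS = 5, 6
# Illustrative (quality, latency) per candidate service of each task.
qos = [[(random.uniform(0, 1), random.uniform(1, 5)) for _ in range(N_CANDS)]
       for _ in range(N_TASKS)]
LATENCY_BUDGET = 12.0  # global QoS constraint on total latency

def fitness(comp):
    quality = sum(qos[t][c][0] for t, c in enumerate(comp))
    latency = sum(qos[t][c][1] for t, c in enumerate(comp))
    penalty = max(0.0, latency - LATENCY_BUDGET)  # soft constraint handling
    return quality - penalty

def random_comp():
    # One candidate service index per abstract task.
    return [random.randrange(N_CANDS) for _ in range(N_TASKS)]

def loa_style_search(iters=300, pride_n=8, nomad_n=4):
    pride = [random_comp() for _ in range(pride_n)]
    nomads = [random_comp() for _ in range(nomad_n)]
    best = max(pride + nomads, key=fitness)[:]
    for _ in range(iters):
        # Roaming pride: small moves around the current best (exploitation).
        for lion in pride:
            t = random.randrange(N_TASKS)
            lion[t] = best[t] if random.random() < 0.5 else random.randrange(N_CANDS)
        # Roaming nomads: fully random re-sampling (strong exploration).
        nomads = [random_comp() for _ in range(nomad_n)]
        # Migration: the best nomad replaces the worst pride member.
        pride.sort(key=fitness)
        pride[0] = max(nomads, key=fitness)[:]
        best = max(pride + [best], key=fitness)[:]
    return best, fitness(best)

print(loa_style_search())
```

Even this stripped-down version shows the division of labor the abstract describes: the pride intensifies around good compositions while the nomads keep injecting diversity through the migration step.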
{"title":"A Discrete Adaptive Lion Optimization Algorithm for QoS-Driven IoT Service Composition with Global Constraints","authors":"Souhila Ait Hacène Ouhadda, Samia Chibani Sadouki, Achour Achroufene, Abdelkamel Tari","doi":"10.1007/s10922-024-09808-w","DOIUrl":"https://doi.org/10.1007/s10922-024-09808-w","url":null,"abstract":"<p>In an Internet of Things (IoT) environment, multiple objects usually interact with one another to meet a complex user’s request. This involves the composition of several atomic IoT services. Given a large number of functionally equivalent services with different Quality of Service (QoS) values, the service composition problem remains one of the main challenges in IoT environments. This paper presents a Discrete Adaptive Lion Optimization Algorithm (DALOA) to select IoT services in a composition process while considering global user QoS constraints. DALOA is based on the Lion Optimization Algorithm (LOA) and developed by combining several LOA operators, such as roaming, mating, and migration. First, DALOA divides the initial population into two sub-populations: pride and nomad, and each sub-population has its search strategies. Second, the roaming nomad process follows a random searching mode (strong exploration) to avoid being trapped in local optima. Third, the roaming pride searching mode represents strong local research, ensuring more efficient exploitation. Four, mating (mating pride, mating nomad) allows for information sharing between members of the same population. Finally, the migration operator is used to ensure population diversity by allowing information sharing between the pride and the nomad. The simulation results show that DALOA obtains the best compositional optimality and finds the near-optimal composition of the IoT services in a reasonable execution time compared to other approaches. Indeed, the combination of the previous operators provides a good trade-off between exploration and exploitation.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"10 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140115897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}