Title: Reinforcement Learning for Real-Time Federated Learning for Resource-Constrained Edge Cluster
Authors: Kolichala Rajashekar, Souradyuti Paul, Sushanta Karmakar, Subhajit Sidhanta
Pub Date: 2024-09-13 | DOI: 10.1007/s10922-024-09857-1

For performing various predictive analytics tasks in real-time mission-critical applications, Federated Learning (FL) has emerged as the go-to machine learning paradigm for its ability to run machine learning workloads on resource-constrained edge devices. For such FL applications working under stringent deadlines, the overall local training time needs to be minimized; it consists of the retrieval delay, i.e., the delay in fetching data from the IoT devices to the FL clients, as well as the time consumed in training the local models. Since the latter component is mostly uniform among the FL clients, the retrieval delay must be minimized to reduce the local training time. To that end, we formulate the Client Assignment Problem (CAP) as an intelligent assignment of selected IoT devices to each FL client such that the FL client may retrieve training data from these IoT devices with minimal retrieval delay. CAP must perform assignments for each FL client considering its relative distances from each IoT device, so that no FL client experiences an arbitrarily large retrieval delay in fetching data from a remotely placed IoT device. We prove that CAP is NP-hard, and as such, obtaining a polynomial-time exact solution is infeasible; such problems are therefore typically tackled with heuristics, which offer no optimality guarantees. To deal with the challenges faced by such heuristic approaches, we propose Deep Reinforcement Learning-based algorithms that produce near-optimal solutions to CAP. We demonstrate that our algorithms outperform the state of the art in reducing the local training time while producing a near-optimal solution.
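The paper's DRL algorithms are not detailed in this abstract. As a point of reference only, a minimal greedy baseline for CAP can be sketched: assign each IoT device to the nearest FL client that still has spare capacity. The delay matrix, the capacity limit, and all names below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def greedy_cap_assignment(delay, capacity):
    """Greedy baseline for CAP: each IoT device goes to the FL client
    with the smallest retrieval delay that still has spare capacity.

    delay[i, j] -- assumed retrieval delay of FL client j fetching from
                   IoT device i (e.g., proportional to their distance).
    capacity[j] -- hypothetical limit on devices served by FL client j.
    """
    n_devices, n_clients = delay.shape
    load = np.zeros(n_clients, dtype=int)
    assignment = [-1] * n_devices
    # Place the hardest-to-serve devices (largest best-case delay) first.
    for i in np.argsort(delay.min(axis=1))[::-1]:
        for j in np.argsort(delay[i]):           # clients, nearest first
            if load[j] < capacity[j]:
                assignment[i] = int(j)
                load[j] += 1
                break
    return assignment

# Toy instance: 5 IoT devices, 2 FL clients, delays in milliseconds.
rng = np.random.default_rng(0)
delays = rng.uniform(1.0, 10.0, size=(5, 2))
print(greedy_cap_assignment(delays, capacity=np.array([3, 3])))
```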
{"title":"Reinforcement Learning for Real-Time Federated Learning for Resource-Constrained Edge Cluster","authors":"Kolichala Rajashekar, Souradyuti Paul, Sushanta Karmakar, Subhajit Sidhanta","doi":"10.1007/s10922-024-09857-1","DOIUrl":"https://doi.org/10.1007/s10922-024-09857-1","url":null,"abstract":"<p>For performing various predictive analytics tasks for real-time mission-critical applications, Federated Learning (FL) have emerged as the go-to machine learning paradigm for its ability to leverage perform machine learning workloads on resource-constrained edge devices. For such FL applications working under stringent deadlines, the overall <i>local training time</i> needs to be minimized, which consists of the <i>retrieval delay</i>, i.e., the delay in fetching the data from the IoT devices to the FL clients as well as the time consumed in training the local models. Since the latter component is mostly uniform among the FL clients, we have to minimize the retrieval delay to reduce the local training time. To that end, we formulate the Client Assignment Problem (CAP) as an intelligent assignment of selected IoT devices to each FL client such that the FL client may retrieve training data from these IoT devices with minimal retrieval delay. CAP must perform assignments for each FL client considering its relative distances from each IoT device such that each FL client does not experience an arbitrarily large retrieval delay in fetching data from a remotely placed IoT device. We prove that CAP is NP-Hard, and as such, obtaining a polynomial time solution to CAP is infeasible. To deal with the challenges faced by such heuristics approaches, we propose Deep Reinforcement Learning-based algorithms to produce near-optimal solution to CAP. We demonstrate that our algorithms outperform the state of the art in reducing the local training time, while producing a near-optimal solution.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"1 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Availability and Performance Assessment of IoMT Systems: A Stochastic Modeling Approach
Authors: Thiago Valentim, Gustavo Callou, Cleunio França, Eduardo Tavares
Pub Date: 2024-09-13 | DOI: 10.1007/s10922-024-09868-y
Internet of Things (IoT) allows distinct elements of an environment to be remotely monitored using existing network infrastructures, enabling the integration of disparate computing systems. Such integration commonly results in efficient data collection and processing. Indeed, the adoption of IoT can improve communication in gathering and transmitting data, especially in locations that deal with connectivity challenges. For instance, hospitals have adopted IoT to collect and transmit patient data to health professionals, as critical patients must be monitored uninterruptedly. As a consequence, healthcare systems typically require high availability, in which connectivity is essential for critical medical decisions. Some works have conceived techniques to assess the availability of Internet of Medical Things (IoMT) systems, but the joint assessment of performance and availability is generally neglected. This paper presents a modeling approach based on stochastic Petri nets (SPN) and reliability block diagrams (RBD) to evaluate IoMT systems. The proposed technique evaluates availability and response time of the communication between devices in an IoMT architecture. Experimental results show the practical feasibility of the proposed approach, in which a sensitivity analysis is adopted to indicate the components with the most significant impact on the system operation. Our approach contributes to the state of the art as an additional technique to evaluate different system designs before modifying or implementing the real system or a prototype.
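As a toy illustration of the RBD half of such a model (the SPN half is not reproduced here), steady-state availability follows from MTBF and MTTR, with series blocks multiplying and redundant blocks composing in parallel. The component figures below are invented, not taken from the paper.

```python
from math import prod

def availability(mtbf_h, mttr_h):
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

def series(*blocks):        # every block must be up
    return prod(blocks)

def parallel(*blocks):      # at least one redundant block must be up
    return 1 - prod(1 - a for a in blocks)

# Invented component figures (hours): sensor, gateway, cloud link.
sensor = availability(4000, 8)
gateway = availability(6000, 12)
cloud = availability(8760, 4)
# Sensor and cloud link in series with a redundant gateway pair:
system = series(sensor, parallel(gateway, gateway), cloud)
print(f"system availability ~ {system:.6f}")
```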
{"title":"Availability and Performance Assessment of IoMT Systems: A Stochastic Modeling Approach","authors":"Thiago Valentim, Gustavo Callou, Cleunio França, Eduardo Tavares","doi":"10.1007/s10922-024-09868-y","DOIUrl":"https://doi.org/10.1007/s10922-024-09868-y","url":null,"abstract":"<p>Internet of Things (IoT) allows distinct elements of an environment to be remotely monitored using existing network infrastructures, creating a prominent integration of disparate computing systems. Such an integration commonly results in efficient data collection and processing. Indeed, the adoption of IoT can improve communication in gathering and transmitting data, especially in locations that deal with connectivity challenges. For instance, hospitals have adopted IoT to collect and transmit patient data to health professionals, as critical patients must be monitored uninterruptedly. As a consequence, healthcare systems typically require high availability, in which connectivity is essential for critical medical decisions. Some works have conceived techniques to assess the availability of Internet of Medical Things (IoMT) systems, but the joint assessment of performance and availability is generally neglected. This paper presents a modeling approach based on stochastic Petri nets (SPN) and reliability block diagrams (RBD) to evaluate IoMT systems. The proposed technique evaluates availability and response time of the communication between devices in an IoMT architecture. Experimental results show the practical feasibility of the proposed approach, in which a sensitivity analysis is adopted to indicate the components with the most significant impact on the system operation. Our approach contributes to the state of the art as an additional technique to evaluate different system designs before modifying or implementing the real system or a prototype.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"2 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142257611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Generative Adversarial Network Models for Anomaly Detection in Software-Defined Networks
Authors: Alexandro Marcelo Zacaron, Daniel Matheus Brandão Lent, Vitor Gabriel da Silva Ruffo, Luiz Fernando Carvalho, Mario Lemes Proença
Pub Date: 2024-09-12 | DOI: 10.1007/s10922-024-09867-z
Software-defined Networking (SDN) is a modern network management paradigm that decouples the data and control planes. The centralized control plane offers comprehensive control and orchestration over the network infrastructure. Although SDN provides better control over traffic flow, ensuring network security and service availability remains challenging. This paper presents an anomaly-based intrusion detection system (IDS) for monitoring and securing SDN networks. The system utilizes deep learning models to identify anomalous traffic behavior. When an anomaly is detected, a mitigation module blocks suspicious communications and restores the network to its normal state. Three versions of the proposed solution were implemented and compared: the traditional Generative Adversarial Network (GAN), Deep Convolutional GAN (DCGAN), and Wasserstein GAN with Gradient Penalty (WGAN-GP). These models were incorporated into the system’s detection structure and tested on two benchmark datasets. The first is emulated, and the second is the well-known CICDDoS2019 dataset. The results indicate that the IDS adequately identified potential threats, regardless of the deep learning algorithm. Although the traditional GAN is a simpler model, it could still efficiently detect when the network was under attack and was considerably faster than the other models. Additionally, the employed mitigation strategy successfully dropped over 89% of anomalous flows in the emulated dataset and over 99% in the public dataset, preventing the effects of the threats from being accentuated and jeopardizing the proper functioning of the SDN network.
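A minimal sketch of the detection idea common to all three GAN variants: after adversarial training on normal traffic, the discriminator's score for a traffic-feature window is thresholded to flag anomalies and trigger mitigation. The architecture, feature count, and threshold below are assumptions, and training is omitted.

```python
import torch
import torch.nn as nn

N_FEATURES = 6   # assumed per-window traffic features (bytes/s, packets/s, ...)

# Vanilla-GAN discriminator: after training against a generator on normal
# traffic, its output estimates how "normal" a feature window looks.
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.LeakyReLU(0.2),
    nn.Linear(32, 16), nn.LeakyReLU(0.2),
    nn.Linear(16, 1), nn.Sigmoid(),
)

def is_anomalous(window: torch.Tensor, threshold: float = 0.5) -> bool:
    """Flag windows whose normality score falls below the threshold."""
    with torch.no_grad():
        return discriminator(window).item() < threshold

window = torch.rand(N_FEATURES)                  # placeholder feature vector
if is_anomalous(window):
    print("anomaly -> pass flow to mitigation module (e.g., install drop rule)")
```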
{"title":"Generative Adversarial Network Models for Anomaly Detection in Software-Defined Networks","authors":"Alexandro Marcelo Zacaron, Daniel Matheus Brandão Lent, Vitor Gabriel da Silva Ruffo, Luiz Fernando Carvalho, Mario Lemes Proença","doi":"10.1007/s10922-024-09867-z","DOIUrl":"https://doi.org/10.1007/s10922-024-09867-z","url":null,"abstract":"<p>Software-defined Networking (SDN) is a modern network management paradigm that decouples the data and control planes. The centralized control plane offers comprehensive control and orchestration over the network infrastructure. Although SDN provides better control over traffic flow, ensuring network security and service availability remains challenging. This paper presents an anomaly-based intrusion detection system (IDS) for monitoring and securing SDN networks. The system utilizes deep learning models to identify anomalous traffic behavior. When an anomaly is detected, a mitigation module blocks suspicious communications and restores the network to its normal state. Three versions of the proposed solution were implemented and compared: the traditional Generative Adversarial Network (GAN), Deep Convolutional GAN (DCGAN), and Wasserstein GAN with Gradient Penalty (WGAN-GP). These models were incorporated into the system’s detection structure and tested on two benchmark datasets. The first is emulated, and the second is the well-known CICDDoS2019 dataset. The results indicate that the IDS adequately identified potential threats, regardless of the deep learning algorithm. Although the traditional GAN is a simpler model, it could still efficiently detect when the network was under attack and was considerably faster than the other models. Additionally, the employed mitigation strategy successfully dropped over 89% of anomalous flows in the emulated dataset and over 99% in the public dataset, preventing the effects of the threats from being accentuated and jeopardizing the proper functioning of the SDN network.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"42 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Attack Detection in IoT Network Using Support Vector Machine and Improved Feature Selection Technique
Authors: Noura Ben Henda, Amina Msolli, Imen Haggui, Abdelhamid Helali, Hassen Maaref
Pub Date: 2024-09-12 | DOI: 10.1007/s10922-024-09871-3

As a result of the rapid advancement of technology, the Internet of Things (IoT) has emerged as an essential research area, enabling linked devices to collect and exchange data over a network without the need for human interaction. However, these interconnected devices often encounter challenges related to data security, encompassing confidentiality, integrity, availability, authentication, and privacy, particularly when facing potential intruders. Addressing this concern, our study proposes a novel host-based intrusion detection system grounded in machine learning. Our approach incorporates a feature selection (FS) technique based on the correlation between features and a ranking function utilizing Support Vector Machine (SVM). The experimentation, conducted on the NSL-KDD dataset, demonstrates the efficacy of our methodology. The results showcase superiority over comparable approaches in both binary and multi-class classification scenarios, achieving remarkable accuracy rates of 99.094% and 99.11%, respectively. This underscores the potential of our proposed system in enhancing security measures for IoT devices.
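The exact FS procedure is not given in the abstract. A plausible sketch of the described pipeline, using scikit-learn, filters highly correlated features and then ranks the remainder with a linear-SVM-based recursive elimination; the synthetic data, correlation threshold, and feature count are placeholders standing in for NSL-KDD specifics.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for NSL-KDD (binary: normal vs. attack).
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(500, 20)),
                 columns=[f"f{i}" for i in range(20)])
y = rng.integers(0, 2, size=500)

# Step 1 -- correlation filter: drop one of every feature pair whose
# absolute Pearson correlation exceeds an assumed 0.9 threshold.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X = X.drop(columns=[c for c in upper.columns if (upper[c] > 0.9).any()])

# Step 2 -- rank surviving features with a linear SVM via recursive
# feature elimination, then train the final classifier.
selector = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X, y)
X_sel = X.loc[:, selector.support_]
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
print("accuracy:", SVC(kernel="linear").fit(X_tr, y_tr).score(X_te, y_te))
```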
{"title":"Attack Detection in IoT Network Using Support Vector Machine and Improved Feature Selection Technique","authors":"Noura Ben Henda, Amina Msolli, Imen Haggui, Abdelhamid Helali, Hassen Maaref","doi":"10.1007/s10922-024-09871-3","DOIUrl":"https://doi.org/10.1007/s10922-024-09871-3","url":null,"abstract":"<p>As a result of the rapid advancement of technology, the Internet of Things (IoT) has emerged as an essential research question, capable of collecting and sending data through a network between linked items without the need for human interaction. However, these interconnected devices often encounter challenges related to data security, encompassing aspects of confidentiality, integrity, availability, authentication, and privacy, particularly when facing potential intruders. Addressing this concern, our study propose a novel host-based intrusion detection system grounded in machine learning. Our approach incorporates a feature selection (FS) technique based on the correlation between features and a ranking function utilizing Support Vector Machine (SVM). The experimentation, conducted on the NSL-KDD dataset, demonstrates the efficacy of our methodology. The results showcase superiority over comparable approaches in both binary and multi-class classification scenarios, achieving remarkable accuracy rates of 99.094% and 99.11%, respectively. This underscores the potential of our proposed system in enhancing security measures for IoT devices.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"13 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Decentralized Distance-based Strategy for Detection of Sybil Attackers and Sybil Nodes in VANET
Authors: P. Remya krishnan, Ritesh Koushik
Pub Date: 2024-09-10 | DOI: 10.1007/s10922-024-09869-x
Rapid development and deployment of VANET necessitate solutions to support its safe applications. One of the major threats to VANET is the Sybil attack, which uses numerous fake identities to spread misleading information around the network, resulting in traffic jams, accidents, and theft. Most existing solutions to Sybil attacks in VANET concentrate mainly on detecting the presence of Sybil attacks and identifying the virtual Sybil nodes. Though solutions exist to detect the Sybil attacker node, attack scenarios in which multiple Sybil attackers collaboratively generate Sybil nodes remain a research gap that needs to be addressed effectively. In this paper, we concentrate on detecting multiple Sybil attackers and the Sybil nodes they generate in VANET using a decentralized, distance-based strategy. In addition to verifying the performance of the proposed technique through simulation, we evaluate it in a real-time VANET test-bed environment to verify its practical applicability.
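The abstract does not spell out the strategy itself. One common distance-based intuition — assumed here for illustration, not taken from the paper — is that Sybil identities fabricated by a single attacker present nearly identical distance profiles to surrounding observers. A sketch under that assumption:

```python
import numpy as np

def find_sybil_groups(profiles, tol=1.0):
    """Group identities whose distance profiles nearly coincide.

    profiles -- {node_id: np.array of estimated distances (m) to a common
                 set of observers}, e.g., derived from RSSI measurements.
    Identities sharing one physical radio should share one profile.
    """
    ids, groups, used = list(profiles), [], set()
    for i, a in enumerate(ids):
        if a in used:
            continue
        group = [a] + [b for b in ids[i + 1:] if b not in used and
                       np.linalg.norm(profiles[a] - profiles[b]) < tol]
        used.update(group)
        if len(group) > 1:          # several IDs, one physical location
            groups.append(group)
    return groups

profiles = {"v1": np.array([10.2, 35.1, 22.4]),
            "v2": np.array([10.4, 35.0, 22.6]),   # suspiciously close to v1
            "v3": np.array([48.9, 12.3, 30.8])}
print(find_sybil_groups(profiles))                # -> [['v1', 'v2']]
```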
{"title":"Decentralized Distance-based Strategy for Detection of Sybil Attackers and Sybil Nodes in VANET","authors":"P. Remya krishnan, Ritesh Koushik","doi":"10.1007/s10922-024-09869-x","DOIUrl":"https://doi.org/10.1007/s10922-024-09869-x","url":null,"abstract":"<p>Rapid development and deployment of VANET necessitate solutions to support its safe applications. One of the major threats to VANET is the Sybil attack that uses numerous fake identities to spread misleading information around the network, resulting in traffic jams, accidents, and theft. Most of the existing solutions to Sybil attacks in VANET concentrate mainly on detecting the presence of Sybil attacks and identifying the virtual Sybil nodes. Though solutions exist to detect the Sybil attacker node, the attack scenarios where multiple Sybil attackers generate the Sybil nodes collaboratively persist as a research gap that needs to be addressed effectively. In this paper, we concentrate on detecting multiple Sybil attackers and the Sybil nodes generated in VANET using a decentralized, distance-based strategy. Despite the performance verification of the proposed technique using simulation, in this paper, we evaluate it in a real-time test-bed environment of VANET to verify its practical applicability.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"11 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Tri-objective Optimization for Large-Scale Workflow Scheduling and Execution in Clouds
Authors: Huda Alrammah, Yi Gu, Daqing Yun, Ning Zhang
Pub Date: 2024-09-06 | DOI: 10.1007/s10922-024-09863-3
Cloud computing has become the most popular distributed paradigm, with massive computing resources and a large data storage capacity to run large-scale scientific workflow applications without the need to own any infrastructure. Scheduling workflows in a distributed system is a well-known NP-complete problem, which has become even more challenging with a dynamic and heterogeneous pool of resources in a cloud computing platform. The aim of this work is to design efficient and effective scheduling algorithms for multi-objective optimization of large-scale scientific workflows in cloud environments. We propose two novel genetic algorithm (GA)-based scheduling algorithms to assign workflow tasks to different cloud resources in order to simultaneously optimize makespan, monetary cost, and energy consumption. One is multi-objective optimization for makespan, cost and energy (MOMCE), which combines the strengths of two widely adopted solutions for multi-objective optimization problems, genetic algorithms and particle swarm optimization. The other is Pareto dominance for makespan, cost and energy (PDMCE), which is based on a genetic algorithm and non-dominated solutions to achieve better convergence and a uniform distribution of the approximate Pareto front. The proposed solutions are evaluated on an extensive set of workflow applications and cloud environments, and compared with existing methods in the literature to show their performance stability and superiority. We also conduct performance evaluation and comparison between MOMCE and PDMCE for different criteria.
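Neither algorithm is specified in the abstract. The sketch below shows the usual GA ingredients for this problem: a chromosome mapping tasks to VMs, a tri-objective fitness (makespan, cost, energy), and the Pareto-dominance test that PDMCE-style non-dominated selection relies on. The runtime, price, and power figures are invented placeholders.

```python
import numpy as np

# Toy model (invented): runtime[t, v] = hours of task t on VM v,
# price[v] = $/hour, power[v] = kW while busy. Task dependencies ignored.
runtime = np.array([[1.0, 0.5], [2.0, 1.2], [0.8, 0.6]])
price = np.array([0.10, 0.25])
power = np.array([0.30, 0.50])

def evaluate(chrom):
    """(makespan, cost, energy) of a task->VM chromosome."""
    busy = np.zeros(len(price))
    for task, vm in enumerate(chrom):
        busy[vm] += runtime[task, vm]
    return busy.max(), float(busy @ price), float(busy @ power)

def dominates(a, b):
    """Pareto dominance: a is no worse on every objective, better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

f1, f2 = evaluate([0, 1, 1]), evaluate([0, 0, 0])
print(f1, f2, dominates(f1, f2))   # trade-off: faster but costlier -> False
```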
{"title":"Tri-objective Optimization for Large-Scale Workflow Scheduling and Execution in Clouds","authors":"Huda Alrammah, Yi Gu, Daqing Yun, Ning Zhang","doi":"10.1007/s10922-024-09863-3","DOIUrl":"https://doi.org/10.1007/s10922-024-09863-3","url":null,"abstract":"<p>Cloud computing has become the most popular distributed paradigm with massive computing resources and a large data storage capacity to run large-scale scientific workflow applications without the need to own any infrastructure. Scheduling workflows in a distributed system is a well-known NP-complete problem, which has become even more challenging with a dynamic and heterogeneous pool of resources in a cloud computing platform. The aim of this work is to design efficient and effective scheduling algorithms for multi-objective optimization of large-scale scientific workflows in cloud environments. We propose two novel genetic algorithm (GA)-based scheduling algorithms to assign workflow tasks to different cloud resources in order to simultaneously optimize makespan, monetary cost, and energy consumption. One is multi-objective optimization for makespan, cost and energy (MOMCE), which combines the strengths of two widely adopted solutions, genetic algorithm and particle swarm optimization, for multi-objective optimization problems. The other is pareto dominance for makespan, cost and energy (PDMCE), which is based on genetic algorithm and non-dominated solutions to achieve a better convergence and a uniform distribution of the approximate Pareto front. The proposed solutions are evaluated by an extensive set of different workflow applications and cloud environments, and compared with other existing methods in the literature to show the performance stability and superiority. We also conduct performance evaluation and comparison between MOMCE and PDMCE for different criteria.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"4 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Empowering UAV Communications with AI-Assisted Software-Defined Networks: A Review on Performance, Security, and Efficiency
Authors: Mohamed Amine Ould Rabah, Hamza Drid, Mohamed Rahouti, Nadjib Lazaar
Pub Date: 2024-09-06 | DOI: 10.1007/s10922-024-09866-0
Intelligent software-defined network (SDN) in unmanned aerial vehicles (UAVs) is an emerging research area to enhance UAV communication networks’ performance, security, and efficiency. By incorporating artificial intelligence (AI) and machine learning (ML) algorithms, SDN-based UAV networks enable real-time decision-making, proactive network management, and dynamic resource allocation. These advancements improve network performance, reduce latency, and enhance network efficiency. Moreover, AI-based security mechanisms can swiftly detect and mitigate potential threats, bolstering UAV networks’ overall security. Integrating intelligent SDN in UAVs holds tremendous potential for revolutionizing the UAV communication and networking field. This paper comprehensively discusses the solutions available for UAV-based intelligent SDNs. It provides an in-depth exploration of UAVs and SDNs and presents a comprehensive analysis of the evolution from traditional networking environments to UAV-based SDN environments. Our research primarily focuses on UAV communication’s performance, security, latency, and efficiency. It includes a taxonomy, comparison, and analysis of existing ML solutions specifically designed for UAV-based SDNs.
{"title":"Empowering UAV Communications with AI-Assisted Software-Defined Networks: A Review on Performance, Security, and Efficiency","authors":"Mohamed Amine Ould Rabah, Hamza Drid, Mohamed Rahouti, Nadjib Lazaar","doi":"10.1007/s10922-024-09866-0","DOIUrl":"https://doi.org/10.1007/s10922-024-09866-0","url":null,"abstract":"<p>Intelligent software-defined network (SDN) in unmanned aerial vehicles (UAVs) is an emerging research area to enhance UAV communication networks’ performance, security, and efficiency. By incorporating artificial intelligence (AI) and machine learning (ML) algorithms, SDN-based UAV networks enable real-time decision-making, proactive network management, and dynamic resource allocation. These advancements improve network performance, reduce latency, and enhance network efficiency. Moreover, AI-based security mechanisms can swiftly detect and mitigate potential threats, bolstering UAV networks’ overall security. Integrating intelligent SDN in UAVs holds tremendous potential for revolutionizing the UAV communication and networking field. This paper comprehensively discusses the solutions available for UAV-based intelligent SDNs. It provides an in-depth exploration of UAVs and SDNs and presents a comprehensive analysis of the evolution from traditional networking environments to UAV-based SDN environments. Our research primarily focuses on UAV communication’s performance, security, latency, and efficiency. It includes a taxonomy, comparison, and analysis of existing ML solutions specifically designed for UAV-based SDNs.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"64 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: XAITrafficIntell: Interpretable Cyber Threat Intelligence for Darknet Traffic Analysis
Authors: Dincy R. Arikkat, P. Vinod, K. A. Rafidha Rehiman, Rabeeba Abdul Rasheed, Mauro Conti
Pub Date: 2024-09-05 | DOI: 10.1007/s10922-024-09842-8
Network traffic analysis is essential for enhancing network security and management. Integrating Machine Learning and Explainable Artificial Intelligence (XAI) offers a promising avenue for exploring darknet traffic. XAI's integration into security domains paves the way to enriching our understanding of network traffic patterns and extracting valuable insights for security purposes. This investigation delves into the intricacies of darknet traffic classification by analyzing the datasets ISCXTor2016 and CIC-Darknet2020. By employing XAI techniques, we identify the most crucial features for accurate network traffic categorization. We conduct an in-depth analysis of darknet traffic models by utilizing explainable tools such as SHAP, LIME, Permutation Importance, and Counterfactual Explanations. Our experimental results highlight Protocol as the crucial factor in the ISCXTor2016 traffic classification, Source Port in the ISCXTor2016 application identification, and IdleMax in the CIC-Darknet2020 traffic classification. Additionally, our analysis encompassed the extraction of Cyber Threat Intelligence from the IP addresses within the network traffic. We explored the prevalent malware types and discerned specific targeted countries. Furthermore, a comprehensive exploration was conducted on the sophisticated attack techniques employed by adversaries. Our analysis identified T1071 as a frequently employed attack technique, in which adversaries utilize OSI application layer protocols to communicate, strategically evading detection and network filtering measures.
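Of the explainability tools listed, permutation importance is the simplest to reproduce. A sketch with scikit-learn's implementation on synthetic stand-in data follows; the feature names merely echo those highlighted above and do not reflect the actual datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
feature_names = ["Protocol", "Source Port", "IdleMax", "FlowDuration"]
X = rng.normal(size=(400, 4))
y = (X[:, 2] > 0).astype(int)        # make "IdleMax" informative on purpose

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature column and measure the drop in test accuracy.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:<13} {result.importances_mean[idx]:.3f}")
```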
{"title":"XAITrafficIntell: Interpretable Cyber Threat Intelligence for Darknet Traffic Analysis","authors":"Dincy R. Arikkat, P. Vinod, K. A. Rafidha Rehiman, Rabeeba Abdul Rasheed, Mauro Conti","doi":"10.1007/s10922-024-09842-8","DOIUrl":"https://doi.org/10.1007/s10922-024-09842-8","url":null,"abstract":"<p>Network traffic analysis is essential for enhancing network security and management. Integrating Machine Learning and Explainable Artificial Intelligence (XAI) offers a promising avenue for exploring darknet traffic. XAI’s integration into security domains paves the way to enriching our understanding of network traffic patterns and extracting valuable insights for security purposes. This investigation delves into the intricacies of darknet traffic classification by analyzing the datasets ISCXTor2016 and CIC-Darknet2020. By employing XAI techniques, we identify the most crucial features for accurate network traffic categorization. We conduct an in-depth analysis of darknet traffic models by utilizing explainable tools such as SHAP, LIME, Permutation Importance, and Counterfactual Explanations. Our experimental results highlight <i>Protocol</i> as the crucial factor in the ISXCTor2016 traffic classification, <i>Source Port</i> in the ISCXTor2016 application identification, and <i>IdleMax</i> in the CIC-Darknet2020 traffic classification. Additionally, our analysis encompassed the extraction of Cyber Threat Intelligence from the IP addresses within the network traffic. We explored the prevalent malware types and discerned specific targeted countries. Furthermore, a comprehensive exploration was conducted on the sophisticated attack techniques employed by adversaries. Our analysis identified T1071 as a frequently employed attack technique in which adversaries utilize OSI application layer protocols to communicate, strategically evading detection and network filtering measures.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"101 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Dynamic Microservice Provisioning in 5G Networks Using Edge–Cloud Continuum
Authors: Priyal Thakkar, Ashish Singh Patel, Gaurav Shukla, Arzad Alam Kherani, Brejesh Lall
Pub Date: 2024-09-03 | DOI: 10.1007/s10922-024-09859-z

With the advent of 5G and beyond, mobile network operators integrate edge computing capabilities alongside the cloud. This paradigm requires the application at the User Equipment (UE) to consist of multiple microservices that are appropriately placed at the edge or cloud, with dynamic relocation to enhance the overall Quality of Service (QoS) of the application. In this work, a Microservice-Scheduler is developed to dynamically relocate an application's microservices between the edge and the cloud server. The relocation decision is based on the CPU utilization of the edge server. The developed Microservice-Scheduler is integrated into an ETSI-compliant 5G testbed. The deployment is evaluated by analyzing the different scenarios obtained while monitoring the completion time of the microservices. It is observed that in the majority of scenarios, relocating microservices between the edge and the cloud server outperforms the edge-only and cloud-only approaches. In addition, the dynamic relocation mechanism aids in better utilization of resources while enhancing the overall QoS of the application's microservices.
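The Microservice-Scheduler's interface is not published in the abstract. A minimal sketch of the stated decision rule — relocate based on edge-server CPU utilization — follows; the hysteresis band is an assumption added here to avoid flapping, not the paper's stated policy.

```python
EDGE, CLOUD = "edge", "cloud"
HIGH, LOW = 0.80, 0.50   # assumed hysteresis band on edge CPU utilization

def decide(placement, edge_cpu):
    """Relocate a microservice off a saturated edge node and back again."""
    if placement == EDGE and edge_cpu > HIGH:
        return CLOUD              # edge saturated: offload to the cloud
    if placement == CLOUD and edge_cpu < LOW:
        return EDGE               # headroom restored: return for low latency
    return placement

placement = EDGE
for edge_cpu in [0.45, 0.85, 0.90, 0.40]:   # sampled utilization readings
    new = decide(placement, edge_cpu)
    if new != placement:
        print(f"cpu={edge_cpu:.2f}: relocate {placement} -> {new}")
        placement = new
```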
{"title":"Dynamic Microservice Provisioning in 5G Networks Using Edge–Cloud Continuum","authors":"Priyal Thakkar, Ashish Singh Patel, Gaurav Shukla, Arzad Alam Kherani, Brejesh Lall","doi":"10.1007/s10922-024-09859-z","DOIUrl":"https://doi.org/10.1007/s10922-024-09859-z","url":null,"abstract":"<p>With the advent of 5G and beyond, the mobile network operator is integrated with edge computing capabilities along with the cloud. This paradigm requires the application at UE to consist of multiple microservices that are appropriately placed at the edge/cloud with dynamic relocation to enhance the overall Quality of Service (QoS) of the application. In this work, a <i>Microservice-Scheduler</i> is developed to dynamically relocate an application’s microservices between the edge and cloud server. The relocation decision is based on the CPU utilization of the edge server. The developed <i>Microservice-Scheduler</i> is integrated in ETSI compliant 5G testbed. The deployment is evaluated by analyzing the different scenarios obtained while monitoring the completion time of the microservices. It is observed that in the majority of scenarios, relocating microservices between the edge and cloud server outperforms the edge-only and cloud-only approaches. In addition, the dynamic relocation mechanism aids in better utilization of the resources while enhancing the overall QoS of the application’s microservices.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"59 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Towards Intelligent Decision Making for Charging Scheduling in Rechargeable Wireless Sensor Networks
Authors: Abhinav Tomar, Raj Anwit, Piyush Nawnath Raut, Gaurav Singal
Pub Date: 2024-08-30 | DOI: 10.1007/s10922-024-09861-5

Wireless energy transfer (WET) technology has been proven to mitigate the energy-shortage challenge faced by the Internet of Things (IoT), which encompasses sensor networks. Exploiting a Mobile Charger (MC) to energize critical sensors provides a new dimension for maintaining continual network operations. Still, existing solutions are not robust, as they suffer from high charging delays at the sensor end due to inefficient scheduling. Moreover, their charging efficiency is degraded by fixed charging thresholds and ignored scheduling-feasibility conditions. Thus, intelligent scheduling for an MC is needed, based on decisions over multiple attributes that affect network performance; yet blending multiple attributes into a sound scheduling decision remains challenging and has been overlooked in previous research. Fortunately, Multi-Criteria Decision Making (MCDM) is well suited to considering numerous attributes and picking the most suitable sensor node to charge next. To this end, we propose solving the scheduling problem by combining two MCDM techniques, i.e., Combinative Distance Based Assessment (CODAS) and the Best Worst Method (BWM). The attributes used for the decision are the distance to the MC, the energy consumption rate, the remaining energy of nodes, and neighborhood criticality. The relative weights of all considered network attributes are calculated by BWM, followed by CODAS to select the most appropriate node to be charged next. To make the scheme more realistic and practical for time-critical applications, the dynamic threshold of nodes is calculated along with the formulation of scheduling-feasibility conditions. Simulation results demonstrate the efficiency of the proposed scheme over competing approaches on various performance parameters.
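A compact sketch of the decision step, with placeholder weights standing in for BWM output and a simplified CODAS-style ranking (combined Euclidean and Taxicab distances from the negative-ideal solution, in place of the full relative-assessment matrix). The benefit/cost orientation of each criterion and all figures are assumptions.

```python
import numpy as np

criteria = ["dist_to_MC", "energy_rate", "remaining_energy", "criticality"]
# Orientation assumed here: low distance and low remaining energy make a
# node more urgent (cost-type); high drain rate and criticality do too
# (benefit-type). Weights are placeholders for BWM output.
cost_type = np.array([True, False, True, False])
weights = np.array([0.15, 0.30, 0.35, 0.20])

X = np.array([                                   # rows: candidate nodes
    [40.0, 2.5, 0.20, 3],
    [15.0, 1.0, 0.60, 1],
    [25.0, 3.0, 0.10, 4],
])

# Linear normalization: benefit -> x/max, cost -> min/x (a common CODAS choice).
R = np.where(cost_type, X.min(axis=0) / X, X / X.max(axis=0))
V = R * weights                                  # weighted normalized matrix
nis = V.min(axis=0)                              # negative-ideal solution

E = np.sqrt(((V - nis) ** 2).sum(axis=1))        # Euclidean distance
T = np.abs(V - nis).sum(axis=1)                  # Taxicab distance
score = E + 0.02 * T   # simplified combined score, not the full CODAS
                       # relative-assessment procedure
print("charge next -> node", int(score.argmax()))
```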
{"title":"Towards Intelligent Decision Making for Charging Scheduling in Rechargeable Wireless Sensor Networks","authors":"Abhinav Tomar, Raj Anwit, Piyush Nawnath Raut, Gaurav Singal","doi":"10.1007/s10922-024-09861-5","DOIUrl":"https://doi.org/10.1007/s10922-024-09861-5","url":null,"abstract":"<p>Wireless energy transfer (WET) technology has been proven to mitigate the energy shortage challenge faced by the Internet of Things (IoT), which encompasses sensor networks. Exploiting a Mobile Charger (MC) to energize critical sensors provides a new dimension to maintain continual network operations. Still, existing solutions are not robust as they suffer from high charging delays at the sensor end due to inefficient scheduling. Moreover, charging efficiency is degraded in those schemes due to fixed charging thresholds and ignoring scheduling feasibility conditions. Thus, intelligent scheduling for an MC is needed based on decision-making through multiple network performance-affecting attributes, but blending multiple attributes together for wise scheduling decision-making remains challenging, which is overlooked in previous research. Fortunately, Multi-Criteria Decision Making (MCDM) is best-fit herein for considering numerous attributes and picking the most suitable sensor node to charge next. To this end, we have proposed solving the scheduling problem by combining two MCDM techniques, i.e., Combinative Distance Based Assessment (CODAS) and the Best Worst Method (BWM). The attributes used for the decision are the distance to MC, energy consumption rate, the remaining energy of nodes, and neighborhood criticality. The relative weights of all considered network attributes are calculated by BWM, which is followed by CODAS to select the most appropriate node to be charged next. To make the scheme more realistic and practical in time-critical applications, the dynamic threshold of nodes is calculated along with formulation scheduling feasibility conditions. Simulation results demonstrate the efficiency of the proposed scheme over the competing approaches on various performance parameters.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"23 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}