Pub Date : 2024-12-09DOI: 10.1109/LNET.2024.3512658
C Kiruthika;E. S. Gopi
Deep learning-based CSI compression has shown its efficacy for massive multiple-input multiple-output networks; federated learning (FL), in turn, outperforms conventional centralized learning by avoiding privacy leakage and the communication overhead of centralized training. However, realizing an FL-based CSI feedback network consumes considerable computational resources and time, and the continuous reporting of local models to the base station introduces overhead. To overcome these issues, in this letter we propose FBCNet. The proposed FBCNet combines the advantages of the novel fusion basis (FB) technique and a fully connected complex-valued neural network (CNet) trained with gradient (G) and non-gradient (NG) approaches. The experimental results show the advantages of both CNet and FB individually over existing techniques. FBCNet, the combination of FB and CNet, outperforms the existing federated averaging-based CNet (FedCNet) with improved reconstruction performance, lower complexity, reduced training time, and lower transmission overhead. For the distributed array-line of sight topology at a compression ratio (CR) of 20:1, the NMSE and cosine similarity are −8.2837 dB and 0.9262 for FedCNet-G; −3.5291 dB and 0.8452 for FedCNet-NG; and −26.8621 dB and 0.9653 for the proposed FB. At a high CR of 64:1, the NMSE and cosine similarity of the proposed FBCNet-G are −19.7521 dB and 0.9307, and those of FBCNet-NG are −24.0442 dB and 0.9539.
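For reference, the two reconstruction metrics quoted above (NMSE in dB and cosine similarity) can be computed as in the following sketch. This is an illustrative implementation assuming complex-valued CSI matrices `H` (ground truth) and `H_hat` (reconstruction); the per-row averaging convention for the cosine similarity is an assumption, not necessarily the letter's exact definition:

```python
import numpy as np

def nmse_db(H, H_hat):
    # Normalized mean-squared error between true and reconstructed CSI, in dB
    err = np.sum(np.abs(H - H_hat) ** 2)
    return 10.0 * np.log10(err / np.sum(np.abs(H) ** 2))

def cosine_similarity(H, H_hat):
    # Magnitude of the normalized inner product, averaged over rows
    num = np.abs(np.sum(np.conj(H_hat) * H, axis=1))
    den = np.linalg.norm(H, axis=1) * np.linalg.norm(H_hat, axis=1)
    return float(np.mean(num / den))
```

A perfect reconstruction gives cosine similarity 1, while halving every entry (`H_hat = 0.5 * H`) yields an NMSE of 10·log10(0.25) ≈ −6.02 dB.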
{"title":"FBCNet: Fusion Basis Complex-Valued Neural Network for CSI Compression in Massive MIMO Networks","authors":"C Kiruthika;E. S. Gopi","doi":"10.1109/LNET.2024.3512658","DOIUrl":"https://doi.org/10.1109/LNET.2024.3512658","url":null,"abstract":"Deep learning-based CSI compression has shown its efficacy for massive multiple-input multiple-output networks, and on the other hand, federated learning (FL) excels the conventional centralized learning by avoiding privacy leakage issues and training communication overhead. The realization of an FL-based CSI feedback network consumes more computational resources and time, and the continuous reporting of local models to the base station results in overhead. To overcome these issues, in this letter, we propose a FBCNet. The proposed FBCNet combines the advantages of the novel fusion basis (FB) technique and the fully connected complex-valued neural network (CNet) based on gradient (G) and non-gradient (NG) approaches. The experimental results show the advantages of both CNet and FB individually over the existing techniques. FBCNet, the combination of both FB and CNet, outperforms the existing federated averaging-based CNet (FedCNet) with improvement in reconstruction performance, less complexity, reduced training time, and low transmission overhead. For the distributed array-line of sight topology at the compression ratio (CR) of 20:1, it is noted that the NMSE and the cosine similarity of FedCNet-G are −8.2837 dB, 0.9262; FedCNet-NG are −3.5291 dB, 0.8452; proposed FB are −26.8621, 0.9653. 
Also the NMSE and the cosine similarity of the proposed FBCNet-G are −19.7521, 0.9307; FBCNet-NG are −24.0442, 0.9539 at a high CR of 64:1.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"262-266"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-09DOI: 10.1109/LNET.2024.3514357
Hajar Moudoud;Zakaria Abou El Houda;Bouziane Brik
The evolution of Open Radio Access Networks (O-RAN) is crucial for the deployment and operation of 6G networks, providing flexibility and interoperability through its disaggregated and open architecture. However, this openness introduces new security issues. To address these challenges, we propose a novel Zero-Trust architecture tailored for ORAN (ZTORAN). ZTORAN includes two main modules: (1) a blockchain-based decentralized trust management system for secure verification, authentication, and dynamic access control of xApps; and (2) a threat detection module that uses Federated Multi-Agent Reinforcement Learning (FMARL) to continuously monitor network activities and detect anomalies within the ORAN ecosystem. Through comprehensive simulations and evaluations, we demonstrate the effectiveness of ZTORAN in providing a resilient and secure framework for next-generation wireless networks.
{"title":"Zero Trust Security Architecture for 6G Open Radio Access Networks (ORAN)","authors":"Hajar Moudoud;Zakaria Abou El Houda;Bouziane Brik","doi":"10.1109/LNET.2024.3514357","DOIUrl":"https://doi.org/10.1109/LNET.2024.3514357","url":null,"abstract":"The evolution of Open Radio Access Networks (O-RAN) is crucial for the deployment and operation of 6G networks, providing flexibility and interoperability through its disaggregated and open architecture. However, this openness introduces new security issues. To address these challenges, we propose a novel Zero-Trust architecture tailored for ORAN (ZTORAN). ZTORAN includes two main modules: (1) A blockchain-based decentralized trust management system for secure verification, authentication, and dynamic access control of xApps; and (2) A threat detection module that uses Federated Multi-Agent Reinforcement Learning (FMARL) to monitor network activities continuously and detects anomalies within the ORAN ecosystem. Through comprehensive simulations and evaluations, we demonstrate the effectiveness of ZTORAN in providing a resilient and secure framework for next-generation wireless networks.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"272-275"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-09DOI: 10.1109/LNET.2024.3512659
Jianwen Xu;Kaoru Ota;Mianxiong Dong
As a fundamental component of 6G, Device-to-Device (D2D) communication facilitates direct connections between devices without base stations. To support advanced AI applications in ubiquitous scenarios, in this letter we propose an AI-centric D2D communication infrastructure built on mobile devices, addressing current challenges in bandwidth and transmission speed. This approach aims to leverage 6G’s potential to create more efficient, reliable, and intelligent wireless communication systems, bridging the gap between AI and next-generation D2D communication. The results from a real-world case study and simulations show that our design can save time and improve efficiency in D2D transmission and on-device AI processing.
{"title":"AI-Centric D2D in 6G Networks","authors":"Jianwen Xu;Kaoru Ota;Mianxiong Dong","doi":"10.1109/LNET.2024.3512659","DOIUrl":"https://doi.org/10.1109/LNET.2024.3512659","url":null,"abstract":"As a fundamental component of 6G, Device-to-Device (D2D) communication facilitates direct connections between devices without base stations. In order to support advanced AI applications in ubiquitous scenarios, in this letter, we propose an AI-centric D2D communication infrastructure upon mobile devices, addressing current challenges in bandwidth and transmission speed. This approach aims to leverage 6G’s potential to create more efficient, reliable, and intelligent wireless communication systems, bridging the gap between AI and next-generation D2D communication. The results from real-world case study and simulation show that our design can save time and improve efficiency in D2D transmission and on-device AI processing.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"257-261"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The adoption of the Industrial Internet of Things (IIoT) in industries necessitates advancements in energy efficiency and latency reduction, especially for resource-constrained devices. Services require specific Quality of Service (QoS) levels to function properly, and meeting a threshold QoS can be sufficient for smooth connectivity, reducing the need to maximize perceived QoS due to energy concerns. This is modeled as a satisfactory game, aiming to find minimal power allocation to meet target demands. Due to environmental uncertainties, achieving a Robust Satisfactory Equilibrium (RSE) can be challenging, leading to less satisfaction. We propose a fully distributed, environment-aware power control scheme to enhance satisfaction in dynamic environments. The proposed Robust Banach-Picard (RBP) learning scheme combines deep learning and federated learning to overcome channel and interference impacts and accelerate convergence. Extensive simulations evaluate the scheme under varying channel states and QoS demands, with discussions on convergence speed, energy efficiency, scalability, complexity, and violation rate.
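The core idea of driving each device's power to the minimum that satisfies its QoS target can be illustrated with a classic fixed-point (Banach-Picard) power-control iteration of the Foschini-Miljanic type. This is a simplified sketch of the underlying principle, not the letter's RBP scheme (which additionally uses deep and federated learning); all names and parameters here are illustrative:

```python
import numpy as np

def picard_power_control(G, gamma, noise, p0, iters=200):
    """Fixed-point (Banach-Picard) iteration toward the satisfactory point:
    each device rescales its power so its SINR just meets its target gamma."""
    n = len(gamma)
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        for i in range(n):
            interference = noise[i] + sum(G[i][j] * p[j] for j in range(n) if j != i)
            sinr = G[i][i] * p[i] / interference
            p[i] = p[i] * gamma[i] / sinr  # Picard update: p <- T(p)
    return p
```

With symmetric cross-gains of 0.1, unit targets, and noise 0.1, both devices settle at p = 1/9, the least power at which each SINR exactly meets its target.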
{"title":"Serve Yourself! Federated Power Control for AI-Native 5G and Beyond","authors":"Saad Abouzahir;Essaid Sabir;Halima Elbiaze;Mohamed Sadik","doi":"10.1109/LNET.2024.3509792","DOIUrl":"https://doi.org/10.1109/LNET.2024.3509792","url":null,"abstract":"The adoption of the Industrial Internet of Things (IIoT) in industries necessitates advancements in energy efficiency and latency reduction, especially for resource-constrained devices. Services require specific Quality of Service (QoS) levels to function properly, and meeting a threshold QoS can be sufficient for smooth connectivity, reducing the need to maximize perceived QoS due to energy concerns. This is modeled as a satisfactory game, aiming to find minimal power allocation to meet target demands. Due to environmental uncertainties, achieving a Robust Satisfactory Equilibrium (RSE) can be challenging, leading to less satisfaction. We propose a fully distributed, environment-aware power control scheme to enhance satisfaction in dynamic environments. The proposed Robust Banach-Picard (RBP) learning scheme combines deep learning and federated learning to overcome channel and interference impacts and accelerate convergence. Extensive simulations evaluate the scheme under varying channel states and QoS demands, with discussions on convergence speed, energy efficiency, scalability, complexity, and violation rate.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"252-256"},"PeriodicalIF":0.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-27DOI: 10.1109/LNET.2024.3507792
Aviroop Ghosh;Saleh Yousefi;Thomas Kunz
The IEEE 802.1Qbv (802.1Qbv) standard is designed for traffic requiring deterministic and bounded latencies through strict periodic time synchronization, as specified by the IEEE 802.1AS standard. However, internal clock drift in devices causes timing misalignment, introducing further challenges to 802.1Qbv scheduling. Existing solutions, using either complex optimization approaches or non-trivial scheduling heuristics, address this by scheduling frame transmissions only once they are guaranteed to have been fully received, even in the presence of clock drifts. However, this approach introduces additional delays that can impact deadline requirements. This letter analytically derives tight end-to-end latency bounds, allowing us to determine whether stream deadlines for a given network will be violated without the need to solve any scheduling algorithm. It also proposes an approach that yields tighter bounds based on information collected from the synchronization process. The analytical results are compared with simulation results, confirming their validity.
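To make the effect of clock drift on scheduling concrete, consider a toy per-hop accounting in which a bridge may forward a frame only after it is guaranteed fully received under the worst-case clock offset. This is a deliberately simplified model with made-up parameter names, not the bound derived in the letter:

```python
def earliest_safe_forward(tx_start_us, frame_bits, link_mbps, prop_us, sync_precision_us):
    # A frame sent at tx_start_us is guaranteed received only after its
    # transmission time, propagation delay, and a guard equal to the
    # worst-case synchronization error between the two bridge clocks.
    transmission_us = frame_bits / link_mbps  # bits / (Mbit/s) == microseconds
    return tx_start_us + transmission_us + prop_us + sync_precision_us

def e2e_latency_bound(hops, frame_bits, link_mbps, prop_us, sync_precision_us):
    # If every bridge waits out the guard before forwarding, the guard is
    # paid once per hop; it is this accumulated delay that tighter analytical
    # bounds can shave.
    per_hop = frame_bits / link_mbps + prop_us + sync_precision_us
    return hops * per_hop
```

For a 1500-byte frame (12000 bits) on gigabit links with 1 µs propagation and a 1 µs sync guard, each hop costs 14 µs, so a 3-hop path is bounded by 42 µs in this model.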
{"title":"Latency Bounds for TSN Scheduling in the Presence of Clock Synchronization","authors":"Aviroop Ghosh;Saleh Yousefi;Thomas Kunz","doi":"10.1109/LNET.2024.3507792","DOIUrl":"https://doi.org/10.1109/LNET.2024.3507792","url":null,"abstract":"The IEEE 802.1Qbv (80.21Qbv) standard is designed for traffic requiring deterministic and bounded latencies through strict periodic time synchronization, as specified by IEEE 802.1AS standard. However, internal clock drift in devices causes timing misalignment, introducing further challenges to 802.1Qbv scheduling. Existing solutions, using either complex optimization approaches or non-trivial scheduling heuristics, address this by scheduling frame transmissions only once they are guaranteed to have been fully received, even in the presence of clock drifts. However, this approach introduces additional delays that can impact deadline requirements. This letter analytically derives tight end-to-end latency bounds, allowing us to determine if stream deadlines for a given network will be violated without the need to solve for any scheduling algorithms. It also proposes an approach that results in tighter bounds based on information collected from the synchronization process. The analytical results are compared with simulation results, confirming their validity.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 1","pages":"41-45"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10770262","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143645193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-20DOI: 10.1109/LNET.2024.3503289
Navid Keshtiarast;Oliver Renaldi;Marina Petrova
In this letter, we propose a novel Multi-Agent Deep Reinforcement Learning (MADRL) framework for MAC protocol design. Unlike centralized approaches, which rely on a single entity for decision-making, MADRL empowers individual network nodes to autonomously learn and optimize their MAC from local observations. Our framework is the first of its kind to enable distributed multi-agent learning within the ns-3 environment, and it facilitates the design and synthesis of adaptive MAC protocols tailored to specific environmental conditions. We demonstrate the effectiveness of the MADRL framework through extensive simulations, showcasing superior performance compared to legacy protocols across diverse scenarios.
{"title":"Wireless MAC Protocol Synthesis and Optimization With Multi-Agent Distributed Reinforcement Learning","authors":"Navid Keshtiarast;Oliver Renaldi;Marina Petrova","doi":"10.1109/LNET.2024.3503289","DOIUrl":"https://doi.org/10.1109/LNET.2024.3503289","url":null,"abstract":"In this letter, we propose a novel Multi-Agent Deep Reinforcement Learning (MADRL) framework for MAC protocol design. Unlike centralized approaches, which rely on a single entity for decision-making, MADRL empowers individual network nodes to autonomously learn and optimize their MAC from local observations. Our framework is the first of a kind that enables distributed multi-agent learning within the ns-3 environment, and facilitates the design and synthesis of adaptive MAC protocols tailored to specific environmental conditions. We demonstrate the effectiveness of the MADRL framework through extensive simulations, showcasing superior performance compared to legacy protocols across diverse scenarios.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"242-246"},"PeriodicalIF":0.0,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-20DOI: 10.1109/LNET.2024.3503292
Ilias Chatzistefanidis;Andrea Leone;Navid Nikaein
This letter presents Maestro, a collaborative framework leveraging Large Language Models (LLMs) for automation of shared networks. Maestro enables conflict resolution and collaboration among stakeholders in a shared intent-based 6G network by abstracting diverse network infrastructures into declarative intents across business, service, and network planes. LLM-based agents negotiate resources, mediated by Maestro to achieve consensus that aligns multi-party business and network goals. Evaluation on a 5G Open RAN testbed reveals that integrating LLMs with optimization tools and contextual units builds autonomous agents with comparable accuracy to the state-of-the-art algorithms while being flexible to spatio-temporal business and network variability.
{"title":"Maestro: LLM-Driven Collaborative Automation of Intent-Based 6G Networks","authors":"Ilias Chatzistefanidis;Andrea Leone;Navid Nikaein","doi":"10.1109/LNET.2024.3503292","DOIUrl":"https://doi.org/10.1109/LNET.2024.3503292","url":null,"abstract":"This letter presents M<sc>aestro</small>, a collaborative framework leveraging Large Language Models (LLMs) for automation of shared networks. M<sc>aestro</small> enables conflict resolution and collaboration among stakeholders in a shared intent-based 6G network by abstracting diverse network infrastructures into declarative intents across business, service, and network planes. LLM-based agents negotiate resources, mediated by M<sc>aestro</small> to achieve consensus that aligns multi-party business and network goals. Evaluation on a 5G Open RAN testbed reveals that integrating LLMs with optimization tools and contextual units builds autonomous agents with comparable accuracy to the state-of-the-art algorithms while being flexible to spatio-temporal business and network variability.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"227-231"},"PeriodicalIF":0.0,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758700","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-14DOI: 10.1109/LNET.2024.3499360
Sofia Barkatsa;Maria Diamanti;Panagiotis Charatsaris;Eirini Eleni Tsiropoulou;Symeon Papavassiliou
Federated Learning (FL), an emerging distributed Artificial Intelligence (AI) technique, is susceptible to jamming attacks during the wireless transmission of trained models. In this letter, we introduce a jamming attack mitigation mechanism for the uplink of wireless FL networks using the power-domain Non-Orthogonal Multiple Access (NOMA) technique. The problem of transmission power allocation for all clients (legitimate and malicious) is formulated and solved distributively as a Bayesian game with incomplete information. The clients aim to successfully transmit their model parameters, minimizing transmission time and consumed power, while having probabilistic knowledge about the malicious behavior of the other clients in the game.
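The flavor of the clients' decision problem can be sketched as a best response under a probabilistic belief about the jammer: a client weighs the chance of meeting its SINR target against the energy cost of transmitting. This is an illustrative toy utility, not the letter's game formulation, and all parameter names are assumptions:

```python
def expected_utility(p_tx, p_jam_levels, jam_probs, gain, jam_gain,
                     noise, sinr_target, energy_cost):
    # Bayesian expected payoff of a legitimate client: unit reward whenever
    # the SINR target is met under a possible jammer power level, weighted by
    # the client's belief, minus a linear energy cost.
    util = 0.0
    for p_jam, prob in zip(p_jam_levels, jam_probs):
        sinr = gain * p_tx / (noise + jam_gain * p_jam)
        util += prob * (1.0 if sinr >= sinr_target else 0.0)
    return util - energy_cost * p_tx

def best_response(candidates, **kw):
    # Best response over a discrete set of candidate transmission powers
    return max(candidates, key=lambda p: expected_utility(p, **kw))
```

With a jammer believed active half the time, the best response jumps to the power that survives the jammed state whenever the extra energy cost is below the gained success probability.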
{"title":"Jamming Attack Mitigation in Wireless Federated Learning Networks Using Bayesian Games","authors":"Sofia Barkatsa;Maria Diamanti;Panagiotis Charatsaris;Eirini Eleni Tsiropoulou;Symeon Papavassiliou","doi":"10.1109/LNET.2024.3499360","DOIUrl":"https://doi.org/10.1109/LNET.2024.3499360","url":null,"abstract":"Federated Learning (FL), an emerging distributed Artificial Intelligence (AI) technique, is susceptible to jamming attacks during the wireless transmission of trained models. In this letter, we introduce a jamming attack mitigation mechanism for the uplink of wireless FL networks using the power-domain Non-Orthogonal Multiple Access (NOMA) technique. The problem of transmission power allocation for all clients (legitimate and malicious) is formulated and solved distributively as a Bayesian game with incomplete information. The clients aim to successfully transmit their model parameters, minimizing transmission time and consumed power, while having probabilistic knowledge about the malicious behavior of the other clients in the game.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"247-251"},"PeriodicalIF":0.0,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-12DOI: 10.1109/LNET.2024.3496842
Fabio Franchi;Fabio Graziosi;Francesco Smarra;Eleonora Di Fina
The exponential growth and complexity of geospatial data necessitate innovative management strategies to address the increasing computational demands of Geographical Information System (GIS) services. GIS is closely tied to its social context, and as its acceptance as a decision-support tool grows, so does the need to ensure high Quality of Service (QoS). While cloud computing offers new capabilities for GIS, the physical distance between cloud infrastructure and end-users often leads to high network latency, compromising QoS. Multi-Access Edge Computing (MEC) emerges as a promising solution to reduce latency and enhance system performance, particularly for real-time and multi-device applications. However, integrating GIS services into edge-cloud architectures presents significant challenges in terms of task scheduling and service placement. This letter proposes a queueing theory-based model designed to optimize the performance of GIS workloads within edge-cloud architectures. The model, based on a closed Jackson network, is designed to assist in the efficient design and deployment of edge systems that meet QoS and Service Level Agreement (SLA) requirements. The proposed framework is validated through a real-world case study, with performance metrics such as throughput and response time evaluated to ensure optimal system sizing and performance. The results underscore the potential of this approach for designing scalable and efficient edge-cloud architectures tailored to geospatial services.
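Closed product-form networks such as the letter's closed Jackson model can be evaluated exactly with Mean Value Analysis (MVA), which yields throughput and response times without enumerating states. A minimal sketch for single-server FCFS stations follows; the letter's precise station types and parameters are not reproduced here:

```python
def mva(visit_ratios, service_times, n_customers):
    """Exact MVA for a closed product-form queueing network of single-server
    FCFS stations: returns system throughput and per-station response times."""
    k = len(service_times)
    q = [0.0] * k  # mean queue lengths, built up from an empty network
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving customer sees the queue lengths of the
        # same network with n-1 customers circulating.
        r = [service_times[i] * (1.0 + q[i]) for i in range(k)]
        x = n / sum(visit_ratios[i] * r[i] for i in range(k))  # throughput
        q = [x * visit_ratios[i] * r[i] for i in range(k)]
    return x, r
```

For two identical stations with unit service times and two circulating customers, MVA gives a throughput of 2/3 and a per-station response time of 1.5; such numbers directly inform the QoS/SLA sizing questions the letter targets.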
{"title":"Queue Modeling for Geospatial Service on Edge-Cloud Architecture","authors":"Fabio Franchi;Fabio Graziosi;Francesco Smarra;Eleonora Di Fina","doi":"10.1109/LNET.2024.3496842","DOIUrl":"https://doi.org/10.1109/LNET.2024.3496842","url":null,"abstract":"The exponential growth and complexity of geospatial data necessitate innovative management strategies to address the increasing computational demands of Geographical Information System (GIS) services. GIS is connected to the social context, and its use as a decision-support tool is gaining broader acceptance with the need to ensure high Quality of Service (QoS). While cloud computing offers new capabilities for GIS, the physical distance between cloud infrastructure and end-users often leads to high network latency, compromising QoS. Multi-Access Edge Computing (MEC) emerges as a promising solution to limit latency and enhance system performance, particularly for real-time and multi-device applications. However, integrating GIS services into edge-cloud architectures presents significant challenges in terms of task scheduling and service placement. This letter proposes a queueing theory-based model designed to optimize the performance of GIS workloads within edge-cloud architectures. The model, based on a closed Jackson network, is designed to assist in the efficient design and deployment of edge systems that meet QoS and Service Level Agreement (SLA) requirements. The proposed framework is validated through a real-world case study, with performance metrics such as throughput and response time evaluated to ensure optimal system sizing and performance. 
The results underscore the potential of this approach for designing scalable and efficient edge-cloud architectures tailored to geospatial services.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 1","pages":"36-40"},"PeriodicalIF":0.0,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143645196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-07DOI: 10.1109/LNET.2024.3493942
Getahun Metaferia;Frezewd Lemma
Software-defined Networking (SDN) is an innovative network architecture tailored to address the modern demands of network virtualization and cloud computing, which require features such as programmability, flexibility, agility, and openness to foster innovation. However, this architecture also brings forth new security challenges, particularly due to the separation of the data plane from the control plane. Our investigation centers on a specific vulnerability termed link fabrication, which can lead to topology poisoning. A compromised network topology can cause substantial disruptions across the entire network infrastructure. Through a systematic survey, we identified that significant research efforts have been directed towards mitigating link fabrication attacks. We classified the existing studies into six categories of vulnerabilities: host-based, port amnesia, invisible assailant attack, topology freezing, switch-based link fabrication, and link latency. Furthermore, our survey highlights several open challenges in areas such as programmable data planes, dedicated attack trees and threat models, active defense and mitigation strategies, and controller awareness and machine learning.
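The distance-bounding idea the authors advocate can be illustrated in its simplest form: an RTT check against a physical upper bound. A relayed (fabricated) link necessarily adds detour latency, so an honest one-hop link must satisfy a bound derived from cable length and switch processing. The thresholds below are illustrative assumptions, not values from the letter:

```python
def link_is_suspect(rtt_us, propagation_speed_m_per_us=200.0,
                    max_link_length_m=100.0, processing_us=5.0):
    # Distance bounding: a genuine directly-cabled link cannot exhibit an RTT
    # above twice the propagation time over the longest plausible cable plus
    # a processing allowance; a relayed link fails this bound.
    bound_us = 2.0 * max_link_length_m / propagation_speed_m_per_us + processing_us
    return rtt_us > bound_us
```

With these defaults the bound is 6 µs: a 3 µs probe passes, while a 20 µs probe (plausible for traffic relayed through compromised hosts) is flagged as a fabricated link.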
{"title":"Relay Type Link Fabrication Attack in SDN: A Review","authors":"Getahun Metaferia;Frezewd Lemma","doi":"10.1109/LNET.2024.3493942","DOIUrl":"https://doi.org/10.1109/LNET.2024.3493942","url":null,"abstract":"Software-defined Networking (SDN) is an innovative network architecture tailored to address the modern demands of network virtualization and cloud computing, which require features such as programmability, flexibility, agility, and openness to foster innovation. However, this architecture also brings forth new security challenges, particularly due to the separation of the data plane from the control plane. Our investigation centers on a specific vulnerability termed link fabrication, which can lead to topology poisoning. A compromised network topology can cause substantial disruptions across the entire network infrastructure. Through a systematic survey, we identified that significant research efforts have been directed towards mitigating link fabrication attacks. We classified the existing studies into six categories of vulnerabilities: Host-based, port amnesia, invisible assailant attack, topology freezing, switch-based link fabrication, and link latency. Furthermore, our survey highlights several open challenges in areas such as Programmable dataplane, dedicated attack trees and threat models, active defense and mitigation strategies, as well as controller awareness and machine learning. 
To address the vulnerabilities identified, we propose the implementation of a distance-bounding protocol concept at the control plane as a potential solution.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 1","pages":"51-55"},"PeriodicalIF":0.0,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143645187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}