Pub Date: 2025-12-11 | DOI: 10.1109/OJCOMS.2025.3642974
Title: A System for Automatic, Quantitative and Visual Labeling for Failure Management in Cellular Network Data Clusters
Javier Villegas;Sergio Fortes;Juan Cantizani-Estepa;Javier Albert-Smet;Raúl Martín-Cuerdo;Raquel Barco
Cellular network operation relies strongly on the operator’s capacity to manage failures and optimize the networks for their efficient and proper functioning. To this end, Machine Learning (ML) and Artificial Intelligence (AI) models are deployed to detect and correct problems and inefficiencies in the networks. However, as network operation carries on, new technologies are continuously deployed which, alongside changes in the networks’ environment, introduce new unexpected issues and variations in the metrics, reducing the performance of the models. Thus, the deployed models must be constantly updated, making it necessary for operators to optimize their development process. Taking this into consideration, this work proposes a system for labeling clusters with issues, based on graphs and requiring no prior information. Moreover, as the generated labels are quantitative, they can be used to identify the same issues across several datasets, allowing the application of transfer learning methods to carry knowledge from older datasets to newer ones. The system output has been evaluated using data from two different real-world cellular networks, assessing the capacity of the system to generate accurate and descriptive labels, as well as the labels’ applicability to transfer learning by identifying issues across different datasets.
{"title":"A System for Automatic, Quantitative and Visual Labeling for Failure Management in Cellular Network Data Clusters","authors":"Javier Villegas;Sergio Fortes;Juan Cantizani-Estepa;Javier Albert-Smet;Raúl Martín-Cuerdo;Raquel Barco","doi":"10.1109/OJCOMS.2025.3642974","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3642974","url":null,"abstract":"Cellular network operation strongly relies in the operator’s capacity to manage failures and optimize the networks for their efficient and proper functioning. For this, Machine Learning (ML) and Artificial Intelligence (AI) models are deployed to detect and correct problems and inefficiencies in the networks. However, as network operation carries on, new technologies are continuously deployed which, alongside the changes in the networks’ environment introduce new unexpected issues and variations in the metrics, reducing the performance of the models. Thus, the used models require being constantly updated, making necessary for operators to optimize their development process. Taking this into consideration, this work proposes a system for labeling clusters with issues based on graphs without prior information. Moreover, as the generated labels are quantitative, they can be used to identify the same issues across several datasets, allowing the application of transfer learning methods to carry knowledge from older datasets to newer ones. The system output has been evaluated using data from two different real-world cellular networks, assessing the capacity of the system to generate accurate and descriptive labels, as well as the labels applicability for transfer learning applications by identifying issues across different datasets.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10351-10364"},"PeriodicalIF":6.3,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11297768","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-11 | DOI: 10.1109/OJCOMS.2025.3643607
Title: PQAKA: Post Quantum Authentication and Key Agreement Protocol for Intelligent Internet of Vehicles Over 5G
Gunasekaran Raja;Sudhakar Theerthagiri;Kathiroli Raja;Janani Alagar Ramanujam;Tejesshree Sadhasivam;Priyadarshni Vasudevan;Paventhan Arumugam;Sunde Ali Khowaja;Kapal Dev
As Vehicular Networks evolve toward the Intelligent Internet of Vehicles (IoV), ensuring quantum-resilient security has become essential. The current 5G Authentication and Key Agreement (AKA) protocol, although well-established, relies on classical cryptographic primitives such as AES, Message Authentication Codes (MACs), and Key Derivation Functions (KDFs), which are increasingly vulnerable to advances in quantum computing. To mitigate this, we propose PQAKA, a Post-Quantum Authentication and Key Agreement protocol tailored for 5G-based Vehicle-to-Everything (V2X) communication. By integrating the National Institute of Standards and Technology (NIST)-standardized Module-Lattice Key Encapsulation Mechanism (ML-KEM), the proposed scheme achieves mutual authentication and forward secrecy against classical and quantum adversaries while maintaining compatibility with existing Access and Mobility Management Function (AMF)/Authentication Server Function (AUSF)/Unified Data Management (UDM) entities. Mapped to the 5G control plane, the protocol ensures that vehicles are authenticated before accessing Internet-based services such as navigation, traffic and weather updates, and over-the-air software delivery. Formal verification through ProVerif validates the correctness and security guarantees of the PQAKA protocol, while the informal analysis substantiates its resilience against a spectrum of adversarial vectors. Under hostile threat conditions, PQAKA achieves an authentication success rate of 72%, indicating its potential in quantum-resilient vehicular communication architectures.
{"title":"PQAKA: Post Quantum Authentication and Key Agreement Protocol for Intelligent Internet of Vehicles Over 5G","authors":"Gunasekaran Raja;Sudhakar Theerthagiri;Kathiroli Raja;Janani Alagar Ramanujam;Tejesshree Sadhasivam;Priyadarshni Vasudevan;Paventhan Arumugam;Sunde Ali Khowaja;Kapal Dev","doi":"10.1109/OJCOMS.2025.3643607","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3643607","url":null,"abstract":"As Vehicular Networks evolve toward the Intelligent Internet of Vehicles (IoV), ensuring quantum-resilient security has become essential. The current 5G Authentication and Key Agreement (AKA) protocol, although well-established, relies on classical cryptographic primitives such as AES, Message Authentication Codes (MACs), and Key Derivation Functions (KDFs), which are increasingly vulnerable to advances in quantum computing. To mitigate this, we propose PQAKA, a Post-Quantum Authentication and Key Agreement protocol tailored for 5G-based Vehicle-to-Everything (V2X) communication. By integrating the National Institute of Standards and Technology (NIST)-standardized Module-Lattice Key Encapsulation Mechanism (ML-KEM), the proposed scheme achieves mutual authentication and forward secrecy against classical and quantum adversaries while maintaining compatibility with existing Access and Mobility Management Function (AMF)/Authentication Server Function (AUSF)/Unified Data Management (UDM) entities. Mapped to the 5G control plane, the protocol ensures that vehicles are authenticated before accessing Internet-based services such as navigation, traffic & weather updates, and over-the-air software delivery. Formal verification through ProVerif validates the correctness and security guarantees of the PQAKA protocol, while the informal analysis substantiates its resilience against a spectrum of adversarial vectors. Under hostile threat conditions, PQAKA achieves an authentication success rate of 72%, indicating its potential in quantum-resilient vehicular communication architectures.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"7 ","pages":"196-210"},"PeriodicalIF":6.3,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11298475","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-10 | DOI: 10.1109/OJCOMS.2025.3642642
Title: NetIntent: Leveraging Large Language Models for End-to-End Intent-Based SDN Automation
Md. Kamrul Hossain;Walid Aljoby
Intent-Based Networking (IBN) often leverages the programmability of Software-Defined Networking (SDN) to simplify network management. However, significant challenges remain in automating the entire pipeline, from user-specified high-level intents to device-specific low-level configurations. Existing solutions often rely on rigid, rule-based translators and fixed APIs, limiting extensibility and adaptability. By contrast, recent advances in large language models (LLMs) offer a promising pathway that leverages natural language understanding and flexible reasoning. However, it is unclear to what extent LLMs can perform IBN tasks. To address this, we introduce $\boldsymbol{IBNBench}$, a first-of-its-kind benchmarking suite comprising eight datasets: Intent2Flow-ODL, Intent2Flow-ONOS, Intent2Flow-Ryu, Intent2Flow-Floodlight, FlowConflict-ODL, FlowConflict-ONOS, FlowConflict-Ryu, and FlowConflict-Floodlight. These datasets are specifically designed for evaluating LLM performance in intent translation and conflict detection tasks within industry-grade and research-focused SDN controllers such as ODL, ONOS, Ryu, and Floodlight. Our results provide the first comprehensive comparison of 33 open-source LLMs on IBNBench and related datasets, revealing a wide range of performance outcomes. However, while these results demonstrate the potential of LLMs for isolated IBN tasks, integrating LLMs into a fully autonomous IBN pipeline remains unexplored. Thus, our second contribution is $\boldsymbol{NetIntent}$, a unified and adaptable framework that leverages LLMs to automate the full IBN lifecycle, including translation, activation, and assurance within SDN systems. NetIntent orchestrates both LLM and non-LLM agents, supporting dynamic re-prompting and contextual feedback to robustly execute user-defined intents with minimal human intervention. Our implementation of NetIntent across ODL, ONOS, Ryu, and Floodlight achieves a consistent and adaptive end-to-end IBN realization.
{"title":"NetIntent: Leveraging Large Language Models for End-to-End Intent-Based SDN Automation","authors":"Md. Kamrul Hossain;Walid Aljoby","doi":"10.1109/OJCOMS.2025.3642642","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3642642","url":null,"abstract":"Intent-Based Networking (IBN) often leverages the programmability of Software-Defined Networking (SDN) to simplify network management. However, significant challenges remain in automating the entire pipeline, from user-specified high-level intents to device-specific low-level configurations. Existing solutions often rely on rigid, rule-based translators and fixed APIs, limiting extensibility and adaptability. By contrast, recent advances in large language models (LLMs) offer a promising pathway that leverages natural language understanding and flexible reasoning. However, it is unclear to what extent LLMs can perform IBN tasks. To address this, we introduce <inline-formula> <tex-math>$boldsymbol {IBNBench}$ </tex-math></inline-formula>, a first-of-its-kind benchmarking suite comprising eight datasets: Intent2Flow-ODL, Intent2Flow-ONOS, Intent2Flow-Ryu, Intent2Flow-Floodlight, FlowConflict-ODL, FlowConflict-ONOS, FlowConflict-Ryu, and FlowConflict-Floodlight. These datasets are specifically designed for evaluating LLMs performance in intent translation and conflict detection tasks within the industry-grade and research-focused SDN controllers such as ODL, ONOS, Ryu, and Floodlight. Our results provide the first comprehensive comparison of 33 open-source LLMs on IBNBench and related datasets, revealing a wide range of performance outcomes. However, while these results demonstrate the potential of LLMs for isolated IBN tasks, integrating LLMs into a fully autonomous IBN pipeline remains unexplored. Thus, our second contribution is <inline-formula> <tex-math>$boldsymbol {NetIntent}$ </tex-math></inline-formula>, a unified and adaptable framework that leverages LLMs to automate the full IBN lifecycle, including translation, activation, and assurance within SDN systems. NetIntent orchestrates both LLM and non-LLM agents, supporting dynamic re-prompting and contextual feedback to robustly execute user-defined intents with minimal human intervention. Our implementation of NetIntent across ODL, ONOS, Ryu, and Floodlight achieves a consistent and adaptive end-to-end IBN realization.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10512-10541"},"PeriodicalIF":6.3,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11293797","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-09 | DOI: 10.1109/OJCOMS.2025.3641933
Title: MHNet: A Multi-Head GNN Architecture for Efficient Network Modeling
Sandushan Ranaweera;Ren Ping Liu;Ying He;Beeshanga Jayawickrama
In recent years, Graph Neural Networks (GNNs) have emerged as a powerful tool for network modeling due to their ability to learn complex relationships in graph-structured data. However, existing GNN-based network models are designed to predict only a single performance metric at a time, leading to computational inefficiencies. Moreover, existing approaches lack effective methodologies for interpreting the learned relationships in a networking context. We present MHNet, a multi-head GNN architecture capable of simultaneously predicting delay, jitter, and packet loss in network traffic flows. To train MHNet, we propose an adaptive optimization strategy that constructs a balanced update direction for the model weights at each epoch from the normalized gradients of the individual loss functions corresponding to the performance-metric outputs. To interpret the relationships learned by the model in the network graph, we construct a gradient-based analysis framework that integrates networking domain knowledge to assess the influence of input features on the prediction outputs. Experimental results show that MHNet achieves prediction accuracy comparable to the state-of-the-art RouteNet model across all metrics, while reducing inference-stage Floating Point Operations (FLOPs) cost by 67%. The interpretation analysis further reveals that MHNet mitigates oversmoothing and selectively focuses on the most relevant substructures of the network feature graph when predicting performance metrics for traffic flows.
{"title":"MHNet: A Multi-Head GNN Architecture for Efficient Network Modeling","authors":"Sandushan Ranaweera;Ren Ping Liu;Ying He;Beeshanga Jayawickrama","doi":"10.1109/OJCOMS.2025.3641933","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3641933","url":null,"abstract":"In recent years, Graph Neural Networks (GNNs) have emerged as a powerful tool for network modeling due to their ability to learn complex relationships in graph-structured data. However, the existing GNN-based network models are designed to predict only a single performance metric at a time, leading to computational inefficiencies. Moreover, existing approaches lack effective methodologies for interpreting the learned relationships in a networking context. We present MHNet, a multi-head GNN architecture capable of simultaneously predicting delay, jitter, and packet loss in network traffic flows. To train MHNet, we propose an adaptive optimization strategy that constructs a balanced update direction to update the weights of the model at each epoch using the normalized gradients of the individual loss functions correspond to performance metric outputs. To interpret the relationships learned by the model in the network graph, we construct a gradient-based analysis framework that integrates networking domain knowledge to assess the influence of input features on the prediction outputs. Experimental results show that MHNet achieves prediction accuracy comparable to the state-of-the-art RouteNet model across all metrics, while reducing inference-stage Floating Point Operations (FLOPs) cost by 67%. The interpretation analysis further reveals that MHNet mitigates oversmoothing and selectively focuses on the most relevant substructures of the network feature graph when predicting performance metrics for traffic flows.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10449-10464"},"PeriodicalIF":6.3,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11288051","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-08 | DOI: 10.1109/OJCOMS.2025.3641307
Title: A Survey on GenAI-Driven Digital Twins: Toward Intelligent 6G Networks and Metaverse Systems
Faisal Naeem;Mansoor Ali;Georges Kaddoum;Yasir Faheem;Yan Zhang;Merouane Debbah;Chau Yuen
Sixth-Generation (6G) networks aim to deliver unprecedented network performance by facilitating intelligent, ultra-low-latency, and massively connected applications that seamlessly integrate the physical and digital domains through context-aware operation. Within this broader shift, digital twins (DTs) have demonstrated notable improvements in overall network performance by creating high-fidelity digital counterparts of physical 6G systems. These DTs give researchers and operators a way to view network behavior as it evolves, to forecast likely performance patterns, and, crucially, to adjust key processes such as beamforming, resource allocation, and interference management. Even so, the value of DT-based optimization is limited by several practical factors: their effectiveness depends heavily on access to reliable, sufficiently rich data, and the inherent complexity of 6G environments often makes accurate modeling and efficient resource coordination challenging. This paper examines how a range of generative artificial intelligence (GenAI) models can be used alongside DTs to strengthen resource allocation and improve security in 6G networks. It also sets out a GenAI-enabled DT framework for various 6G-enabling applications, highlighting the potential roles of different GenAI models in supporting semantic communications, the metaverse, integrated sensing and communication (ISAC), AI-generated content (AIGC), and reconfigurable intelligent surfaces (RIS). The paper concludes by drawing attention to emerging conceptual frameworks for DT-GenAI integration, noting several unresolved research challenges and outlining future directions for deploying GenAI-augmented DTs to achieve intelligent, adaptive, and resilient 6G networks.
{"title":"A Survey on GenAI-Driven Digital Twins: Toward Intelligent 6G Networks and Metaverse Systems","authors":"Faisal Naeem;Mansoor Ali;Georges Kaddoum;Yasir Faheem;Yan Zhang;Merouane Debbah;Chau Yuen","doi":"10.1109/OJCOMS.2025.3641307","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3641307","url":null,"abstract":"Sixth-Generation (6G) networks aim to deliver unprecedented network performance by facilitating intelligent, ultra-low-latency, and massively connected applications that seamlessly integrate the physical and digital domains through context-aware operation. These applications work across physical and digital environments. Within this broader shift, digital twins (DTs) have demonstrated notable improvements in overall network performance by creating high-fidelity digital counterparts of physical 6G systems. These DTs give researchers and operators a way to view network behavior as it evolves, to forecast likely performance patterns, and – crucially – to adjust key processes such as beamforming, resource allocation, and interference management. Even so, the value of DT-based optimization is limited by several practical factors. Their effectiveness depends a great deal on access to reliable and sufficiently rich data, and the inherent complexity of 6G environments often makes accurate modeling and efficient resource coordination challenging. This paper examines how a range of generative artificial intelligence (GenAI) models can be used alongside DTs to strengthen resource allocation and improve security in 6G networks. It also sets out a GenAI-enabled DT framework for various 6G-enabling applications, highlighting the potential roles of different GenAI models in supporting semantic communications, the metaverse, integrated sensing and communication (ISAC), AI-generated content (AIGC), and reconfigurable intelligent surfaces (RIS). This paper concludes by drawing attention to emerging conceptual frameworks for DT–GenAI integration. It notes several research challenges that have yet to be resolved, and outlines future directions for deploying GenAI-augmented DTs to achieve intelligent, adaptive, and resilient 6G networks.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10365-10402"},"PeriodicalIF":6.3,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11282968","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-08 | DOI: 10.1109/OJCOMS.2025.3641519
Title: Semi-Blind Receivers for Hybrid Reflecting and Sensing RIS
Amarilton L. Magalhães;André L. F. De Almeida
Recent research has delved into advanced designs for reconfigurable intelligent surfaces (RIS) with integrated sensing functions. One promising concept is the hybrid RIS (HRIS), which blends sensing and reflecting meta-atoms. This enables the HRIS to process signals, aiding in channel estimation (CE) and symbol detection tasks. This paper formulates novel semi-blind receivers for HRIS-aided wireless communications that enable joint symbol and channel estimation at the HRIS and the base station (BS). The proposed receivers exploit a tensor coding at the transmit side while capitalizing on the multilinear structures of the received signals. We develop iterative and closed-form receiver algorithms for the joint estimation of the uplink channels and symbols at both the HRIS and the BS. The proposed receivers offer symbol decoding capabilities to the HRIS and ensure ambiguity-free separate CE without requiring an a priori training stage. We also study identifiability conditions that guarantee a unique joint channel and symbol recovery, and discuss the computational complexities and tradeoffs involved in the proposed semi-blind receivers. Our findings demonstrate the competitive performances of the proposed solutions at the HRIS and the BS and unveil distinct performance trends based on the possible combinations of HRIS-BS receiver pairs. Finally, extensive numerical results elucidate the interplay between power splitting, symbol recovery, and CE accuracy in HRIS-assisted communications. Such insights are pivotal for optimizing receiver design and enhancing system performance in future HRIS deployments.
{"title":"Semi-Blind Receivers for Hybrid Reflecting and Sensing RIS","authors":"Amarilton L. Magalhães;André L. F. De Almeida","doi":"10.1109/OJCOMS.2025.3641519","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3641519","url":null,"abstract":"Recent research has delved into advanced designs for reconfigurable intelligent surfaces (RIS) with integrated sensing functions. One promising concept is the hybrid RIS (HRIS), which blends sensing and reflecting meta-atoms. This enables HRIS to process signals, aiding in channel estimation (CE) and symbol detection tasks. This paper formulates novel semi-blind receivers for HRIS-aided wireless communications that enable joint symbol and CE at the HRIS and BS. The proposed receivers exploit a tensor coding at the transmit side, while capitalizing on the multilinear structures of the received signals. We develop iterative and closed-form receiver algorithms for joint estimation of the uplink channels and symbols at both the HRIS and the BS, enabling joint channel and symbol estimation functionalities. The proposed receivers offer symbol decoding capabilities to the HRIS and ensure ambiguity-free separate CE without requiring an a priori training stage. We also study identifiability conditions that provide a unique joint channel and symbol recovery, and discuss the computational complexities and tradeoffs involved in the proposed semi-blind receivers. Our findings demonstrate the competitive performances of the proposed solutions at the HRIS and the BS and unveil distinct performance trends based on the possible combinations of HRIS-BS receiver pairs. Finally, extensive numerical results elucidate the interplay between power splitting, symbol recovery, and CE accuracy in HRIS-assisted communications. Such insights are pivotal for optimizing receiver design and enhancing system performance in future HRIS deployments.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10542-10566"},"PeriodicalIF":6.3,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11282960","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-03 | DOI: 10.1109/OJCOMS.2025.3639560
Title: Enhancing Wireless Backhaul Networks With Parallel FSO-mmWave Systems: Experimental Analysis and Availability Assessment
Fahad S. Alqurashi;Jiajie Xu;Luis Barreiro Goiriz;Mohamed-Slim Alouini
In this work, we report the deployment and experimental analysis of a hybrid free-space optical (FSO) and millimeter-wave (mmWave) backhaul system over a 1.2 km link to connect an under-served area. The parallel configuration of FSO and mmWave enables mutual backup during adverse weather, improving service availability to 99.10%, compared to 81.90% and 90.99% for standalone FSO and mmWave, respectively. Empirical measurements, supported by Monte Carlo simulations, confirm strong agreement with theoretical log-normal (FSO) and Gaussian (mmWave) models. Environmental analysis revealed that wind speed induces misalignment and power loss in FSO, while humidity significantly degrades mmWave performance but has minimal impact on FSO at 1550 nm. These complementary behaviors highlight the practicality of hybrid deployment, offering a cost-effective and resilient alternative to fiber for bridging the digital divide and ensuring high-speed connectivity in challenging environments.
{"title":"Enhancing Wireless Backhaul Networks With Parallel FSO-mmWave Systems: Experimental Analysis and Availability Assessment","authors":"Fahad S. Alqurashi;Jiajie Xu;Luis Barreiro Goiriz;Mohamed-Slim Alouini","doi":"10.1109/OJCOMS.2025.3639560","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3639560","url":null,"abstract":"In this work, we report the deployment and experimental analysis of a hybrid free-space optical (FSO) and millimeter-wave (mmWave) backhaul system over a 1.2 km link to connect an under-served area. The parallel configuration of FSO and mmWave enables mutual backup during adverse weather, improving service availability to 99.10%, compared to 81.90% and 90.99% for standalone FSO and mmWave, respectively. Empirical measurements, supported by Monte Carlo simulations, confirm strong agreement with theoretical log-normal (FSO) and Gaussian (mmWave) models. Environmental analysis revealed that wind speed induces misalignment and power loss in FSO, while humidity significantly degrades mmWave performance but has minimal impact on FSO at 1550 nm. These complementary behaviors highlight the practicality of hybrid deployment, offering a cost-effective and resilient alternative to fiber for bridging the digital divide and ensuring high-speed connectivity in challenging environments.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10219-10228"},"PeriodicalIF":6.3,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11277290","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-03 | DOI: 10.1109/OJCOMS.2025.3640594
Title: Full Duplex Transmit and Receive Beamforming With Block-Sparse Antenna Selection for Multi-User Massive MIMO
Richard Ziegahn;Tho Le-Ngoc
Through simultaneous downlink and uplink transmission in the same frequency slot, in-band full duplex has the potential to double the spectral efficiency of communication systems. However, this potential is difficult to realize due to the strong self-interference (SI). The large number of antenna elements in massive MIMO makes spatial suppression a promising approach to SI mitigation, but this approach is challenged by the coupling of the transmit and receive beamforming problems. This paper applies a combined beamforming and reduced-connectivity antenna selection approach to suppress SI while maintaining user directivity and reducing switching complexity. To solve the non-convex beamforming problem, Regularized Joint Linearly Constrained Minimum Variance (RJLCMV) is proposed, which leverages disappearing regularization to provide deep SI nulling while avoiding the self-nulling problem. To solve the non-convex joint group antenna selection problem, we pose it as a block-sparse recovery problem and propose Hard Thresholding Pursuit-based Joint Group Antenna Selection (HTP-JGAS), an iterative method based on compressed sensing. Using measured SI channel data, RJLCMV decreases the probability of deep self-nulling by 49% compared to a standard alternating approach. By combining HTP-JGAS with RJLCMV, the probability of deep nulling is nearly eliminated compared to a sub-connected approach, while the runtime is over two orders of magnitude faster than existing nature-inspired approaches. Furthermore, it is demonstrated that the proposed partial switching connectivity does not substantially reduce performance while greatly reducing hardware complexity.
{"title":"Full Duplex Transmit and Receive Beamforming With Block-Sparse Antenna Selection for Multi-User Massive MIMO","authors":"Richard Ziegahn;Tho Le-Ngoc","doi":"10.1109/OJCOMS.2025.3640594","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3640594","url":null,"abstract":"Through simultaneous downlink and uplink transmission on the same frequency slot, in-band full duplex has the potential to double the spectral efficiency of communication systems, however, the potential is difficult to realize due to the strong self-interference (SI). The great number of antenna elements in massive MIMO has made spatial SI suppression a promising solution to SI suppression but this approach is challenged by the coupling of the transmit and receive beamforming problems. This paper applies a combined beamforming and reduced connectivity antenna selection approach to suppress SI while maintaining user directivity and reduce switching complexity. To solve the non-convex beamforming problem, Regularized Joint Linearly Constrained Minimum Variance (RJLCMV) is proposed which leverages disappearing regularization to provide deep SI nulling while avoiding the self-nulling problem. To solve the non-convex joint group antenna selection, we pose the problem as a block-sparse recovery problem and propose Hard Thresholding Pursuit-based Joint Group Antenna Selection (HTP-JGAS), an iterative method based on compressed sensing. Using measured SI channel data, RJLCMV decreases the probability of deep self-nulling by 49% compared to a standard alternating approach. By leveraging HTP-JGAS with RJLCMV, the probability of deep nulling is nearly eliminated compared to a sub-connected approach while the runtime is over two orders of magnitude faster than existing nature-inspired approaches. Furthermore, it is demonstrated that the proposed partial switching connectivity does not substantially reduce performance while providing a great reduction in hardware complexity.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10287-10306"},"PeriodicalIF":6.3,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11277302","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-03 | DOI: 10.1109/OJCOMS.2025.3639583
Title: Cache-Enabled XR Systems: Delay-Aware Resource Allocation for Immersive Experience
Krishnendu S. Tharakan;Hayssam Dahrouj;Nour Kouzayha;Hesham ElSawy;Tareq Y. Al-Naffouri
Extended Reality (XR) applications offer immersive experiences across industrial, healthcare, educational, and entertainment sectors, but they demand ultra-low latency and high data rates that challenge current cellular infrastructure. This paper proposes a latency-aware mobile XR system comprising multi-antenna base stations (BSs) and edge servers, each equipped with limited fronthaul capacity and local caching. To minimize end-to-end latency, we develop a unified optimization framework that jointly addresses field of view (FOV) caching and rendering, BS selection, beamforming vector design, and edge server placement. The framework captures the inter-dependencies between user-specific FOVs, rendering decisions, and resource constraints such as computation capacity and power, ultimately enhancing the quality of personal experience (QoPE). We formulate the problem as a mixed-integer non-convex program and solve it using $\ell_{0}$-norm relaxation, successive convex approximation, and fractional programming. Reformulating it as a multiple choice multiple dimensional knapsack problem (MMKP), we apply Lagrangian dual decomposition to derive efficient solutions. Simulation results demonstrate that our approach significantly outperforms baseline algorithms. Notably, a 91% reduction in average delay is achieved when varying BS cache size, and a 94% improvement is observed over the greedy-edge method when adjusting edge server cache size. These results highlight the potential of the proposed method for scalable, delay-aware XR systems.
{"title":"Cache-Enabled XR Systems: Delay-Aware Resource Allocation for Immersive Experience","authors":"Krishnendu S. Tharakan;Hayssam Dahrouj;Nour Kouzayha;Hesham ElSawy;Tareq Y. Al-Naffouri","doi":"10.1109/OJCOMS.2025.3639583","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3639583","url":null,"abstract":"Extended Reality (XR) applications offer immersive experiences across industrial, healthcare, educational, and entertainment sectors, but they demand ultra-low latency and high data rates that challenge current cellular infrastructure. This paper proposes a latency-aware mobile XR system comprising multi-antenna base stations (BSs) and edge servers, each equipped with limited fronthaul capacity and local caching. To minimize end-to-end latency, we develop a unified optimization framework that jointly addresses field of view (FOV) caching and rendering, BS selection, beamforming vector design, and edge server placement. The framework captures the inter-dependencies between user-specific FOVs, rendering decisions, and resource constraints such as computation capacity and power, ultimately enhancing the quality of personal experience (QoPE). We formulate the problem as a mixed-integer non-convex program and solve it using <inline-formula> <tex-math>$ell _{0}$ </tex-math></inline-formula>-norm relaxation, successive convex approximation, and fractional programming. Reformulating it as a multiple choice multiple dimensional knapsack problem (MMKP), we apply Lagrangian dual decomposition to derive efficient solutions. Simulation results demonstrate that our approach significantly outperforms baseline algorithms. Notably, a 91% reduction in average delay is achieved when varying BS cache size, and a 94% improvement is observed over the greedy-edge method when adjusting edge server cache size. These results highlight the potential of the proposed method for scalable, delay-aware XR systems.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10307-10321"},"PeriodicalIF":6.3,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11277275","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-03 | DOI: 10.1109/OJCOMS.2025.3640408
Title: A Survey of Radio Resource Scheduling for 6G and Future Wireless Networks
Ahmad M. Jaradat;Mohanad Alayedi;Hüseyin Arslan
The evolution of wireless communication is rapidly advancing beyond Fifth Generation (5G) systems toward Sixth Generation (6G) and future network paradigms. These networks aim to deliver unprecedented data rates, ultra-reliable low-latency communication (URLLC), massive connectivity, and seamless integration of terrestrial and non-terrestrial infrastructures. Efficient radio resource scheduling (RRS) is essential to meeting these demands while ensuring optimal performance in increasingly complex and heterogeneous environments. This survey presents a comprehensive and structured overview of RRS strategies for 5G, 6G, and beyond. Anchored in a unified taxonomy framework, it systematically classifies scheduling approaches across key dimensions, including scheduling methodology, network architecture, and service types. The paper explores a wide spectrum of techniques–from traditional heuristic algorithms to advanced solutions based on multiple-input multiple-output (MIMO), millimeter-wave (mmWave), network slicing, and cross-layer optimization. Special emphasis is placed on the transformative role of machine learning (ML) and artificial intelligence (AI), including supervised learning (SL), reinforcement learning (RL), and deep learning (DL)-based models for intelligent, adaptive scheduling. The survey also discusses emerging challenges such as joint sensing and communication scheduling, edge computing and localized resource allocation (RA), digital twin-assisted scheduling, multi-carrier scheduling, and quantum-assisted scheduling. By highlighting state-of-the-art techniques, open research gaps, and future directions, this survey serves as a valuable reference for researchers and practitioners aiming to develop scalable, secure, and intelligent RRS solutions for next-generation wireless systems.
{"title":"A Survey of Radio Resource Scheduling for 6G and Future Wireless Networks","authors":"Ahmad M. Jaradat;Mohanad Alayedi;Hüseyin Arslan","doi":"10.1109/OJCOMS.2025.3640408","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3640408","url":null,"abstract":"The evolution of wireless communication is rapidly advancing beyond Fifth Generation (5G) systems toward Sixth Generation (6G) and future network paradigms. These networks aim to deliver unprecedented data rates, ultra-reliable low-latency communication (URLLC), massive connectivity, and seamless integration of terrestrial and non-terrestrial infrastructures. Efficient radio resource scheduling (RRS) is essential to meeting these demands while ensuring optimal performance in increasingly complex and heterogeneous environments. This survey presents a comprehensive and structured overview of RRS strategies for 5G, 6G, and beyond. Anchored in a unified taxonomy framework, it systematically classifies scheduling approaches across key dimensions, including scheduling methodology, network architecture, and service types. The paper explores a wide spectrum of techniques–from traditional heuristic algorithms to advanced solutions based on multiple-input multiple-output (MIMO), millimeter-wave (mmWave), network slicing, and cross-layer optimization. Special emphasis is placed on the transformative role of machine learning (ML) and artificial intelligence (AI), including supervised learning (SL), reinforcement learning (RL), and deep learning (DL)-based models for intelligent, adaptive scheduling. The survey also discusses emerging challenges such as joint sensing and communication scheduling, edge computing and localized resource allocation (RA), digital twin-assisted scheduling, multi-carrier scheduling, and quantum-assisted scheduling. By highlighting state-of-the-art techniques, open research gaps, and future directions, this survey serves as a valuable reference for researchers and practitioners aiming to develop scalable, secure, and intelligent RRS solutions for next-generation wireless systems.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"10191-10218"},"PeriodicalIF":6.3,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11277303","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}