This work explores the integration of Quantum Machine Learning (QML) and Quantum-Inspired (QI) techniques for optimizing End-to-End (E2E) network services in telecommunication systems, particularly focusing on 5G networks and beyond. The application of QML and QI algorithms is investigated, comparing their performance with classical Machine Learning (ML) approaches. The study employs a hybrid framework that combines quantum and classical computing, leveraging the strengths of QML and QI without being limited by the availability of quantum hardware. This is particularized for the optimization of the Quality of Experience (QoE) over cellular networks. The framework comprises an estimator that predicts the expected QoE from user metrics, service settings, and cell configuration, and an optimizer that uses this estimate to choose the best cell and service configuration. Although the approach is applicable to any QoE-based network management, its implementation is particularized for optimizing network configurations for Cloud Gaming services. The framework is then evaluated via performance metrics: accuracy, model loading time, and inference time for the estimator; time to solution and solution score for the optimizer. The results indicate that QML models achieve similar or superior accuracy to classical ML models for estimation while decreasing inference and loading times. Furthermore, potential for better performance is observed for higher-dimensional data, highlighting promising results for higher-complexity problems. Thus, the results demonstrate the promising potential of QML in advancing network optimization, although challenges related to data availability and integration complexity between quantum and classical ML are identified as future research lines.
{"title":"Quantum-Based QoE Optimization in Advanced Cellular Networks: Integration and Cloud Gaming Use Case","authors":"Fatma Chaouech;Javier Villegas;António Pereira;Carlos Baena;Sergio Fortes;Raquel Barco;Dominic Gribben;Mohammad Dib;Alba Villarino;Aser Cortines;Román Orús","doi":"10.1109/OJCOMS.2025.3628485","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3628485","url":null,"abstract":"This work explores the integration of Quantum Machine Learning (QML) and Quantum-Inspired (QI) techniques for optimizing End-to-End (E2E) network services in telecommunication systems, particularly focusing on 5G networks and beyond. The application of QML and QI algorithms is investigated, comparing their performance with classical Machine Learning (ML) approaches. The present study employs a hybrid framework combining quantum and classical computing leveraging the strengths of QML and QI, without the penalty of quantum hardware availability. This is particularized for the optimization of the Quality of Experience (QoE) over cellular networks. The framework comprises an estimator for obtaining the expected QoE based on user metrics, service settings, and cell configuration, and an optimizer that uses the estimation to choose the best cell and service configuration. Although the approach is applicable to any QoE-based network management, its implementation is particularized for the optimization of network configurations for Cloud Gaming services. Then, it is evaluated via performance metrics such as accuracy, model loading and inference times for the estimator, time to solution and solution score for the optimizer. The results indicate that QML models achieve similar or superior accuracy to classical ML models for estimation, while decreasing inference and loading times. Furthermore, potential for better performance is observed for higher-dimensional data, highlighting promising results for higher complexity problems. 
Thus, the results demonstrate the promising potential of QML in advancing network optimization, although challenges related to data availability and integration complexities between quantum and classical ML are identified as future research lines.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9604-9618"},"PeriodicalIF":6.3,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11224712","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
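The estimator/optimizer split described in the abstract can be sketched as a toy loop: predict a QoE score for each candidate configuration, then pick the best. The linear scoring function and the configuration fields (service bitrate, cell bandwidth) are illustrative assumptions, not the paper's actual QoE model or feature set.

```python
# Toy sketch of a QoE estimator feeding a configuration optimizer.
# Scoring function and parameters are illustrative assumptions.
from itertools import product

def estimate_qoe(latency_ms: float, bitrate_mbps: float, cell_bandwidth_mhz: float) -> float:
    """Hypothetical QoE estimator: rewards bitrate and bandwidth, penalizes latency."""
    return 0.5 * bitrate_mbps + 0.2 * cell_bandwidth_mhz - 0.1 * latency_ms

def optimize_config(latency_ms, bitrates, bandwidths):
    """Enumerate (service bitrate, cell bandwidth) pairs; pick the best estimated QoE."""
    return max(product(bitrates, bandwidths),
               key=lambda cfg: estimate_qoe(latency_ms, *cfg))

best = optimize_config(30.0, bitrates=[5, 10, 20], bandwidths=[10, 20, 40])
print(best)  # (20, 40): the toy score is monotone in both knobs
```

In the paper, the estimator is a learned (QML or classical ML) model and the optimizer a QI solver; the exhaustive `max` above only mirrors the interface between the two stages.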
Pub Date: 2025-11-03. DOI: 10.1109/OJCOMS.2025.3628482
Morteza Alijani;Wout Joseph;David Plets
Visible Light Positioning (VLP) has emerged as a promising technology for next-generation indoor positioning systems (IPS), particularly within the scope of sixth-generation (6G) wireless networks. Its attractiveness stems from leveraging existing lighting infrastructure equipped with light-emitting diodes (LEDs), enabling cost-efficient deployments and achieving positioning accuracy in the centimeter-to-decimeter range. However, widespread adoption of traditional VLP solutions faces significant barriers due to the increased cost and operational complexity associated with modulating LEDs, which consequently reduces illumination efficiency by lowering their radiant flux. To address these limitations, recent research has introduced the concept of unmodulated Visible Light Positioning (uVLP), which exploits Light Signals of Opportunity (LSOOP) emitted by unmodulated illumination sources such as conventional LEDs. This paradigm offers a cost-effective, low-infrastructure alternative for indoor positioning by eliminating the need for modulation hardware and maintaining lighting efficiency. This paper delineates the fundamental principles of uVLP, provides a comparative analysis of uVLP versus conventional VLP methods, and classifies existing uVLP techniques according to receiver technology into intensity-based methods (e.g., photodiodes and solar cells) and imaging-based methods. Additionally, we propose a comprehensive taxonomy categorizing techniques into demultiplexed and undemultiplexed approaches. Within this structured framework, we critically review current advancements in uVLP, discuss prevailing challenges, and outline promising research directions essential for developing robust, scalable, and widely deployable uVLP solutions.
{"title":"Unmodulated Visible-Light Positioning: A Deep-Dive Into Techniques, Studies, and Future Prospects","authors":"Morteza Alijani;Wout Joseph;David Plets","doi":"10.1109/OJCOMS.2025.3628482","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3628482","url":null,"abstract":"Visible Light Positioning (VLP) has emerged as a promising technology for next-generation indoor positioning systems (IPS), particularly within the scope of sixth-generation (6G) wireless networks. Its attractiveness stems from leveraging existing lighting infrastructures equipped with light-emitting diodes (LEDs), enabling cost-efficient deployments and achieving high-precision positioning accuracy in the centimeter-to-decimeter range. However, widespread adoption of traditional VLP solutions faces significant barriers due to the increased costs and operational complexity associated with modulating LEDs, which consequently reduces illumination efficiency by lowering their radiant flux. To address these limitations, recent research has introduced the concept of unmodulated Visible Light Positioning (uVLP), which exploits Light Signals of Opportunity (LSOOP) emitted by unmodulated illumination sources such as conventional LEDs. This paradigm offers a cost-effective, low-infrastructure alternative for indoor positioning by eliminating the need for modulation hardware and maintaining lighting efficiency. This paper delineates the fundamental principles of uVLP, provides a comparative analysis of uVLP versus conventional VLP methods, and classifies existing uVLP techniques according to receiver technologies into intensity-based methods (e.g., photodiodes, solar cells, etc.) and imaging-based methods. Additionally, we propose a comprehensive taxonomy categorizing techniques into demultiplexed and undemultiplexed approaches. 
Within this structured framework, we critically review current advancements in uVLP, discuss prevailing challenges, and outline promising research directions essential for developing robust, scalable, and widely deployable uVLP solutions.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9448-9485"},"PeriodicalIF":6.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11224707","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
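As a rough illustration of the intensity-based receiver category above, the sketch below estimates a photodiode's position from received signal strength (RSS) of unmodulated LEDs under an idealized Lambertian channel (order m = 1, receiver horizontal at a known height below the ceiling). The geometry, channel model, and grid-search solver are assumptions for illustration only, not methods taken from the survey.

```python
# Toy RSS-based uVLP position fix: for Lambertian order m = 1 with the
# receiver facing straight up, received power scales as h^2 / d^4.
import math

LEDS = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]   # LED (x, y) positions on the ceiling (m)
H = 2.5                                        # vertical LED-to-receiver distance (m)

def rss(led, rx):
    """Relative received signal strength at rx from one LED."""
    d2 = (led[0] - rx[0]) ** 2 + (led[1] - rx[1]) ** 2 + H * H  # squared 3D distance
    return H * H / (d2 * d2)                                    # ~ h^2 / d^4

def locate(measured, step=0.05):
    """Grid-search the floor for the position whose RSS vector best matches."""
    best, best_err = None, float("inf")
    for i in range(int(4 / step) + 1):
        for j in range(int(4 / step) + 1):
            p = (i * step, j * step)
            err = sum((rss(led, p) - m) ** 2 for led, m in zip(LEDS, measured))
            if err < best_err:
                best, best_err = p, err
    return best

truth = (1.0, 2.0)
est = locate([rss(led, truth) for led in LEDS])
print(est)  # close to (1.0, 2.0)
```

Real uVLP receivers must first separate the contributions of individual unmodulated LEDs (the demultiplexed vs. undemultiplexed distinction in the taxonomy); here the per-LED RSS values are simply given.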
Pub Date: 2025-11-03. DOI: 10.1109/OJCOMS.2025.3628345
Li Li;Hongbin Chen;Zhihui Guo
Autonomous aerial vehicles (AAVs) have been widely applied to target detection or data collection tasks in wireless sensor networks (WSNs). However, few existing studies have considered the resource overhead associated with the collaborative execution of both tasks. To overcome this limitation, this paper proposes a multi-AAV-enabled integrated sensing and communication (ISAC) scheme for WSNs. Each AAV serves as a dual-functional aerial platform for target detection and data collection. Considering non-simultaneous sensing and communication operations, we design a novel ISAC frame structure to optimize sensing and communication time allocation. To measure data freshness, the age of information (AoI) is introduced as a critical metric, while the probability of detection (PD) is used to evaluate sensing performance. We formulate a joint optimization problem to minimize the average AoI of sensors under a PD constraint by coordinating AAV-sensor association, AAV flight trajectory, sensing and communication time allocation, sensing beamwidth, and AAV flight altitude. To tackle the non-convex mixed-integer programming challenge, the optimization problem is decomposed into four subproblems, for which we propose a multiple-traveling-salesman-problem-based data collection scheduling strategy and develop a three-layer alternating optimization algorithm. Simulation results show that the proposed algorithm reduces the average AoI and total sensing time by up to 19% and 77%, respectively, compared to baseline schemes. Furthermore, for a fixed PD value, multi-AAV standalone sensing (SS) improves AoI performance by 38% over multi-AAV cooperative sensing (CS). In contrast, multi-AAV CS reduces the total sensing time by 40% compared to multi-AAV SS.
{"title":"AoI-Optimal Multi-AAV-Enabled ISAC for Target Detection and Data Collection in Wireless Sensor Networks","authors":"Li Li;Hongbin Chen;Zhihui Guo","doi":"10.1109/OJCOMS.2025.3628345","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3628345","url":null,"abstract":"Autonomous aerial vehicles (AAVs) have been widely applied to target detection or data collection tasks in wireless sensor networks (WSNs). However, few existing studies considered resource overhead associated with the collaborative execution of both tasks. To overcome this limitation, this paper proposes a multi-AAV-enabled integrated sensing and communication (ISAC) scheme for WSNs. Each AAV serves as a dual-functional aerial platform for target detection and data collection. Considering non-simultaneous sensing and communication operations, we design a novel ISAC frame structure to optimize sensing and communication time allocation. To measure data freshness, the age of information (AoI) is introduced as a critical metric, while probability of detection (PD) is used to evaluate the sensing performance. We formulate a joint optimization problem to minimize the average AoI of sensors under the constraint of PD by coordinating AAV-sensor association, AAV flight trajectory, sensing and communication time allocation, sensing beamwidth, and AAV flight altitude. To tackle the non-convex mixed-integer programming challenge, the optimization problem is decomposed into four subproblems, for which we propose a multiple traveling salesman problem-based data collection scheduling strategy and develop a three-layer alternating optimization algorithm. Simulation results show that the proposed algorithm reduces the average AoI and total sensing time by up to 19% and 77%, respectively, compared to baseline schemes. Furthermore, for a fixed PD value, multi-AAV standalone sensing (SS) improves AoI performance by 38% over multi-AAV cooperative sensing (CS). 
In contrast, multi-AAV CS reduces the total sensing time by 40% compared to multi-AAV SS.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9504-9522"},"PeriodicalIF":6.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11224683","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145510202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
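The time-average AoI minimized in the abstract can be illustrated in isolation: a sensor's age grows linearly and resets at each delivery of a fresh sample. The sketch assumes age resets all the way to zero (i.e., in-flight delay neglected), a simplification relative to the paper's system model.

```python
# Time-average age of information (AoI) over a finite horizon, assuming the
# age drops to 0 whenever a fresh sensor sample is delivered (illustrative
# simplification: delivery delay ignored).
def average_aoi(delivery_times, horizon):
    """Integrate the sawtooth age curve and divide by the horizon length."""
    area, last = 0.0, 0.0
    for t in sorted(delivery_times):
        gap = t - last
        area += 0.5 * gap * gap   # triangular area since the last reset
        last = t
    gap = horizon - last
    area += 0.5 * gap * gap       # final segment up to the horizon
    return area / horizon

# Evenly spaced deliveries every 2 s over a 10 s horizon -> average age 1 s.
print(average_aoi([2.0, 4.0, 6.0, 8.0], 10.0))  # 1.0
```

The example makes the scheduling intuition behind the paper concrete: halving inter-delivery gaps halves the average AoI, which is why the AAV trajectory and time allocation are optimized jointly.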
Pub Date: 2025-10-30. DOI: 10.1109/OJCOMS.2025.3626862
Mahmoud M. Salim;Suhail I. Al-Dharrab;Daniel Benevides Da Costa;Ali H. Muqaibel
The emerging demands of sixth-generation wireless networks, such as ultra-connectivity, native intelligence, and cross-domain convergence, are bringing renewed focus to cooperative non-orthogonal multiple access (C-NOMA) as a fundamental enabler of scalable, efficient, and intelligent communication systems. C-NOMA builds on the core benefits of NOMA by leveraging user cooperation and relay strategies to enhance spectral efficiency, coverage, and energy performance. This article presents a comprehensive and forward-looking survey on the integration of C-NOMA with key enabling technologies, including radio frequency energy harvesting, cognitive radio networks, reconfigurable intelligent surfaces, space-air-ground integrated networks, and integrated sensing and communication-assisted semantic communication. Foundational principles and relaying protocols are first introduced to establish the technical relevance of C-NOMA. Then, a focused investigation is conducted into protocol-level synergies, architectural models, and deployment strategies across these technologies. Beyond integration, this article emphasizes the orchestration of C-NOMA across future application domains such as digital twins, extended reality, and e-health. In addition, it provides an extensive and in-depth review of recent literature, categorized by relaying schemes, system models, performance metrics, and optimization paradigms, including model-based, heuristic, and AI-driven approaches. Finally, open challenges and future research directions are outlined, spanning standardization, security, and cross-layer design, positioning C-NOMA as a key pillar of intelligent next-generation network architectures.
{"title":"Cooperative NOMA Meets Emerging Technologies: A Survey for Next-Generation Wireless Networks","authors":"Mahmoud M. Salim;Suhail I. Al-Dharrab;Daniel Benevides Da Costa;Ali H. Muqaibel","doi":"10.1109/OJCOMS.2025.3626862","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3626862","url":null,"abstract":"The emerging demands of sixth-generation wireless networks, such as ultra-connectivity, native intelligence, and cross-domain convergence, are bringing renewed focus to cooperative non-orthogonal multiple access (C-NOMA) as a fundamental enabler of scalable, efficient, and intelligent communication systems. C-NOMA builds on the core benefits of NOMA by leveraging user cooperation and relay strategies to enhance spectral efficiency, coverage, and energy performance. This article presents a comprehensive and forward-looking survey on the integration of C-NOMA with key enabling technologies, including radio frequency energy harvesting, cognitive radio networks, reconfigurable intelligent surfaces, space-air-ground integrated networks, and integrated sensing and communication-assisted semantic communication. Foundational principles and relaying protocols are first introduced to establish the technical relevance of C-NOMA. Then, a focused investigation is conducted into protocol-level synergies, architectural models, and deployment strategies across these technologies. Beyond integration, this article emphasizes the orchestration of C-NOMA across future application domains such as digital twins, extended reality, and e-health. In addition, it provides an extensive and in-depth review of recent literature, categorized by relaying schemes, system models, performance metrics, and optimization paradigms, including model-based, heuristic, and AI-driven approaches. 
Finally, open challenges and future research directions are outlined, spanning standardization, security, and cross-layer design, positioning C-NOMA as a key pillar of intelligent next-generation network architectures.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9247-9286"},"PeriodicalIF":6.3,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11222748","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-30. DOI: 10.1109/OJCOMS.2025.3627389
Haneya Naeem Qureshi;Ali Imran
Traditional neural networks (NNs) struggle when trained on limited data and lack inherent interpretability. This limitation arises because conventional NNs operate as a tabula rasa, relying entirely on the volume and quality of training data for learning, without any inherent knowledge or structure to guide them. Inspired by natural neural networks in living beings, which exhibit innate intelligence through purpose-driven design even before learning begins, we propose a novel framework, WINET (White box Interpretable Neural Networks). Unlike traditional neural networks, WINET integrates domain knowledge directly into the architecture from the outset, enabling relatively robust learning even with minimal training data. Our experiments reveal that, much like human learning, WINET requires significantly less training data than traditional neural networks, while maintaining resilience against training data scarcity. Additionally, WINET enhances interpretability—an essential attribute for AI models involved in critical decision making—where conventional neural networks often fall short. To validate WINET’s effectiveness, we first compare it qualitatively with existing interpretable models, then quantitatively apply it to predicting mobile network coverage, a complex task influenced by both controlled and random variables. We compare WINET against two common alternatives to system modeling—(i) analytical models and (ii) conventional AI (black-box neural networks)—using both simulated and real data. Results show that conventional AI exhibits a drastic performance drop with scarce data (realistic drive test data), with Mean Squared Error increasing by 2200%. The analytical model also performs poorly. In contrast, WINET shows superior generalization and resilience to limited training data in unseen test scenarios.
{"title":"Training Resilient AI Models With Rich Interpretations From Highly Scarce Data","authors":"Haneya Naeem Qureshi;Ali Imran","doi":"10.1109/OJCOMS.2025.3627389","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3627389","url":null,"abstract":"Traditional neural networks struggle when trained on limited data and lack inherent interpretability. This limitation arises because conventional NNs operate as tabula rasa, relying entirely on the volume and quality of training data for learning, without any inherent knowledge or structure to guide them. Inspired by natural neural networks in living beings, which exhibit innate intelligence through purpose-driven design even before learning begins, we propose a novel framework, WINET (White box Interpretable Neural Networks). Unlike traditional neural networks, WINET integrates domain knowledge directly into the architecture from the outset, enabling relatively robust learning even with minimal training data. Our experiments reveal that, much like human learning, WINET requires significantly less training data than traditional neural networks, while maintaining resilience against training data scarcity. Additionally, WINET enhances interpretability—an essential attribute for AI models involved in critical decision making—where conventional neural networks often fall short. To validate WINET’s effectiveness, we first compare it qualitatively with existing interpretable models, then quantitatively apply it to predicting mobile network coverage, a complex task influenced by both controlled and random variables. We compared WINET against two common alternatives to system modeling—(i) analytical models and (ii) conventional AI (black box neural networks)—using both simulated and real data. Results show that conventional AI exhibits a drastic performance drop with scarce data (realistic drive test data), with Mean Squared Error increasing by 2200%. The analytical model also performs poorly. 
In contrast, WINET shows superior generalization and resilience to limited training data in unseen test scenarios.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9587-9603"},"PeriodicalIF":6.3,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11222735","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-27. DOI: 10.1109/OJCOMS.2025.3624260
Ahmed Nasser;Abdulkadir Celik;Ahmed M. Eltawil
Reconfigurable intelligent surface (RIS) partitioning offers a strategic solution for serving multiple user equipment (UE) devices simultaneously in blockage-prone wireless environments, leveraging multiple-input multiple-output (MIMO) and non-orthogonal multiple access (NOMA) technologies within the millimeter-wave (mmWave) spectrum. However, deploying RIS partitioning (RISP) in MIMO-NOMA is hindered by the exacerbated computational complexity involved in acquiring accurate channel state information (CSI). This paper proposes a novel learning framework based on multi-agent deep reinforcement learning (MA-DRL) that maximizes the sum rate of the RISP-aided MIMO-NOMA system without requiring the UEs’ CSI. The proposed framework jointly optimizes the RIS phase shifts, the number of RIS partitions, the UEs’ beamformers, and NOMA power allocation (PA), transforming the challenge into a high-dimensional combinatorial optimization problem. The MA-DRL algorithm integrates double deep Q-network (DDQN) and deep deterministic policy gradient (DDPG) agents, where each UE acts as a DDQN agent optimizing its beamformer, while the RIS serves as a DDPG agent handling partitioning and power control. An experimental testbed is developed to gather real-world data for training and evaluation. Results show that the MA-DRL algorithm closely approaches optimal performance, trailing the exhaustive search by only 8%, while reducing complexity by 95% and improving the sum rate by an average of 18% compared to traditional full-RIS setups.
{"title":"Multi-Agent DRL for RIS Partitioning, Beam Selection, and Power Control in MIMO-NOMA System","authors":"Ahmed Nasser;Abdulkadir Celik;Ahmed M. Eltawil","doi":"10.1109/OJCOMS.2025.3624260","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3624260","url":null,"abstract":"Reconfigurable intelligent surface (RIS) partitioning offers a strategic solution for serving multiple users equipment (UEs) simultaneously in blockage-prone wireless environments, leveraging multiple-input multiple-output (MIMO) and non-orthogonal multiple access (NOMA) technologies within the millimeter-wave (mmWave) spectrum. However, deploying RIS partitioning (RISP) in MIMO-NOMA is hindered by the exacerbated computational complexity involved in acquiring accurate channel state information (CSI). This paper proposes a novel learning framework based on multi-agent deep reinforcement learning (MA-DRL) that maximizes the sum rate of the RISP-aided MIMO-NOMA system without requiring UEs’ CSI. The proposed framework jointly optimizes RIS phase shifts, number of RIS partitions, UEs’ beamformers, and NOMA power allocation (PA), transforming the challenge into a high-dimensional combinatorial optimization problem. The MA-DRL algorithm integrates double deep Q networks (DDQN) and deep deterministic policy gradient (DDPG) agents, where each UE acts as a DDQN agent optimizing its beamformer, while the RIS serves as a DDPG agent handling partitioning and power control. An experimental testbed is developed to gather real-world data for training and evaluation. 
Results show that the MA-DRL algorithm closely approaches optimal performance, trailing the exhaustive search by only 8%, while reducing complexity by 95% and improving the sum rate by an average of 18% compared to traditional full RIS setups.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9073-9089"},"PeriodicalIF":6.3,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11218162","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
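The DDQN update rule the UE agents rely on can be shown in isolation: the online network selects the next action, while the target network evaluates it, which reduces the overestimation bias of vanilla DQN. The lookup-array stand-ins for the two networks below are illustrative, not the paper's architecture.

```python
# Double-DQN bootstrap target: decouple action *selection* (online net)
# from action *evaluation* (target net). Q-values are toy arrays here.
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Return the TD target y = r + gamma * Q_target(s', argmax_a Q_online(s', a))."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))          # online net picks the action
    return reward + gamma * q_target_next[a_star]   # target net scores that action

y = ddqn_target(1.0, np.array([0.2, 0.9, 0.5]), np.array([0.3, 0.4, 0.8]))
print(y)  # 1.0 + 0.99 * 0.4 = 1.396
```

Note that plain DQN would have bootstrapped from max(q_target_next) = 0.8 instead of 0.4; the decoupling is the whole point of the "double" variant.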
Time-Sensitive Networking (TSN) is an enhancement of Ethernet. It provides real-time capabilities in Layer-2 networks and guarantees quality of service (QoS) for data streams. TSN defines three different configuration models that specify how QoS requirements are signaled and reservations are admitted. End stations and bridges that use the same configuration model and are under the same administrative control form a so-called TSN domain within a larger Ethernet network. The procedure for QoS signaling of inter-domain streams, i.e., streams that are transmitted through different TSN domains, is challenging and not specified by standardization. The contributions of this work are manifold. First, we propose a novel unified signaling scheme for inter-domain communication with TSN. It relies only on TSN signaling protocols standardized by the IEEE. Second, we show how the unified signaling scheme can be used to enable inter-domain QoS signaling across non-TSN domains, i.e., domains that natively do not support TSN-based admission control. Third, we present a model to calculate the delay of TSN QoS signaling. Fourth, we evaluate different TSN signaling approaches in both single- and multi-domain settings. Our results indicate that centralized signaling causes shorter delay than distributed signaling. Furthermore, we demonstrate that the delay of centralized signaling mainly depends on the available compute power for configuration calculations.
{"title":"A Unified Inter-Domain QoS Signaling Scheme for Time-Sensitive Networking","authors":"Lukas Osswald;Steffen Lindner;Lukas Bechtel;Michael Menth","doi":"10.1109/OJCOMS.2025.3626051","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3626051","url":null,"abstract":"Time-Sensitive Networking (TSN) is an enhancement of Ethernet. It provides real-time capabilities in Layer-2 networks and guarantees quality of service (QoS) for data streams. TSN defines three different configuration models that specify how QoS requirements are signaled and reservations are admitted. End stations and bridges that use the same configuration model and are under the same administrative control form a so-called TSN domain within a larger Ethernet network. The procedure for QoS signaling of inter-domain streams, i.e., streams that are transmitted through different TSN domains, is challenging and not specified by standardization. The contribution of this work is manifold. First, we propose a novel unified signaling scheme for inter-domain communication with TSN. It relies only on TSN signaling protocols standardized by IEEE. Second, we show how the unified signaling scheme can be used to enable inter-domain QoS signaling across non-TSN domains, i.e., domains that natively do not support TSN-based admission control. Third, we present a model to calculate the delay of TSN QoS signaling. Fourth, we evaluate different TSN signaling approaches in a single and multiple domains. Our results indicate that centralized signaling causes shorter delay than distributed signaling. 
Furthermore, we demonstrate that the delay of centralized signaling mainly depends on the available compute power for configuration calculations.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9190-9205"},"PeriodicalIF":6.3,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11218864","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
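The finding that centralized signaling delay is dominated by configuration computation can be illustrated with a toy latency model. Both formulas and all constants below are assumptions for illustration, not the paper's delay model or measurements.

```python
# Toy signaling-delay model: centralized = one request/response exchange with
# a central controller plus its computation time; distributed = hop-by-hop
# reservation propagation. All parameters are illustrative.
def centralized_delay(n_streams, ops_per_stream, ops_per_sec, rtt):
    """Round trip to the controller plus central configuration computation."""
    return 2 * rtt + n_streams * ops_per_stream / ops_per_sec

def distributed_delay(n_hops, per_hop_processing, rtt):
    """Reservation message processed and forwarded at every hop."""
    return n_hops * (rtt + per_hop_processing)

fast = centralized_delay(100, 1e6, 1e9, rtt=0.001)  # well-provisioned controller
slow = centralized_delay(100, 1e6, 1e7, rtt=0.001)  # under-provisioned controller
print(fast, slow)  # the compute term, not the message exchange, dominates `slow`
```

Under this model the network part of the centralized delay (2 × RTT) is fixed, so scaling the controller's compute power is the only lever, matching the paper's qualitative conclusion.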
Pub Date: 2025-10-27. DOI: 10.1109/OJCOMS.2025.3626212
Wenjie Liu;Panos Papadimitratos
With the rise of applications that rely on terrestrial and satellite infrastructure (e.g., crowd-sourced Wi-Fi, Bluetooth, cellular, and IP databases) for positioning, ensuring their integrity and security is paramount. However, we demonstrate that these applications are susceptible to low-cost attacks (less than $50), including Wi-Fi spoofing combined with jamming, as well as more sophisticated coordinated location spoofing. These attacks manipulate position data to control or undermine functionality, leading to user scams or service manipulation. Therefore, we propose a countermeasure that detects and thwarts such attacks by utilizing readily available, redundant positioning information from off-the-shelf platforms. Our method extends the receiver autonomous integrity monitoring (RAIM) framework by incorporating opportunistic information, including data from onboard sensors and terrestrial infrastructure signals. We theoretically show that the fusion of heterogeneous signals improves resilience against sophisticated adversaries on multiple fronts. Experimental evaluations show the effectiveness of the proposed scheme, improving detection accuracy by up to 62% compared to baseline schemes and restoring accurate positioning.
{"title":"Coordinated Position Falsification Attacks and Countermeasures for Location-Based Services","authors":"Wenjie Liu;Panos Papadimitratos","doi":"10.1109/OJCOMS.2025.3626212","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3626212","url":null,"abstract":"With the rise of applications that rely on terrestrial and satellite infrastructures (e.g., and crowd-sourced Wi-Fi, Bluetooth, cellular, and IP databases) for positioning, ensuring their integrity and security is paramount. However, we demonstrate that these applications are susceptible to low-cost attacks (less than 50), including Wi-Fi spoofing combined with jamming, as well as more sophisticated coordinated location spoofing. These attacks manipulate position data to control or undermine functionality, leading to user scams or service manipulation. Therefore, we propose a countermeasure to detect and thwart such attacks by utilizing readily available, redundant positioning information from off-the-shelf platforms. Our method extends the receiver autonomous integrity monitoring (RAIM) framework by incorporating opportunistic information, including data from onboard sensors and terrestrial infrastructure signals, and, naturally,. We theoretically show that the fusion of heterogeneous signals improves resilience against sophisticated adversaries on multiple fronts. 
Experimental evaluations show the effectiveness of the proposed scheme in improving detection accuracy by 62% at most compared to baseline schemes and restoring accurate positioning.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9229-9246"},"PeriodicalIF":6.3,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11218853","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
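The abstract above describes a RAIM-style check that cross-validates redundant position fixes from heterogeneous sources. As a rough illustration only (not the authors' implementation), a minimal residual-based consistency test over redundant fixes could look like the following sketch; the source names, 2-D coordinates, and the 50 m threshold are placeholder assumptions:

```python
import numpy as np

def detect_spoofed_fix(fixes, threshold_m=50.0):
    """Flag position fixes that disagree with the consensus of the
    redundant sources (e.g., GNSS, Wi-Fi, cellular, onboard sensors).

    fixes: dict mapping source name -> (x, y) position in metres.
    Returns (consensus, flagged), where flagged lists outlier sources.
    """
    names = list(fixes)
    pts = np.array([fixes[n] for n in names], dtype=float)
    # Robust consensus: the coordinate-wise median resists a minority
    # of falsified sources better than the mean does.
    consensus = np.median(pts, axis=0)
    # Residual of each source against the consensus position.
    residuals = np.linalg.norm(pts - consensus, axis=1)
    flagged = [n for n, r in zip(names, residuals) if r > threshold_m]
    return consensus, flagged
```

In this toy setting a spoofed GNSS fix far from the Wi-Fi/cellular/sensor consensus is flagged, mirroring the paper's idea that fusing opportunistic information exposes a falsified source; the actual scheme's statistical test is of course more involved.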
Pub Date : 2025-10-27DOI: 10.1109/OJCOMS.2025.3625731
Md Saheed Ullah;Dennis W. Prather;Xiao-Feng Qi
Modern wireless networks face increasingly complex radio propagation environments and electromagnetic congestion, motivating the use of ultra-large-scale phased arrays to enhance coverage and robustness via diverse beam configurations. However, as array sizes grow, channel state information (CSI) acquisition becomes a bottleneck — per-element estimation suffers from low SINR, while per-beam estimation, though theoretically superior due to mmWave sparsity and beamforming gain, is hindered by limitations in conventional beamformer implementations for full beamspace exploration. To overcome this, we propose a sensing-inspired beamspace interference rejection combiner (BIRC). Leveraging a photonically-enabled imaging receiver architecture, BIRC selects dominant beams from a full-dimensional analog beamspace using power-based rules and digitally synthesizes the interference rejection combiner in the reduced beamspace. This hybrid approach achieves both analog beamforming gain for accurate CSI estimation and digital inversion gain for interference suppression. Simulations using the NYU Wireless Simulator (NYUSIM) model show that BIRC approaches the performance of ideal CSI regardless of array size, enabling large arrays to support extended range or reduced power in interference-limited environments.
{"title":"BIRC: Beamspace Interference Rejection With Beam Selection for Uplink Massive-MIMO at mmWave","authors":"Md Saheed Ullah;Dennis W. Prather;Xiao-Feng Qi","doi":"10.1109/OJCOMS.2025.3625731","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3625731","url":null,"abstract":"Modern wireless networks face increasingly complex radio propagation environments and electromagnetic congestion, motivating the use of ultra-large-scale phased arrays to enhance coverage and robustness via diverse beam configurations. However, as array sizes grow, channel state information (CSI) acquisition becomes a bottleneck — per-element estimation suffers from low SINR, while per-beam estimation, though theoretically superior due to mmWave sparsity and beamforming gain, is hindered by limitations in conventional beamformer implementations for full beamspace exploration. To overcome this, we propose a sensing-inspired beamspace interference rejection combiner (BIRC). Leveraging a photonically-enabled imaging receiver architecture, BIRC selects dominant beams from a full-dimensional analog beamspace using power-based rules and digitally synthesizes the interference rejection combiner in the reduced beamspace. This hybrid approach achieves both analog beamforming gain for accurate CSI estimation and digital inversion gain for interference suppression. 
Simulations using the NYU Wireless Simulator (NYUSIM) model show that BIRC approaches the performance of ideal CSI regardless of array size, enabling large arrays to support extended range or reduced power in interference-limited environments.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9158-9169"},"PeriodicalIF":6.3,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11218146","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
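The two-stage idea in the BIRC abstract (power-based selection of dominant beams from a full analog beamspace, then a digitally synthesized interference rejection combiner in the reduced beamspace) can be sketched in a few lines of NumPy. This is a generic illustration under assumed simplifications, not the paper's photonic receiver: a unitary DFT codebook stands in for the analog beamformer, and the combiner is a sample-covariance MMSE/IRC solution with diagonal loading; array size, beam count, and noise level are illustrative:

```python
import numpy as np

def beamspace_irc(y_snapshots, h_desired, num_beams=8, noise_var=1e-2):
    """Power-based beam selection + interference rejection combining.

    y_snapshots: (N, T) element-space received snapshots (N antennas).
    h_desired:   (N,) element-space channel of the desired user.
    Returns the (num_beams,) combiner in the reduced beamspace and the
    indices of the selected beams.
    """
    N = y_snapshots.shape[0]
    # Full-dimensional analog beamspace via a unitary DFT codebook.
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # (N, N)
    yb = F.conj().T @ y_snapshots                     # beamspace snapshots
    # Keep the strongest beams by average received power (selection rule).
    power = np.mean(np.abs(yb) ** 2, axis=1)
    sel = np.argsort(power)[-num_beams:]
    yb_r = yb[sel]                                    # reduced beamspace
    hb_r = (F.conj().T @ h_desired)[sel]              # reduced channel
    # Sample covariance + diagonal loading, then the IRC (MMSE) combiner
    # w = R^{-1} h, computed only in the low-dimensional beamspace.
    R = yb_r @ yb_r.conj().T / yb_r.shape[1] + noise_var * np.eye(num_beams)
    w = np.linalg.solve(R, hb_r)
    return w, sel
```

The point of the reduced-dimension step is that the covariance inversion scales with the handful of selected beams rather than with the full array size, which is what makes the digital IRC stage tractable for ultra-large arrays.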
Pub Date : 2025-10-24DOI: 10.1109/OJCOMS.2025.3625534
Ehsan Eslami;Walaa Hamouda
Network traffic classification (NTC) is vital for efficient network management, security, and performance optimization, particularly with 5G/6G technologies. Traditional methods, such as deep packet inspection (DPI) and port-based identification, struggle with the rise of encrypted traffic and dynamic port allocations. Supervised learning methods provide viable alternatives but rely on large labeled datasets, which are difficult to acquire given the diversity and volume of network traffic. Meanwhile, unsupervised learning methods, while less reliant on labeled data, often exhibit lower accuracy. To address these limitations, we propose a novel framework that first leverages Self-Supervised Learning (SSL) with techniques such as autoencoders (AE) or Tabular Contrastive Learning (TabCL) to generate pseudo-labels from extensive unlabeled datasets, addressing the challenge of limited labeled data. We then apply traffic-adapted Confident Learning (CL) to refine these pseudo-labels, enhancing classification precision by mitigating the impact of noise. Our proposed framework offers a generalizable solution that minimizes the need for extensive labeled data while delivering high accuracy. Extensive simulations and evaluations using three datasets (ISCX VPN-nonVPN, self-generated dataset, and UCDavis–QUIC) demonstrate that our method achieves superior accuracy compared to state-of-the-art techniques in classifying network traffic.
{"title":"Network Traffic Classification Using Self-Supervised Learning and Confident Learning","authors":"Ehsan Eslami;Walaa Hamouda","doi":"10.1109/OJCOMS.2025.3625534","DOIUrl":"https://doi.org/10.1109/OJCOMS.2025.3625534","url":null,"abstract":"Network traffic classification (NTC) is vital for efficient network management, security, and performance optimization, particularly with 5G/6G technologies. Traditional methods, such as deep packet inspection (DPI) and port-based identification, struggle with the rise of encrypted traffic and dynamic port allocations. Supervised learning methods provide viable alternatives but rely on large labeled datasets, which are difficult to acquire given the diversity and volume of network traffic. Meanwhile, unsupervised learning methods, while less reliant on labeled data, often exhibit lower accuracy. To address these limitations, we propose a novel framework that first leverages Self-Supervised Learning (SSL) with techniques such as autoencoders (AE) or Tabular Contrastive Learning (TabCL) to generate pseudo-labels from extensive unlabeled datasets, addressing the challenge of limited labeled data. We then apply traffic-adapted Confident Learning (CL) to refine these pseudo-labels, enhancing classification precision by mitigating the impact of noise. Our proposed framework offers a generalizable solution that minimizes the need for extensive labeled data while delivering high accuracy. 
Extensive simulations and evaluations using three datasets (ISCX VPN-nonVPN, self-generated dataset, and UCDavis–QUIC) demonstrate that our method achieves superior accuracy compared to state-of-the-art techniques in classifying network traffic.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"6 ","pages":"9100-9120"},"PeriodicalIF":6.3,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11217197","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}