This article addresses the joint power allocation and channel assignment (JPACA) problem in uplink non-orthogonal multiple access (NOMA) networks, an essential consideration for enhancing the performance of wireless communication systems. We introduce a novel methodology that integrates convex optimization (CO) and machine learning (ML) techniques to optimize resource allocation efficiently and effectively. Initially, we develop a CO-based algorithm that employs an alternating optimization strategy to iteratively solve for channel and power allocation, ensuring quality of service (QoS) while maximizing the system’s sum-rate. To overcome the inherent challenges of real-time application due to computational complexity, we further propose an ML-based approach that utilizes a stacking ensemble model combining a convolutional neural network (CNN), a feed-forward neural network (FNN), and a random forest (RF). This model is trained on a dataset generated via the CO algorithm to predict optimal resource allocation in real-time scenarios. Simulation results demonstrate that our proposed methods not only reduce the computational load significantly but also maintain high system performance, closely approximating the results of more computationally intensive exhaustive search methods. The dual approach presented not only enhances computational efficiency but also aligns with the evolving demands of future wireless networks, marking a significant step towards intelligent and adaptive resource management in NOMA systems.
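The stacking idea described above can be sketched with scikit-learn. This is a hedged illustration, not the authors' implementation: the synthetic data, hyperparameters, and the use of an MLP standing in for the FNN are assumptions, and the CNN branch is omitted for brevity.

```python
# Hedged sketch: a stacking ensemble for regression, standing in for the
# paper's CNN/FNN/RF stack. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Toy stand-in for CO-generated training data: channel gains -> allocation.
X = rng.uniform(0.0, 1.0, size=(300, 8))      # e.g., per-user channel gains
y = X @ rng.uniform(0.5, 1.5, size=8) + 0.05 * rng.standard_normal(300)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=30, random_state=0)),
        ("fnn", MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0)),
    ],
    final_estimator=Ridge(),   # meta-learner combining base predictions
)
stack.fit(X, y)
print(stack.predict(X[:3]).shape)  # (3,) -- one predicted allocation per sample
```

The meta-learner is trained on out-of-fold predictions of the base models, which is what lets stacking outperform any single base learner on held-out data.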
Title: Resource Allocation in NOMA Networks: Convex Optimization and Stacking Ensemble Machine Learning
Authors: Vali Ghanbarzadeh; Mohammadreza Zahabi; Hamid Amiriara; Farahnaz Jafari; Georges Kaddoum
DOI: 10.1109/OJCOMS.2024.3450207
Journal: IEEE Open Journal of the Communications Society (Impact Factor: 6.3)
Pub Date: 2024-08-26
Pub Date: 2024-08-26 | DOI: 10.1109/OJCOMS.2024.3449691
Omar Naserallah;Sherif B. Azmy;Nizar Zorba;Hossam S. Hassanein
Edge sensing (ES) systems employ users’ own smart devices with built-in sensors to gather data from users’ surrounding environments and use their processors to carry out edge computing tasks. Therefore, ES is emerging as a potential solution for remote sensing challenges. Additionally, ES systems are recognized for their favorable characteristics, including efficient time and cost management, scalability, and the ability to gather real-time data. To improve the performance of ES systems, enormous efforts have been made to enhance the quality of data (QoD) and the systems’ spatiotemporal coverage. Moreover, the research community has focused on developing better incentive schemes, as user incentivization is essential for enhancing system performance. In this study, we assess the impact of users’ mobility and availability on the spatiotemporal coverage and QoD of ES systems, taking into account the heterogeneity of users. We propose a distribution-aware and learning-based dynamic incentive scheme. Specifically, we consider the randomness of users’ mobility and velocity using a 2-dimensional random waypoint (RWP) model and support the learning-based incentive scheme with a long short-term memory (LSTM) model. The LSTM model utilizes the users’ historical data to predict their availability to perform the sensing tasks. The learning-based incentive scheme is further used to enhance system performance and effectively manage the trade-off between quality and cost, by recruiting users based on the required quality and cost constraints, to meet the minimum quality requirement within a constrained incentivization budget.
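The 2-D random waypoint model used above is simple to reproduce; the following is a minimal sketch with illustrative area, speed, and step parameters (not the paper's settings).

```python
import numpy as np

def random_waypoint(n_users=5, steps=100, area=1000.0, vmin=1.0, vmax=5.0,
                    dt=1.0, seed=0):
    """2-D random waypoint (RWP) mobility sketch: each user heads toward a
    uniformly drawn waypoint at a uniformly drawn speed; on arrival, a new
    waypoint and speed are drawn. Returns trajectories of shape
    (steps, n_users, 2)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, area, (n_users, 2))
    wp = rng.uniform(0.0, area, (n_users, 2))
    v = rng.uniform(vmin, vmax, n_users)
    traj = np.empty((steps, n_users, 2))
    for t in range(steps):
        delta = wp - pos
        dist = np.linalg.norm(delta, axis=1)
        move = np.minimum(dist, v * dt)              # do not overshoot the waypoint
        direction = delta / np.maximum(dist, 1e-12)[:, None]
        pos = pos + direction * move[:, None]
        arrived = dist <= v * dt
        wp[arrived] = rng.uniform(0.0, area, (int(arrived.sum()), 2))
        v[arrived] = rng.uniform(vmin, vmax, int(arrived.sum()))
        traj[t] = pos
    return traj

traj = random_waypoint()
print(traj.shape)  # (100, 5, 2)
```

A known property of RWP worth remembering when generating availability data: the stationary node distribution is denser in the center of the area than at its edges.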
Title: Novel Distribution-Aware and Learning-Based Dynamic Scheme for Efficient User Incentivization in Edge Sensing Systems
Journal: IEEE Open Journal of the Communications Society
Pub Date: 2024-08-23 | DOI: 10.1109/OJCOMS.2024.3449241
Javane Rostampoor;Raviraj S. Adve;Ali Afana;Yahia A. Eldemerdash Ahmed
This paper introduces an innovative predictive caching strategy tailored to a real-world dataset, specifically the Facebook video dataset. Making caching decisions for the dataset is challenging due to its dynamic nature, where users’ content requests vary over time without fitting into any known models. Traditional caching strategies, which often rely on a constant pool of files, do not suit this dataset as content is requested by users, and then its popularity fades over time; furthermore, the list of available content changes. We propose a two-stage predictive caching strategy. Initially, it forecasts the number of user requests using content features and historical request data, achieved through training a long short-term memory (LSTM) network. Then, we employ our proposed extended Cox proportional hazard (E-CPH) model to predict the survival probability of content. This facilitates proactive content caching. Caching new content is made possible by the timely eviction of content unlikely to be requested again. To incorporate the predicted content popularity and its life cycle into the caching decision, we introduce a partially observable Markov decision process (POMDP)-based caching strategy. Here, the survival probability of content contributes to the belief state of the associated content, which leads to our believed predicted reward: a cache hit. The caching algorithm then stores the files based on their predicted believed reward, taking into account both the popularity and survival probability predictions. Simulation results validate the efficacy of our proposed predictive caching method in enhancing the cache hit rate compared to conventional recurrent neural network (RNN)-based caching and policy-based caching approaches, such as least frequently used caching and its variants.
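The eviction logic described above can be distilled into a toy cache that ranks items by a "believed reward", predicted popularity weighted by predicted survival probability. This is a hedged sketch: the scoring numbers are illustrative, and the paper's LSTM, E-CPH, and POMDP machinery is abstracted into plain values.

```python
from dataclasses import dataclass, field

@dataclass
class PredictiveCache:
    capacity: int
    store: dict = field(default_factory=dict)    # item -> (popularity, survival)

    def believed_reward(self, item):
        pop, surv = self.store[item]
        return pop * surv                        # expected future hits if kept

    def admit(self, item, popularity, survival):
        if item in self.store:                   # refresh the predictions
            self.store[item] = (popularity, survival)
            return
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=self.believed_reward)
            if popularity * survival <= self.believed_reward(victim):
                return                           # newcomer not worth an eviction
            del self.store[victim]
        self.store[item] = (popularity, survival)

cache = PredictiveCache(capacity=2)
cache.admit("a", popularity=100, survival=0.9)   # reward 90
cache.admit("b", popularity=50, survival=0.2)    # reward 10
cache.admit("c", popularity=80, survival=0.5)    # reward 40 -> evicts "b"
print(sorted(cache.store))                       # ['a', 'c']
```

Weighting popularity by survival is what lets the cache evict still-popular content whose life cycle is predicted to end soon, which a pure frequency-based policy cannot do.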
Title: Predictive Caching in Non-Stationary Environments: A Time Series Prediction and Survival Analysis Approach
Journal: IEEE Open Journal of the Communications Society
Integrated Access and Backhaul (IAB) technology promises to facilitate cost-effective deployments of 5G New Radio (NR) systems operating in both sub-6 GHz and millimeter-wave (mmWave) bands. As full-duplex wireless systems are in their infancy, initial deployments of IAB networks may need to rely on half-duplex operation to coordinate transmissions between access and backhaul links. However, the use of half-duplex operation not only makes the scheduling of links in the IAB networks interdependent, but also the number of their feasible combinations grows exponentially with the network size, thereby posing challenges to the efficient design of such systems. In this paper, by accounting for mmWave radio characteristics, we propose a joint resource allocation and link scheduling framework to enhance the user equipment (UE) throughput in multi-hop in-band IAB systems. We keep the problem in linear-programming form to remain tractable for practical applications. We show that the increased number of uplink and downlink transmission time interval (TTI) configurations does not result in improved UE throughput as compared to the two-TTI configuration. Further, we demonstrate that in-band IAB systems tend to be backhaul-limited, and the utilization of multi-beam functionality at the IAB-donor alleviates this limitation by doubling the average UE throughput. Finally, we show that the use of proportional-fair allocations allows the average UE throughput to be improved by around 10% as compared to the max-min allocations.
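The scheduling interdependence mentioned above is easy to see by brute force: a link-activation pattern is feasible under half-duplex only if no node both transmits and receives in the same TTI, and the number of candidate patterns doubles with every added link. A minimal sketch, using an illustrative three-link donor-to-UE chain (the topology and node names are assumptions):

```python
# Hedged sketch: enumerate feasible half-duplex activation patterns of a
# directed multi-hop topology. Only the half-duplex constraint is modeled;
# interference and rate constraints from the paper are ignored.
from itertools import chain, combinations

def feasible_patterns(links):
    """links: list of (tx_node, rx_node). Returns all feasible link subsets."""
    def ok(subset):
        tx = {u for u, _ in subset}
        rx = {v for _, v in subset}
        return tx.isdisjoint(rx)                 # no node is both TX and RX
    all_subsets = chain.from_iterable(
        combinations(links, r) for r in range(len(links) + 1))
    return [s for s in all_subsets if ok(s)]

# Illustrative chain: donor -> n1 -> n2 -> UE (backhaul + access links).
links = [("donor", "n1"), ("n1", "n2"), ("n2", "ue")]
pats = feasible_patterns(links)
print(len(pats))  # 5: the empty set, each single link, and {donor->n1, n2->ue}
```

With L links there are 2^L candidate patterns to screen, which is why the paper's LP formulation matters for keeping the joint allocation and scheduling problem tractable at realistic network sizes.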
Title: Analysis of Duplexing Patterns in Multi-Hop mmWave Integrated Access and Backhaul Systems
Authors: Nikita Tafintsev; Dmitri Moltchanov; Wei Mao; Hosein Nikopour; Shu-Ping Yeh; Shilpa Talwar; Mikko Valkama; Sergey Andreev
DOI: 10.1109/OJCOMS.2024.3449234
Journal: IEEE Open Journal of the Communications Society
Pub Date: 2024-08-23
Pub Date: 2024-08-22 | DOI: 10.1109/OJCOMS.2024.3447839
Muhammad Usman Khan;Enrico Testi;Marco Chiani;Enrico Paolini
Cell-free massive MIMO (CF-mMIMO) networks leverage seamless cooperation among numerous access points to serve a large number of users over the same time/frequency resources. This paper addresses the challenges of pilot and data power control, as well as pilot assignment, in the uplink of a cell-free massive MIMO (CF-mMIMO) network, where the number of users significantly exceeds that of the available orthogonal pilots. We first derive the closed-form expression of the achievable uplink rate of a user. Subsequently, harnessing the universal function approximation capability of artificial neural networks, we introduce a novel multi-task deep learning-based approach for joint power control and pilot assignment, aiming to maximize the minimum user rate. Our proposed method entails the design and unsupervised training of a deep neural network (DNN), employing a custom loss function specifically tailored to perform joint power control and pilot assignment, while simultaneously limiting the total network power usage. Extensive simulations demonstrate that our method outperforms the existing power control and pilot assignment strategies in terms of achievable network throughput, minimum user rate, and per-user energy consumption. The model versatility and adaptability are assessed by simulating two different scenarios, namely an urban macro (UMa) and an industrial one.
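For intuition about the pilot-shortage regime described above, here is a hedged greedy baseline (not the paper's DNN): each user, processed in order of channel strength, takes the pilot whose current co-pilot users contribute the least aggregate gain toward it, a crude proxy for pilot contamination. The gain matrix and ordering rule are illustrative assumptions.

```python
# Hedged sketch: greedy pilot assignment when users outnumber orthogonal
# pilots. gains[k, j] is an illustrative proxy for how strongly user j
# contaminates user k's pilot; real systems would use large-scale fading
# toward the serving access points.
import numpy as np

def greedy_pilot_assignment(gains, n_pilots):
    n_users = gains.shape[0]
    assignment = -np.ones(n_users, dtype=int)
    order = np.argsort(-gains.diagonal())          # strongest users pick first
    for k in order:
        # Aggregate contamination user k would see on each pilot so far.
        contamination = np.array(
            [gains[k, assignment == p].sum() for p in range(n_pilots)])
        assignment[k] = int(np.argmin(contamination))
    return assignment

rng = np.random.default_rng(1)
gains = rng.uniform(0.1, 1.0, size=(10, 10))
pilots = greedy_pilot_assignment(gains, n_pilots=4)
print(np.bincount(pilots, minlength=4))            # users sharing each pilot
```

Greedy assignment like this is a common baseline in the CF-mMIMO literature; the paper's contribution is replacing such heuristics (and separate power control) with a single multi-task DNN trained unsupervised on a max-min rate objective.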
Title: Joint Power Control and Pilot Assignment in Cell-Free Massive MIMO Using Deep Learning
Journal: IEEE Open Journal of the Communications Society
Recent years have witnessed the Open Radio Access Network (RAN) paradigm transforming the fundamental ways cellular systems are deployed, managed, and optimized. This shift is led by concepts such as openness, softwarization, programmability, interoperability, and intelligence of the network, which have emerged in wired networks through Software-defined Networking (SDN) but lag behind in cellular systems. The realization of the Open RAN vision into practical architectures, intelligent data-driven control loops, and efficient software implementations, however, is a multifaceted challenge, which requires (i) datasets to train Artificial Intelligence (AI) and Machine Learning (ML) models; (ii) facilities to test models without disrupting production networks; (iii) continuous and automated validation of the RAN software; and (iv) significant testing and integration efforts. This paper is a tutorial on how Colosseum—the world’s largest wireless network emulator with hardware in the loop—can provide the research infrastructure and tools to fill the gap between the Open RAN vision, and the deployment and commercialization of open and programmable networks. We describe how Colosseum implements an Open RAN digital twin through a high-fidelity Radio Frequency (RF) channel emulator and end-to-end softwarized O-RAN and 5G-compliant protocol stacks, thus allowing users to reproduce and experiment upon topologies representative of real-world cellular deployments. Then, we detail the twinning infrastructure of Colosseum, as well as the automation pipelines for RF and protocol stack twinning. Finally, we showcase a broad range of Open RAN use cases implemented on Colosseum, including the real-time connection between the digital twin and real-world networks, and the development, prototyping, and testing of AI/ML solutions for Open RAN.
Title: Colosseum: The Open RAN Digital Twin
Authors: Michele Polese; Leonardo Bonati; Salvatore D'Oro; Pedram Johari; Davide Villa; Sakthivel Velumani; Rajeev Gangula; Maria Tsampazi; Clifton Paul Robinson; Gabriele Gemmi; Andrea Lacava; Stefano Maxenti; Hai Cheng; Tommaso Melodia
DOI: 10.1109/OJCOMS.2024.3447472
Journal: IEEE Open Journal of the Communications Society
Pub Date: 2024-08-22
Pub Date: 2024-08-21 | DOI: 10.1109/OJCOMS.2024.3447042
Hyosang Ju;Jisang Park;Donghun Lee;Min Jang;Juho Lee;Sang-Hyo Kim
In this paper, a new design of concatenated polar codes is proposed. By concatenating an outer code with polar codes, the distance spectrum can be improved, leading to enhanced decoding performance of vanilla polar codes. In the 5G New Radio standard, both cyclic redundancy check precoding and systematic single-parity-check precoding schemes are adopted, and this combination provides stable decoding performance over a wide range of coding parameters. We focus on the design of single-parity-check precoded polar codes. For the special systematic pre-coding scheme, code construction depends solely on the selection of information and parity bits from the source bits. Since the conventional parity bit selection criteria can introduce weaknesses for some coding parameters, we develop new criteria that enhance the protection of weak source bits under successive cancellation decoding. The simulation results demonstrate that the proposed design consistently outperforms the conventional one across a wide range of coding parameters. The improvement is more pronounced in short-length codes.
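The encoding side of such a scheme is compact: a codeword is the polar transform x = u F^(⊗n) over GF(2), with F = [[1,0],[1,1]] the Arikan kernel, applied to a source vector whose frozen, information, and parity positions are chosen by the construction. The sketch below uses an illustrative N = 8 index choice with one single-parity-check bit; it does not reflect the selection criteria proposed in the paper.

```python
# Hedged sketch: polar encoding via the standard butterfly, with one
# single-parity-check (SPC) precoded bit. Index choice is illustrative.
import numpy as np

def polar_transform(u):
    """Compute x = u * F^{(tensor)n} over GF(2) with an in-place butterfly."""
    u = np.array(u, dtype=np.uint8) % 2
    n = len(u)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            # XOR the lower half into the upper half at each stage.
            u[i:i + step] ^= u[i + step:i + 2 * step]
        step *= 2
    return u

# N = 8: three information bits, one SPC parity bit, rest frozen to zero.
info_idx, parity_idx = [3, 5, 6], 7
msg = [1, 0, 1]
u = np.zeros(8, dtype=np.uint8)
u[info_idx] = msg
u[parity_idx] = np.uint8(sum(msg) % 2)   # parity over the information bits
codeword = polar_transform(u)
print(codeword)  # [0 1 0 1 1 0 1 0]
```

Under successive cancellation decoding, the decoder can exploit the parity constraint as soon as it reaches the parity index, which is why the placement of parity bits relative to the weak source positions drives performance.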
Title: On Improving the Design of Parity-Check Polar Codes
Journal: IEEE Open Journal of the Communications Society
Pub Date: 2024-08-21 | DOI: 10.1109/OJCOMS.2024.3447152
Syed Asad Ullah;Aamir Mahmood;Ali Arshad Nasir;Mikael Gidlund;Syed Ali Hassan
Given the rising demand for low-power sensing, integrating additional devices into an existing wireless infrastructure calls for innovative energy- and spectrum-efficient wireless connectivity strategies. In this respect, wireless-powered or energy-harvesting symbiotic radio (EHSR) is gaining attention for establishing a secondary relationship with primary wireless systems in terms of RF EH while opportunistically sharing the spectrum or schedule. In this paper, assuming a commensalistic relationship with the primary system, we consider the energy-efficient optimization of such an EHSR by intelligently making EH and transmission decisions under the inherent nonlinearity of the EH circuitry and the dynamics of pre-scheduled primary devices. We present a state-of-the-art deep reinforcement learning (DRL)-engineered, energy-efficient transmission strategy, which intelligently orchestrates the EHSR’s uplink transmissions, leveraging the cognitive radio-inspired non-orthogonal multiple access (CR-NOMA) scheme. We first formulate the energy efficiency (EE) optimization metric for the EHSR considering the nonlinear EH model, and then we decompose the inherently complex, non-convex problem into two optimization layers. The strategy first derives the optimal transmit power and time-sharing coefficient parameters, using convex optimization. Subsequently, these inferred parameters are substituted in the subsequent layer, where the optimization problem with continuous action space is addressed via a DRL framework, named modified deep deterministic policy gradient (MDDPG). Simulation results reveal that, compared to the baseline DDPG algorithm, our proposed solution provides a 6% EE gain with the linear EH model and approximately a 7% EE gain with the non-linear EH model.
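The nonlinearity that drives the modeling choice above is commonly captured with a logistic (sigmoid) harvesting curve that saturates at the circuit's maximum output, in contrast to a linear efficiency model. A hedged sketch, with illustrative circuit parameters (M, a, b, eta) that are not taken from the paper:

```python
# Hedged sketch: a widely used logistic nonlinear energy-harvesting model
# versus a linear one. Harvested power saturates at M as input power grows.
import math

M = 24e-3             # max harvestable power (W), saturation level (assumed)
a, b = 150.0, 0.014   # circuit-dependent curve steepness / turn-on (assumed)
eta = 0.6             # linear-model RF-to-DC conversion efficiency (assumed)

def harvested_nonlinear(p_in):
    psi = M / (1.0 + math.exp(-a * (p_in - b)))    # raw logistic response
    omega = 1.0 / (1.0 + math.exp(a * b))          # offset so zero in -> zero out
    return (psi - M * omega) / (1.0 - omega)

def harvested_linear(p_in):
    return eta * p_in

for p_in in (0.0, 0.01, 0.05, 0.2):
    print(f"{p_in:5.2f} W in -> nonlinear {harvested_nonlinear(p_in):.4f} W, "
          f"linear {harvested_linear(p_in):.4f} W")
```

The key qualitative difference is visible immediately: the linear model keeps rewarding higher input power, while the logistic model flattens near M, so a policy tuned under the linear assumption will over-invest in harvesting at high input powers.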
{"title":"DRL-Driven Optimization of a Wireless Powered Symbiotic Radio With Nonlinear EH Model","authors":"Syed Asad Ullah;Aamir Mahmood;Ali Arshad Nasir;Mikael Gidlund;Syed Ali Hassan","doi":"10.1109/OJCOMS.2024.3447152","DOIUrl":"https://doi.org/10.1109/OJCOMS.2024.3447152","url":null,"abstract":"Given the rising demand for low-power sensing, integrating additional devices into an existing wireless infrastructure calls for innovative energy- and spectrum-efficient wireless connectivity strategies. In this respect, wireless-powered or energy-harvesting symbiotic radio (EHSR) is gaining attention for establishing the secondary relationship with the primary wireless systems in terms of RF EH and opportunistically sharing the spectrum or schedule. In this paper, assuming the commensalistic relationship with the primary system, we consider the energy-efficient optimization of such an EHSR by intelligently making EH and transmission decisions under the inherent nonlinearity of the EH circuitry and dynamics of pre-scheduled primary devices. We present a state-of-the-art deep reinforcement learning (DRL)-engineered, energy-efficient transmission strategy, which intelligently orchestrates EHSR’s uplink transmissions, leveraging the cognitive radio-inspired non-orthogonal multiple access (CR-NOMA) scheme. We first formulate the energy efficiency (EE) optimization metric for EHSR considering the nonlinear EH model, and then we decompose the inherently complex, non-convex problem into two optimization layers. The strategy first derives the optimal transmit power and time-sharing coefficient parameters, using convex optimization. Subsequently, these inferred parameters are substituted in the subsequent layer, where the optimization problem with continuous action space is addressed via a DRL framework, named modified deep deterministic policy gradient (MDDPG). 
Simulation results reveal that, compared to the baseline DDPG algorithm, our proposed solution provides a 6% EE gain with the linear EH model and approximately a 7% EE gain with the non-linear EH model.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":null,"pages":null},"PeriodicalIF":6.3,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10643143","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142137547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
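The nonlinear EH circuitry behavior this abstract refers to is commonly captured in the EH literature by a sigmoid (logistic) saturation model; the sketch below is illustrative only, and the parameter values (p_max, a, b) are hypothetical rather than taken from the paper.

```python
import math

def harvested_power_nonlinear(p_in, p_max=0.02, a=150.0, b=0.014):
    """Sigmoid-based nonlinear EH model (illustrative parameters).

    p_in: RF input power (W); p_max: saturation level of the EH circuit (W);
    a, b: circuit-specific shape parameters. All values here are hypothetical.
    """
    logistic = 1.0 / (1.0 + math.exp(-a * (p_in - b)))
    # Normalize so that zero input yields zero harvested power.
    omega = 1.0 / (1.0 + math.exp(a * b))
    return p_max * (logistic - omega) / (1.0 - omega)
```

Unlike a linear model, this function saturates at p_max for large inputs, which is why the paper's linear and nonlinear EH results differ.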
Pub Date: 2024-08-21; DOI: 10.1109/OJCOMS.2024.3447157
Ravindra S. Tomar;Mandar R. Nalavade;Gaurav S. Kasbekar
In dense millimeter wave (mmWave) networks, user association, i.e., the task of selecting the access point (AP) that each arriving user should join, significantly impacts the network performance. We consider a dense mmWave network in which each AP has multiple channels and can simultaneously serve different users using different channels. The different channels of an AP are susceptible to both blockage, which is common to all the channels of an AP, and frequency-selective fading, which is, in general, different for different channels. In each time slot, a user arrives with some probability. Our objective is to design a user association scheme for selecting the AP that each arriving user should join, so as to minimize the long-term total average holding cost incurred within the system, and thereby achieve low average delays experienced by users. This problem is an instance of the restless multi-armed bandit problem, and is provably hard to solve. We prove that the problem is Whittle indexable and present a method for calculating the Whittle indices corresponding to the different APs by solving linear systems of equations. We propose a user association policy under which, when a user arrives, it associates with the AP that has the lowest Whittle index in that time slot. Our extensive simulation results demonstrate that our proposed Whittle index-based policy outperforms user association policies proposed in prior research in terms of the average delay, average cost, as well as Jain’s fairness index (JFI).
{"title":"User Association in Dense Millimeter Wave Networks With Multi-Channel Access Points Using the Whittle Index","authors":"Ravindra S. Tomar;Mandar R. Nalavade;Gaurav S. Kasbekar","doi":"10.1109/OJCOMS.2024.3447157","DOIUrl":"https://doi.org/10.1109/OJCOMS.2024.3447157","url":null,"abstract":"In dense millimeter wave (mmWave) networks, user association, i.e., the task of selecting the access point (AP) that each arriving user should join, significantly impacts the network performance. We consider a dense mmWave network in which each AP has multiple channels and can simultaneously serve different users using different channels. The different channels of an AP are susceptible to both blockage, which is common to all the channels of an AP, and frequency-selective fading, which is, in general, different for different channels. In each time slot, a user arrives with some probability. Our objective is to design a user association scheme for selecting the AP that each arriving user should join, so as to minimize the long-term total average holding cost incurred within the system, and thereby achieve low average delays experienced by users. This problem is an instance of the restless multi-armed bandit problem, and is provably hard to solve. We prove that the problem is Whittle indexable and present a method for calculating the Whittle indices corresponding to the different APs by solving linear systems of equations. We propose a user association policy under which, when a user arrives, it associates with the AP that has the lowest Whittle index in that time slot. 
Our extensive simulation results demonstrate that our proposed Whittle index-based policy outperforms user association policies proposed in prior research in terms of the average delay, average cost, as well as Jain’s fairness index (JFI).","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":null,"pages":null},"PeriodicalIF":6.3,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10643173","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142117825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
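The association rule stated in this abstract — an arriving user joins the AP with the lowest Whittle index in the current time slot — can be sketched minimally as below; how the indices themselves are computed (solving linear systems per AP) is specific to the paper and omitted here.

```python
def associate_user(whittle_indices):
    """Pick the AP an arriving user should join: the one whose current
    Whittle index is lowest in this time slot (ties broken by AP id).

    whittle_indices: list of per-AP index values for the current slot,
    assumed to be precomputed; returns the chosen AP's position.
    """
    return min(range(len(whittle_indices)), key=lambda ap: whittle_indices[ap])
```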
Pub Date: 2024-08-20; DOI: 10.1109/OJCOMS.2024.3446457
Manjuladevi Vasudevan;Murat Yuksel
With recent advancements in the telecommunication industry and the deployment of 5G networks, radio propagation modeling is considered a fundamental task in planning and optimization. Accurate and efficient models of radio propagation enable the estimation of Path Loss (PL) or Received Signal Strength (RSS), which is used in a variety of practical applications, including the construction of radio coverage maps and localization. Traditional PL models use fundamental physics laws and regression-based models, which can be guided with measurements. In general, these methods have low computational complexity and have been highly successful in attaining accurate models for settings with trivial environmental complexity (e.g., clear weather or no clutter). However, attaining high accuracy in radio propagation modeling in complex settings (e.g., an urban setting with many buildings and obstacles) has required ray tracing, which is computationally complex. Recently, the wireless community has been studying Machine Learning (ML)-based modeling algorithms to find a middle ground. ML algorithms have become faster to execute and, more importantly, more radio measurements have become available with the increased deployment of wireless devices.
{"title":"Machine Learning for Radio Propagation Modeling: A Comprehensive Survey","authors":"Manjuladevi Vasudevan;Murat Yuksel","doi":"10.1109/OJCOMS.2024.3446457","DOIUrl":"https://doi.org/10.1109/OJCOMS.2024.3446457","url":null,"abstract":"With recent advancements in the telecommunication industry and the deployment of 5G networks, radio propagation modeling is considered a fundamental task in planning and optimization. Accurate and efficient models of radio propagation enable the estimation of Path Loss (PL) or Received Signal Strength (RSS), which is used in a variety of practical applications including the construction of radio coverage maps and localization. Traditional PL models use fundamental physics laws and regression-based models, which can be guided with measurements. In general, these methods have small computational complexity and have been highly successful in attaining accurate models for settings with trivial environmental complexity (e.g., clear weather or no clutter). However, attaining high accuracy in radio propagation modeling in complex settings (e.g., an urban setting with many buildings and obstacles) has required ray tracing, which is computationally complex. Recently, the wireless community has been studying Machine Learning (ML)-based modeling algorithms to find a middle-ground. ML algorithms have become faster to execute and, more importantly, more radio data measurements have become available with the increased deployment of wireless devices. 
In this survey, we explore the recent advancements in the use of ML for modeling and predicting radio coverage and PL.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":null,"pages":null},"PeriodicalIF":6.3,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10640063","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142117831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
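The "traditional PL models" this survey contrasts with ML approaches include the classic log-distance model, whose exponent is typically fit to measurements by regression; a minimal sketch with illustrative parameter values (not taken from the survey):

```python
import math

def path_loss_db(d, d0=1.0, pl0_db=40.0, n=3.0):
    """Log-distance path loss: PL(d) = PL(d0) + 10 * n * log10(d / d0).

    d: link distance (m); d0: reference distance (m); pl0_db: loss measured
    at d0 (dB); n: path-loss exponent. All parameter values are illustrative.
    """
    return pl0_db + 10.0 * n * math.log10(d / d0)
```

In the regression-based variants the abstract mentions, pl0_db and n are estimated by least squares over measured (distance, PL) pairs rather than fixed a priori.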