Pub Date: 2024-11-20 | DOI: 10.1109/TMLCN.2024.3503543
Houssem Sifaou;Osvaldo Simeone
In many wireless application scenarios, acquiring labeled data can be prohibitively costly, requiring complex optimization processes or measurement campaigns. Semi-supervised learning leverages unlabeled samples to augment the available dataset by assigning synthetic labels obtained via machine learning (ML)-based predictions. However, treating the synthetic labels as true labels may yield models that perform worse than models trained using only labeled data. Inspired by the recently developed prediction-powered inference (PPI) framework, this work investigates how to leverage the synthetic labels produced by an ML model while accounting for their inherent bias with respect to the true labels. To this end, we first review PPI and its recent extensions, namely tuned PPI and cross-prediction-powered inference (CPPI). Then, we introduce two novel variants of PPI. The first, referred to as tuned CPPI, provides CPPI with an additional degree of freedom for adapting to the quality of the ML-based labels. The second, meta-CPPI (MCPPI), extends tuned CPPI via the joint optimization of the ML labeling models and of the parameters of interest. Finally, we showcase two applications of PPI-based techniques in wireless systems, namely beam alignment based on channel knowledge maps in millimeter-wave systems and received signal strength information-based indoor localization. Simulation results show the advantages of PPI-based techniques over conventional approaches that rely solely on labeled data or that apply standard pseudo-labeling strategies from semi-supervised learning. Furthermore, the proposed tuned CPPI method is observed to achieve the best performance among all benchmark schemes, especially in the regime of limited labeled data.
{"title":"Semi-Supervised Learning via Cross-Prediction-Powered Inference for Wireless Systems","authors":"Houssem Sifaou;Osvaldo Simeone","doi":"10.1109/TMLCN.2024.3503543","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3503543","url":null,"abstract":"In many wireless application scenarios, acquiring labeled data can be prohibitively costly, requiring complex optimization processes or measurement campaigns. Semi-supervised learning leverages unlabeled samples to augment the available dataset by assigning synthetic labels obtained via machine learning (ML)-based predictions. However, treating the synthetic labels as true labels may yield worse-performing models as compared to models trained using only labeled data. Inspired by the recently developed prediction-powered inference (PPI) framework, this work investigates how to leverage the synthetic labels produced by an ML model, while accounting for the inherent bias concerning true labels. To this end, we first review PPI and its recent extensions, namely tuned PPI and cross-prediction-powered inference (CPPI). Then, we introduce two novel variants of PPI. The first, referred to as tuned CPPI, provides CPPI with an additional degree of freedom in adapting to the quality of the ML-based labels. The second, meta-CPPI (MCPPI), extends tuned CPPI via the joint optimization of the ML labeling models and of the parameters of interest. Finally, we showcase two applications of PPI-based techniques in wireless systems, namely beam alignment based on channel knowledge maps in millimeter-wave systems and received signal strength information-based indoor localization. Simulation results show the advantages of PPI-based techniques over conventional approaches that rely solely on labeled data or that apply standard pseudo-labeling strategies from semi-supervised learning. Furthermore, the proposed tuned CPPI method is observed to guarantee the best performance among all benchmark schemes, especially in the regime of limited labeled data.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"30-44"},"PeriodicalIF":0.0,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758826","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-19 | DOI: 10.1109/TMLCN.2024.3502576
Tianjiao Sun;Sixing Yin;Li Deng;F. Richard Yu
Taking advantage of both unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs), UAV-mounted-RIS systems are expected to enhance transmission performance in complicated wireless environments. In this paper, we focus on the system design of a UAV-mounted-RIS system and investigate the joint optimization of the RIS’s phase shifts and the UAV’s trajectory. To cope with the practical issue that the user terminals’ (UTs’) locations and channel state information are unavailable, a reinforcement learning (RL)-based solution is proposed to find the optimal policy within a finite number of “trial-and-error” steps. As the action space is continuous, the deep deterministic policy gradient (DDPG) algorithm is applied to train the RL model. However, the online interaction between the agent and the environment may lead to instability during training, and the assumption of (first-order) Markovian state transitions can be impractical in real-world problems. Therefore, the decision transformer (DT) algorithm is employed as an alternative for training the RL model, so as to adapt to more general state-transition behavior. Experimental results demonstrate that the proposed RL solutions are highly efficient in model training and achieve performance close to the benchmark, which relies on conventional optimization algorithms with the UTs’ locations and channel parameters explicitly known beforehand.
{"title":"Reinforcement-Learning-Based Trajectory Design and Phase-Shift Control in UAV-Mounted-RIS Communications","authors":"Tianjiao Sun;Sixing Yin;Li Deng;F. Richard Yu","doi":"10.1109/TMLCN.2024.3502576","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3502576","url":null,"abstract":"Taking advantages of both unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs), UAV-mounted-RIS systems are expected to enhance transmission performance in complicated wireless environments. In this paper, we focus on system design for a UAV-mounted-RIS system and investigate joint optimization for the RIS’s phase shift and the UAV’s trajectory. To cope with the practical issue of inaccessible information on the user terminals’ (UTs) location and channel state, a reinforcement learning (RL)-based solution is proposed to find the optimal policy with finite steps of “trial-and-error”. As the action space is continuous, the deep deterministic policy gradient (DDPG) algorithm is applied to train the RL model. However, the online interaction between the agent and environment may lead to instability during the training and the assumption of (first-order) Markovian state transition could be impractical in real-world problems. Therefore, the decision transformer (DT) algorithm is employed as an alternative for RL model training to adapt to more general situations of state transition. Experimental results demonstrate that the proposed RL solutions are highly efficient in model training along with acceptable performance close to the benchmark, which relies on conventional optimization algorithms with the UT’s locations and channel parameters explicitly known beforehand.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"163-175"},"PeriodicalIF":0.0,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758222","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-18 | DOI: 10.1109/TMLCN.2024.3501217
Rodrigo Carvalho;Faroq Al-Tam;Noélia Correia
As a growing trend, edge computing infrastructures are starting to be integrated with Internet of Things (IoT) systems to facilitate time-critical applications. These systems often need to process data whose usefulness decays quickly over time, so the edge becomes vital for developing such reactive IoT applications with real-time requirements. Although different architectural designs will always have advantages and disadvantages, mobile gateways appear to be particularly relevant for enabling this integration with the edge, especially in wide area networks with sporadic data generation. In these scenarios, mobility planning is necessary, as technological aspects need to be aligned with the temporal needs of an application. The nature of this planning problem makes cutting-edge deep reinforcement learning (DRL) techniques useful for addressing its key challenges, such as handling a multi-dimensional action space while aiming for optimal system performance. This article presents a novel, scalable DRL model that incorporates a pointer network (Ptr-Net) and an actor-critic algorithm to handle complex action spaces. The model jointly determines the gateway location and visit time. Ultimately, the gateways are able to attain high-quality trajectory planning with reduced latency.
{"title":"A2PC: Augmented Advantage Pointer-Critic Model for Low Latency on Mobile IoT With Edge Computing","authors":"Rodrigo Carvalho;Faroq Al-Tam;Noélia Correia","doi":"10.1109/TMLCN.2024.3501217","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3501217","url":null,"abstract":"As a growing trend, edge computing infrastructures are starting to be integrated with Internet of Things (IoT) systems to facilitate time-critical applications. These systems often require the processing of data with limited usefulness in time, so the edge becomes vital in the development of such reactive IoT applications with real-time requirements. Although different architectural designs will always have advantages and disadvantages, mobile gateways appear to be particularly relevant in enabling this integration with the edge, particularly in the context of wide area networks with occasional data generation. In these scenarios, mobility planning is necessary, as aspects of the technology need to be aligned with the temporal needs of an application. The nature of this planning problem makes cutting-edge deep reinforcement learning (DRL) techniques useful in solving pertinent issues, such as having to deal with multiple dimensions in the action space while aiming for optimum levels of system performance. This article presents a novel scalable DRL model that incorporates a pointer-network (Ptr-Net) and an actor-critic algorithm to handle complex action spaces. The model synchronously determines the gateway location and visit time. Ultimately, the gateways are able to attain high-quality trajectory planning with reduced latency.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"1-16"},"PeriodicalIF":0.0,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10755120","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-04 | DOI: 10.1109/TMLCN.2024.3491054
Zain Ali;Zouheir Rezki;Mohamed-Slim Alouini
The next generation of communication systems will require robust connectivity for millions of ground devices, such as sensors or mobile devices in remote or disaster-stricken areas, to be connected to the network. Non-terrestrial network (NTN) nodes can play a vital role in fulfilling these requirements. Specifically, low-Earth-orbit (LEO) satellites have emerged as an efficient and cost-effective means of connecting devices over long distances through space. However, due to their low power and environmental limitations, LEO satellites may require assistance from aerial devices such as high-altitude platforms (HAPs) or unmanned aerial vehicles to forward their data to the ground devices. Moreover, the limited power available at the NTN nodes makes it crucial to utilize the available resources efficiently. In this paper, we present a model in which a LEO satellite communicates with multiple ground devices with the help of HAPs that relay the LEO data to the ground devices. We formulate the problem of optimizing the power allocation at the LEO satellite and all the HAPs to maximize the sum-rate of the system. To exploit the benefits of free-space optical (FSO) communication in satellites, we consider the LEO satellite transmitting data to the HAPs over FSO links, which is then broadcast to the connected ground devices over radio frequency channels. We transform the complex non-convex problem into a convex form and compute the Karush-Kuhn-Tucker (KKT) conditions-based solution for power allocation at the LEO satellite and the HAPs. Then, to reduce the computation time, we propose a soft actor-critic (SAC) reinforcement learning (RL) framework that provides the solution in significantly less time while delivering performance comparable to the KKT scheme. Our simulation results demonstrate that the proposed solutions provide excellent performance and are scalable to any number of HAPs and ground devices in the system.
{"title":"Optimizing Power Allocation in HAPs Assisted LEO Satellite Communications","authors":"Zain Ali;Zouheir Rezki;Mohamed-Slim Alouini","doi":"10.1109/TMLCN.2024.3491054","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3491054","url":null,"abstract":"The next generation of communication devices will require robust connectivity for millions of ground devices such as sensors or mobile devices in remote or disaster-stricken areas to be connected to the network. Non-terrestrial network (NTN) nodes can play a vital role in fulfilling these requirements. Specifically, low-earth orbiting (LEO) satellites have emerged as an efficient and cost-effective technique to connect devices over long distances through space. However, due to their low power and environmental limitations, LEO satellites may require assistance from aerial devices such as high-altitude platforms (HAPs) or unmanned aerial vehicles to forward their data to the ground devices. Moreover, the limited power available at the NTNs makes it crucial to utilize available resources efficiently. In this paper, we present a model in which a LEO satellite communicates with multiple ground devices with the help of HAPs that relay LEO data to the ground devices. We formulate the problem of optimizing power allocation at the LEO satellite and all the HAPs to maximize the sum-rate of the system. To take advantage of the benefits of free-space optical (FSO) communication in satellites, we consider the LEO transmitting data to the HAPs on FSO links, which are then broadcast to the connected ground devices on radio frequency channels. We transform the complex non-convex problem into a convex form and compute the Karush-Kuhn-Tucker (KKT) conditions-based solution of the problem for power allocation at the LEO satellite and HAPs. Then, to reduce computation time, we propose a soft actor-critic (SAC) reinforcement learning (RL) framework that provides the solution in significantly less time while delivering comparable performance to the KKT scheme. Our simulation results demonstrate that the proposed solutions provide excellent performance and are scalable to any number of HAPs and ground devices in the system.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1661-1677"},"PeriodicalIF":0.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741546","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | DOI: 10.1109/TMLCN.2024.3490496
Guoda Tian;Dino Pjanić;Xuesong Cai;Bo Bernhardsson;Fredrik Tufvesson
The integration of high-precision cellular localization and machine learning (ML) is considered a cornerstone technique in future cellular navigation systems, offering unparalleled accuracy and functionality. This study focuses on localization based on uplink channel measurements in a fifth-generation (5G) new radio (NR) system. An attention-aided ML-based single-snapshot localization pipeline is presented, which consists of several cascaded blocks, namely a signal processing block, an attention-aided block, and an uncertainty estimation block. Specifically, the signal processing block generates an impulse response beam matrix for all beams. The attention-aided block trains on the channel impulse responses using an attention-aided network, which captures the correlation between impulse responses for different beams. The uncertainty estimation block predicts the probability density function of the user equipment (UE) position, thereby also indicating the confidence level of the localization result. Two representative uncertainty estimation techniques, the negative log-likelihood and the regression-by-classification techniques, are applied and compared. Furthermore, for dynamic measurements with multiple snapshots available, we combine the proposed pipeline with a Kalman filter to enhance localization accuracy. To evaluate our approach, we extract channel impulse responses for different beams from a commercial base station. The outdoor measurement campaign covers Line-of-Sight (LoS), Non Line-of-Sight (NLoS), and a mix of LoS and NLoS scenarios. The results show that sub-meter localization accuracy can be achieved.
{"title":"Attention-Aided Outdoor Localization in Commercial 5G NR Systems","authors":"Guoda Tian;Dino Pjanić;Xuesong Cai;Bo Bernhardsson;Fredrik Tufvesson","doi":"10.1109/TMLCN.2024.3490496","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3490496","url":null,"abstract":"The integration of high-precision cellular localization and machine learning (ML) is considered a cornerstone technique in future cellular navigation systems, offering unparalleled accuracy and functionality. This study focuses on localization based on uplink channel measurements in a fifth-generation (5G) new radio (NR) system. An attention-aided ML-based single-snapshot localization pipeline is presented, which consists of several cascaded blocks, namely a signal processing block, an attention-aided block, and an uncertainty estimation block. Specifically, the signal processing block generates an impulse response beam matrix for all beams. The attention-aided block trains on the channel impulse responses using an attention-aided network, which captures the correlation between impulse responses for different beams. The uncertainty estimation block predicts the probability density function of the user equipment (UE) position, thereby also indicating the confidence level of the localization result. Two representative uncertainty estimation techniques, the negative log-likelihood and the regression-by-classification techniques, are applied and compared. Furthermore, for dynamic measurements with multiple snapshots available, we combine the proposed pipeline with a Kalman filter to enhance localization accuracy. To evaluate our approach, we extract channel impulse responses for different beams from a commercial base station. The outdoor measurement campaign covers Line-of-Sight (LoS), Non Line-of-Sight (NLoS), and a mix of LoS and NLoS scenarios. The results show that sub-meter localization accuracy can be achieved.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1678-1692"},"PeriodicalIF":0.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741343","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142694615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24 | DOI: 10.1109/TMLCN.2024.3485520
Tianlun Hu;Qi Liao;Qiang Liu;Georg Carle
Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with this complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. We then introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance IDLA’s adaptability across diverse network environments and conditions. Specifically, we pre-train a VIB-based Quality of Service (QoS) estimator using slice-specific inputs shared across all source-domain slices. Each target-domain slice can deploy this estimator to predict its QoS and optimize its slice resource allocation using the IDLA algorithm. The VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluated on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. A transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.
{"title":"Information Bottleneck-Based Domain Adaptation for Hybrid Deep Learning in Scalable Network Slicing","authors":"Tianlun Hu;Qi Liao;Qiang Liu;Georg Carle","doi":"10.1109/TMLCN.2024.3485520","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3485520","url":null,"abstract":"Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation solutions by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. Then, we introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance integrated deep learning and Lagrangian method (IDLA)’s adaptability across diverse network environments and conditions. We propose pre-training a variational information bottleneck (VIB)-based Quality of Service (QoS) estimator, using slice-specific inputs shared across all source domain slices. Each target domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation using the IDLA algorithm. This VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluating on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. Transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1642-1660"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10734592","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-23 | DOI: 10.1109/TMLCN.2024.3485521
Anzhe Ye;Haotian Chen;Ryo Natsuaki;Akira Hirose
The performance of a wireless communication system depends to a large extent on the wireless channel. Because radio waves propagate through a multipath fading environment, channel prediction plays a vital role in enabling adaptive transmission for wireless communication systems. Predicting various channel characteristics with neural networks can help address more complex communication environments. However, achieving this goal typically requires the simultaneous use of multiple distinct neural models, which is unaffordable for mobile communications. Therefore, a simpler structure that can simultaneously predict multiple channel characteristics is needed. In this paper, we propose a fading-channel prediction method using phasor quaternion neural networks (PQNNs) to predict the polarization states, with phase information involved to enhance the channel compensation ability. We evaluate the performance of the proposed PQNN method in two different fading situations in an actual environment, and we find that the proposed scheme provides 2.8 dB and 4.0 dB improvements at a bit error rate (BER) of $10^{-4}$