Xiaomin Liao, Yulai Wang, Xuan Zhu, Chushan Lin, Yang Han, You Li
Unmanned aerial vehicles (UAVs) serving as aerial base stations have attracted enormous attention in dense cellular networks, disaster relief, sixth-generation mobile networks, etc. However, their efficiency is constrained by scarce spectrum resources, especially in massive UAV swarms. This paper investigates a graph neural network-based spectrum resource optimisation algorithm that determines the channel access and transmit power of UAVs with consideration of both spectrum efficiency (SE) and energy efficiency (EE). We first construct a domain knowledge graph of the UAV swarm (KG-UAVs) to manage the multi-source heterogeneous information and transform the multi-objective optimisation problem into a knowledge graph completion problem. A novel attribute fusion graph attention transformer network (AFGATrN) is then proposed to complete the missing parts of KG-UAVs; it consists of an attribute-aware relational graph attention network encoder and a transformer-based channel and power prediction decoder. Extensive simulations on both public and domain datasets demonstrate that the proposed AFGATrN converges rapidly, attains a more practical spectrum resource allocation scheme with only partial channel distribution information (CDI), and significantly outperforms five existing algorithms in terms of computation time and the trade-off between the SE and EE performance of the UAVs.
{"title":"Graph Neural Network Assisted Spectrum Resource Optimisation for UAV Swarm","authors":"Xiaomin Liao, Yulai Wang, Xuan Zhu, Chushan Lin, Yang Han, You Li","doi":"10.1049/cmu2.70078","DOIUrl":"10.1049/cmu2.70078","url":null,"abstract":"<p>Unmanned aerial vehicles (UAVs) serving as aerial base stations have attracted enormous attention in dense cellular network, disaster relief, sixth generation mobile networks, etc. However, the efficiency is obstructed by scarce spectrum resources, especially in massive UAV swarms. This paper investigates a graph neural network-based spectrum resource optimisation algorithm to formulate the channel access and transmit power of UAVs with the consideration of both spectrum efficiency (SE) and energy efficiency (EE). We first construct a domain knowledge graph of UAV swarm (KG-UAVs) to manage the multi-source heterogeneous information and transform the multi-objective optimisation problem into a knowledge graph completion problem. Then a novel attribute fusion graph attention transformer network (AFGATrN) is proposed to complete the missing part in KG-UAVS, which consists of an attribute aware relational graph attention network encoder and a transformer based channel and power prediction decoder. Extensive simulation on both public and domain datasets demonstrates that, the proposed AFGATrN with a rapid convergence speed not only attains more practical spectrum resource allocation scheme with partial channel distribution information (CDI), but also significantly outperforms the other five existing algorithms in terms of the computation time and the trade-off between the SE and EE performance of the UAVs.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70078","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiang Chen, Bin Wang, Laifeng Zhang, Yanqing Lai, Tingting Shi, Mengyue Zhu, Yuanzhe Li
The deployment of industrial robots in time-critical applications demands ultra-low latency and high reliability in communication systems. This study presents a novel delay optimisation framework for industrial robot control systems using 6G network slicing technologies. A Gale–Shapley (GS)-based elastic switching model is proposed to dynamically match robot controllers to optimised network slices and base stations under latency-sensitive conditions. To enhance resource adaptability, a long short-term memory (LSTM)-based encoder-decoder structure is developed for predictive resource allocation across slices. The proposed integrated matching mechanism achieves a success rate of 91.16% for slice access and a base station access rate of 90.83%, outperforming conventional integrated and two-stage schemes. The LSTM-based resource allocation achieves a mean absolute error of 0.04 and a violation rate below 10%, with over 92% utilisation of both node and link resources. Experimental simulations demonstrate a consistent end-to-end latency below 7 ms and a throughput of 18.4 Mbit/s, validating the proposed models' effectiveness in ensuring robust, real-time communication for industrial robot operations. This research contributes a scalable solution for dynamic 6G network resource management, providing a foundation for advanced industrial automation and intelligent manufacturing.
{"title":"Adaptive Network Slicing and LSTM-Based Resource Allocation for Real-Time Industrial Robot Control in 6G Networks","authors":"Xiang Chen, Bin Wang, Laifeng Zhang, Yanqing Lai, Tingting Shi, Mengyue Zhu, Yuanzhe Li","doi":"10.1049/cmu2.70080","DOIUrl":"10.1049/cmu2.70080","url":null,"abstract":"<p>The deployment of industrial robots in time-critical applications demands ultra-low latency and high reliability in communication systems. This study presents a novel delay optimisation framework for industrial robot control systems using 6G network slicing technologies. A Gale–Shapley (GS)-based elastic switching model is proposed to dynamically match robot controllers to optimised network slices and base stations under latency-sensitive conditions. To enhance resource adaptability, a long short-term memory (LSTM)-based encoder-decoder structure is developed for predictive resource allocation across slices. The proposed integrated matching mechanism achieves a success rate of 91.16% for slice access and a base station access rate of 90.83%, outperforming conventional integrated and two-stage schemes. The LSTM-based resource allocation achieves a mean absolute error of 0.04 and a violation rate below 10%, with over 92% utilisation of both node and link resources. Experimental simulations demonstrate a consistent end-to-end latency below 7 ms and a throughput of 18.4 Mbit/s, validating the proposed models' effectiveness in ensuring robust, real-time communication for industrial robot operations. This research contributes a scalable solution for dynamic 6G network resource management, providing a foundation for advanced industrial automation and intelligent manufacturing.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70080","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a novel 3D geometry-based stochastic channel model for intelligent reflecting surface (IRS)-assisted wireless communication, where a cylindrical array (CA)-based large antenna transmitter (LAT) is employed. Unlike conventional planar array models, the proposed configuration captures the spatial characteristics of both azimuth and elevation domains, enabling enhanced beamforming and coverage flexibility. The system model incorporates the physical positions of each antenna element and their contributions to the overall channel response, including propagation delays, Doppler shifts, and phase variations. Furthermore, hardware impairments at the LAT and IRS are integrated into the channel formulation to assess their impact on spectral efficiency (SE). A compact channel coefficient expression is derived based on the cylindrical geometry and used to evaluate the SE under ideal and non-ideal conditions. Simulation results demonstrate that the proposed CA-based LAT-IRS system achieves significant performance gains over conventional planar configurations, especially in dense environments and under realistic hardware constraints.
{"title":"IRS-Assisted Communication in 3D Stochastic Geometry Utilizing Large Antenna Transmitters With Hardware Impairments","authors":"Antwi Owusu Agyeman, Affum Emmanuel Ampoma, Tweneboah-Koduah Samuel, Kwasi Adu-Boahen Opare, Kingsford Sarkodie Obeng Kwakye, Willie Ofosu","doi":"10.1049/cmu2.70079","DOIUrl":"10.1049/cmu2.70079","url":null,"abstract":"<p>This paper presents a novel 3D geometry-based stochastic channel model for intelligent reflecting surface (IRS)-assisted wireless communication, where a cylindrical array-based large antenna transmitter (LAT) is employed. Unlike conventional planar array models, the proposed configuration captures the spatial characteristics of both azimuth and elevation domains, enabling enhanced beamforming and coverage flexibility. The system model incorporates the physical positions of each antenna element and their contributions to the overall channel response, including propagation delays, Doppler shifts, and phase variations. Furthermore, hardware impairments at the LAT and IRS are integrated into the channel formulation to assess their impact on spectral efficiency (SE). A compact channel coefficient expression is derived based on the cylindrical geometry and used to evaluate the SE under ideal and non-ideal conditions. Simulation results demonstrate that the proposed CA-based LAT-IRS system achieves significant performance gains over conventional planar configurations, especially in dense environments and under realistic hardware constraints.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70079","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144929905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sakhshra Monga, Nitin Saluja, Roopali Garg, A. F. M. Shahen Shah, John Ekoru, Milka Madahana
Channel estimation is a critical component of modern wireless communication systems, especially in massive multiple-input multiple-output (MIMO) architectures, where the accuracy of received signal decoding heavily depends on the quality of channel state information. As wireless networks evolve into fifth-generation (5G) systems and beyond, they face increasingly complex propagation environments with rapid mobility, dense connectivity, and hardware constraints. Accurate and timely channel estimation is therefore essential for maintaining system performance, enabling reliable data transmission, and supporting techniques such as beamforming and interference management. Traditional estimation methods such as least squares and minimum mean square error offer baseline performance but are often limited by their computational complexity, sensitivity to noise, and inefficiency in quantised systems, particularly those employing one-bit analogue-to-digital converters. These limitations hinder their applicability in real-time, low-power, and bandwidth-constrained scenarios. To address these challenges, this paper proposes a novel channel estimation framework based on conditional generative adversarial networks. The approach incorporates a U-Net-based generator and a sequential convolutional neural network discriminator to learn complex channel mappings from highly quantised received signals. Unlike existing methods, the proposed architecture dynamically adapts to various noise levels and system configurations, offering improved robustness and generalisation. Comprehensive experiments conducted on realistic indoor massive MIMO datasets demonstrate that the proposed method achieves substantial performance gains. The model improves estimation accuracy from 93% to 95.5% and significantly reduces the normalised mean square error, consistently outperforming conventional and deep learning-based techniques across diverse training conditions. These results confirm the effectiveness of the proposed scheme in delivering high-accuracy channel estimation under extreme quantisation conditions, making it suitable for next-generation wireless systems.
{"title":"Innovative Channel Estimation Methods for Massive MIMO Using GAN Architectures","authors":"Sakhshra Monga, Nitin Saluja, Roopali Garg, A. F. M. Shahen Shah, John Ekoru, Milka Madahana","doi":"10.1049/cmu2.70066","DOIUrl":"10.1049/cmu2.70066","url":null,"abstract":"<p>Channel estimation is a critical component of modern wireless communication systems, especially in massive multiple-input multiple-output (MIMO) architectures, where the accuracy of received signal decoding heavily depends on the quality of channel state information. As wireless networks evolve into fifth-generation (5G) and beyond, they face increasingly complex propagation environments with rapid mobility, dense connectivity, and hardware constraints. Accurate and timely channel estimation is therefore essential for maintaining system performance, enabling reliable data transmission, and supporting techniques such as beamforming and interference management. Traditional estimation methods like least squares and minimum mean square error offer baseline performance but are often limited by their computational complexity, sensitivity to noise, and inefficiency in quantised systems—particularly those employing one-bit analogue-to-digital converters. These limitations hinder their applicability in real-time, low-power, and bandwidth-constrained scenarios. To address these challenges, this paper proposes a novel channel estimation framework based on conditional generative adversarial networks. The approach incorporates a U-Net-based generator and a sequential convolutional neural network discriminator to learn complex channel mappings from highly quantised received signals. Unlike existing methods, the proposed architecture dynamically adapts to various noise levels and system configurations, offering improved robustness and generalisation. Comprehensive experiments conducted on realistic indoor massive MIMO datasets demonstrate that the proposed method achieves substantial performance gains. The model improves estimation accuracy from 93% to 95.5% and significantly enhances normalised mean square error, consistently outperforming conventional and deep learning-based techniques across diverse training conditions. These results confirm the effectiveness of the proposed scheme in delivering high-accuracy channel estimation under extreme quantisation conditions, making it suitable for next-generation wireless systems.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70066","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wenbo Du, Jun Cai, Weijun Zeng, Xiang Zheng, Huali Wang, Lei Zhu
Wireless networks, as the foundation of the modern information society, rely crucially on their network topology, particularly with the development of sixth-generation mobile network technologies. The topology not only shapes the mechanisms and functional dynamics of network evolution, but also reflects the communication relationships and information exchange among nodes. Wireless network topology inference has therefore become a key research field in network science and the Internet of Things. Topology inference methods can be roughly divided into cooperative and non-cooperative methods. The former must directly participate in the communication process of the target network to obtain detailed internal information, which limits its applicability. In contrast, the latter infers the topology through external observation of packet timing, without prior knowledge of the network's internal information, and is therefore more broadly applicable. This paper first outlines the basic concepts and scope of topology inference and briefly reviews cooperative methods. Three types of non-cooperative methods are then comprehensively summarised: those based on statistical learning, machine learning, and rule analysis. Using a unified dataset and evaluation metrics, the performance of four representative non-cooperative topology inference algorithms is compared. Finally, the paper points out the challenges faced by network topology inference and proposes potential future research directions, aiming to provide theoretical support for the continued development of this field.
{"title":"Variations in Wireless Network Topology Inference: Recent Evolution, Challenges, and Directions","authors":"Wenbo Du, Jun Cai, Weijun Zeng, Xiang Zheng, Huali Wang, Lei Zhu","doi":"10.1049/cmu2.70073","DOIUrl":"10.1049/cmu2.70073","url":null,"abstract":"<p>Wireless networks, as the foundation of the modern information society, rely crucially on network topology with the development of 6th generation mobile networks technologies. The network topology structure not only shapes the mechanism and functional dynamics of network evolution, but also reflects the communication relationship and information exchange among nodes. For this reason, wireless network topology inference has become a key research field in network science and the Internet of Things. Wireless network topology inference methods can be roughly divided into cooperative methods and non-cooperative methods. The former needs to directly participate in the communication process of the target network to obtain detailed internal information, and its applicability is limited. In contrast, the latter infers the topology through external observation of data packet timing without the need to know the internal information of the network in advance, and has broader practicability. This paper first outlines the basic concepts and scope of topology inference, and briefly reviews the cooperative methods. Then, three types of non-cooperative methods were comprehensively summarized: based on statistical learning, based on machine learning, and based on rule analysis. Using a unified dataset and evaluation metrics, the performance of four representative non-cooperative topology inference algorithms is compared. Finally, this paper points out the challenges faced by network topology inference and proposes potential future research directions, aiming to provide theoretical support for the continuous development of this field.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70073","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144915300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ying Tong, Xiang Jia, Yong Deng, Yang Liu, Jiangang Tong
The prediction of IP multimedia subsystem (IMS) signaling storms is crucial for ensuring the stable operation of voice over new radio (VoNR) services and enhancing operators' core competitiveness. However, the current IMS signaling storm prediction and alarm function for live network systems lacks robustness, with most attention focused on equipment fault detection and network element health monitoring. To address this limitation, this paper proposes a signaling storm prediction model comprising two modules: prediction and judgment. The prediction module combines the advantages of long short-term memory (LSTM) models and an attention mechanism (AM), improving convergence and accuracy through an enhanced particle swarm optimization (PSO) algorithm based on trigonometric transformation (TrigPSO). The judgment module classifies predicted values into different alarm levels using K-Means. Experimental results based on data from China Telecom's scientific apparatus show that the proposed model accurately predicts key indicator values, achieving an improved R-squared (R²) value of 0.854 compared to models such as LSTM, LSTM-AM, LSTM-PSO, and LSTM-AM-PSO. Additionally, the K-Means model performs well on the experimental validation data, demonstrating the approach's scientific validity and efficiency.
{"title":"Research on Predicting Alarm of Signaling Storm by Hybrid LSTM-AM Optimized With Improved PSO","authors":"Ying Tong, Xiang Jia, Yong Deng, Yang Liu, Jiangang Tong","doi":"10.1049/cmu2.70074","DOIUrl":"10.1049/cmu2.70074","url":null,"abstract":"<p>The prediction of the IP multimedia subsystem (IMS) signaling storm is crucial for ensuring the stable operation of voice over new radio (VoNR) services and enhancing operators' core competitiveness. However, the current IMS signaling storm prediction alarm function for live network systems lacks robustness, with most attention focused on equipment fault detection and network element health monitoring. To address this limitation, this paper proposes a signaling storm prediction model comprising two modules: prediction and judgment. The prediction module combines the advantages of long short-term memory (LSTM) models and an attention mechanism (AM), improving convergence and accuracy through an enhanced Particle Swarm Optimization (PSO) algorithm based on trigonometric transformation (TrigPSO). The judgment module effectively classifies predicted values into different alarm levels using K-Means. Experimental results based on data from China telecom's scientific apparatus show that the proposed model accurately predicts key indicator values, with an improved r-squared (R<sup>2</sup>) value of 0.854 compared to other models such as LSTM, LSTM-AM, LSTM-PSO, and LSTM-AM-PSO. Additionally, the k-means model performs well in experimental data validation, demonstrating its scientific validity and high efficiency.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70074","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144888456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuanmo Lin, Zhiyong Xu, Jianhua Li, Jingyuan Wang, Cheng Li
This paper investigates a multi-unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) emergency communication system in which each UAV acts as a mobile MEC server for computing tasks offloaded by ground sensor users. Considering the stochastic dynamic characteristics of multi-UAV-assisted MEC systems and the scarcity of spectrum resources, a deep reinforcement learning (DRL) algorithm and non-orthogonal multiple access (NOMA) techniques are introduced. Specifically, we design an offloading algorithm based on a multi-agent deep deterministic policy gradient that jointly optimizes the UAVs' flight trajectories, the sensors' offloading powers, and the dynamic spectrum access to maximize the number of successfully offloaded tasks. The algorithm employs the Gumbel-Softmax method to effectively handle both the discrete sensor access action and the continuous offloading power action. Extensive simulation results show that the proposed algorithm performs significantly better than other benchmark algorithms.
{"title":"Deep Reinforcement Learning-Based Intelligent Resource Management in Multi-UAVs-Assisted MEC Emergency Communication System","authors":"Yuanmo Lin, Zhiyong Xu, Jianhua Li, Jingyuan Wang, Cheng Li","doi":"10.1049/cmu2.70063","DOIUrl":"10.1049/cmu2.70063","url":null,"abstract":"<p>This paper investigates a multi unmanned aerial vehicles (UAVs) assisted mobile edge computing (MEC) emergency communication system in which each UAV acts as a mobile MEC server for computing tasks offloaded by ground sensor users. Considering the stochastic dynamic characteristics of multi-UAVs-assisted MEC systems and the precision of spectrum resources, the deep reinforcement learning (DRL) algorithm and the non-orthogonal multiple access (NOMA) techniques are introduced. Specifically, we design an offloading algorithm based on a multi-agent deep deterministic policy gradient that jointly optimizes the UAVs' flight trajectories, the sensors' offloading powers, and the dynamic spectrum access to maximize the number of successfully offloaded tasks. The algorithm employs the Gumbel-Softmax method to effectively control both the discrete sensor access action and the continuous offloading power action. Sufficient simulation results show that the proposed algorithm performs significantly better than other benchmark algorithms.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70063","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144891722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate channel estimation is essential for optimising intelligent reflecting surface (IRS)-assisted multi-user communication systems, particularly in dynamic indoor environments. Conventional techniques such as least squares (LS), linear minimum mean square error (LMMSE), and orthogonal matching pursuit (OMP) suffer from noise sensitivity and fail to capture spatial dependencies in high-dimensional IRS-assisted channels. To overcome these limitations, this work proposes a deep learning-driven ResNet+UNet framework that refines initial LS estimates using residual learning and multi-scale feature reconstruction: ResNet captures spatial features, while UNet refines the estimate through hierarchical processing, efficiently suppressing noise and improving accuracy. Simulation results show that the proposed method significantly outperforms existing methods across various performance metrics. In NMSE versus signal-to-noise ratio assessments, the proposed approach surpasses the convolutional deep residual network (CDRN) by 59%, OMP by 81%, LMMSE by 114%, and LS by 115%. When the number of IRS elements is varied, it outperforms CDRN by 60%, OMP by 78%, LS by 107%, and LMMSE by 110%. In addition, the proposed structure performs more effectively than CDRN by 39%, OMP by 44%, LS by 122%, and LMMSE by 129% across various antenna configurations. The proposed approach is particularly beneficial for augmented reality (AR) applications, where real-time, high-precision channel estimation ensures seamless data streaming and ultra-low latency, enhancing immersive experiences in AR-based communication and interactive environments. These results illustrate the proposed method's scalability and resilience, making it a suitable choice for next-generation IRS-assisted wireless communication networks.
{"title":"Intelligent Reflecting Surface-Aided Wireless Networks: Deep Learning-Based Channel Estimation Using ResNet+UNet","authors":"Sakhshra Monga, Aditya Pathania, Nitin Saluja, Gunjan Gupta, Ashutosh Sharma","doi":"10.1049/cmu2.70075","DOIUrl":"10.1049/cmu2.70075","url":null,"abstract":"<p>Accurate channel estimation is essential for optimising intelligent reflecting surface-assisted multi-user communication systems, particularly in dynamic indoor environments. Conventional techniques such as least squares (LS), linear minimum mean square error (LMMSE), and orthogonal matching pursuit (OMP) suffer from noise sensitivity and fail to effectively capture spatial dependencies in high-dimensional intelligent reflecting surface (IRS)-assisted channels. To overcome these limitations, this work proposes a deep learning-driven ResNet+UNet framework that refines initial LS estimates using residual learning and multi-scale feature reconstruction. While UNet enhances channel estimation through hierarchical processing, efficiently decreasing noise and enhancing estimate accuracy, ResNet gathers spatial features. Simulation results show that the proposed method significantly outperforms existing methods across various performance metrics. In NMSE versus signal-to-noise ratio assessments, the proposed approach surpasses convolutional deep residual network (CDRN) by 59%, OMP by 81%, LMMSE by 114%, and LS by 115%. When IRS elements are modified, it overcomes CDRN by 60%, OMP by 78%, LS by 107%, and LMMSE by 110%. Along with this, recommended structure performs more effectively than CDRN by 39%, OMP by 44%, LS by 122%, and LMMSE by 129% across various antenna configurations. The proposed approach is particularly beneficial for augmented reality (AR) applications, where real-time, high-precision channel estimation ensures seamless data streaming and ultra-low latency, enhancing immersive experiences in AR-based communication and interactive environments. These results illustrate the proposed method's scalability and resilience, making it a suitable choice for next-generation IRS-assisted wireless communication networks.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70075","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144861878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As massive numbers of distribution automation terminals connect and data is acquired at high frequencies, the demand for low-latency processing of distribution service data has increased dramatically. Edge clusters, which integrate multiple edge servers, can effectively mitigate transmission delays. Cloud-edge fusion leverages the data processing capabilities of the cloud and the real-time responsiveness of edge computing to meet the needs of efficient data processing and optimal resource allocation. However, existing access methods for distribution automation terminals in cloud-edge fusion architectures depend exclusively on either cloud or edge computing for data processing. These conventional approaches fail to incorporate critical aspects such as adaptive access mechanisms for edge clusters of distribution automation terminals, flexible strategies including data offloading, knowledge sharing among edge clusters, and load-awareness capabilities. Consequently, they demonstrate significant limitations in achieving deep fusion between the cloud and edge computing paradigms. Additionally, they lack awareness of global information and queue backlog, making it difficult to meet the low-latency data transmission requirements of distribution automation services in dynamic environments. To address these issues, we propose an adaptive access method for edge clusters of distribution automation terminals based on cloud-edge fusion. Firstly, a data processing architecture for adaptive access of distribution automation terminal edge clusters is designed to coordinate terminal access, data processing distribution, and decision optimization for computing resource allocation, enabling efficient data transmission and processing. Secondly, an optimization problem for adaptive access in edge clusters of distribution automation terminals is formulated, aiming to minimize the weighted sum of total queuing delay and load balancing degree. Finally, a federated twin delayed deep deterministic policy gradient (federated TD3)-based edge cluster adaptive access method for distribution automation terminals is proposed. This approach aggregates model parameters from edge servers at the cloud level and distributes them back to the edge cluster level, learning strategies for terminal access, data processing allocation, and computing resource allocation based on queue backlog fluctuations. This enhances load balancing between the distribution terminal layer and the edge layer, achieving collaborative optimization of load balancing and delay under massive distribution terminal access. Simulation results demonstrate that the proposed method significantly reduces system queuing delay, optimizes load balancing, and enhances overall operation efficiency.
{"title":"An Adaptive Access Method for Edge Clusters of Distribution Automation Terminals Based on Cloud-Edge Fusion","authors":"Ruijiang Zeng, Zhiyong Li","doi":"10.1049/cmu2.70057","DOIUrl":"10.1049/cmu2.70057","url":null,"abstract":"<p>As massive distribution automation terminals connect and data is acquired at high frequencies, the demand for low-latency processing of distribution service data has increased dramatically. Edge clusters, integrating multiple edge servers, can effectively mitigate transmission delays. Cloud-edge fusion leverages its data processing capabilities and the real-time responsiveness of edge computing to meet the needs of efficient data processing and optimal resource allocation. However, existing access methods for distribution automation terminals in cloud-edge fusion architectures exclusively depend on either cloud or edge computing for data processing. These conventional approaches fail to incorporate critical aspects such as: adaptive access mechanisms for edge clusters of distribution automation terminals, flexible strategies including data offloading, knowledge sharing among edge clusters, and load awareness capabilities. Consequently, they demonstrate significant limitations in achieving deep fusion between cloud and edge computing paradigms. Additionally, they lack consideration for the perception of global information and queue backlog, making it difficult to meet the low-latency data transmission requirements of distribution automation services in dynamic environments. To address these issues, we propose an adaptive access method for edge clusters of distribution automation terminals based on cloud-edge fusion. Firstly, a data processing architecture for adaptive access of distribution automation terminal edge clusters are designed to coordinate terminal access, data processing distribution, and decision optimization for computing resource allocation, enabling efficient data transmission and processing. Secondly, an optimization problem for adaptive access in edge clusters of distribution automation terminals is formulated, aiming to minimize the weighted sum of total queuing delay and load balancing degree. Finally, a federated twin delayed deep deterministic policy gradient (federated TD3)-based edge cluster adaptive access method for distribution automation terminal is proposed. This approach integrates model parameters from edge servers at the cloud level and distributes them to the edge cluster level, learning strategies for terminal access, data processing allocation, and computing resource allocation based on queue backlog fluctuations. This enhances load balancing between the distribution terminal layer and edge layer, achieving collaborative optimization of load balancing and delay under massive distribution terminal access. 
Simulation results demonstrate that the proposed method significantly reduces system queuing delay, optimizes load balancing, and enhances overall operation efficiency.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70057","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144832643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
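A minimal sketch of the cloud-level aggregation step behind the federated TD3 scheme, assuming a FedAvg-style weighted average of the parameters uploaded by edge servers that the cloud then broadcasts back; the actual actor-critic networks and weighting rule are not specified here.

```python
# FedAvg-style aggregation of edge-server model parameters at the cloud.
import copy
import torch

def fed_avg(state_dicts, weights=None):
    """state_dicts: list of model.state_dict() uploaded by edge servers."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return avg

# toy usage with two tiny "edge" actors; weights could reflect local sample counts
edge_models = [torch.nn.Linear(4, 2) for _ in range(2)]
global_weights = fed_avg([m.state_dict() for m in edge_models], weights=[0.7, 0.3])
for m in edge_models:                      # cloud broadcasts the fused parameters back
    m.load_state_dict(global_weights)
```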
This paper presents a novel method for optimising intrusion detection systems (IDS) using two powerful techniques: principal component analysis (PCA) and particle swarm optimisation (PSO). The proposed approach is implemented on two categories of classifiers, Neuro-Fuzzy and support vector machines (SVM), which are evaluated on four widely used intrusion detection datasets: CAIDA, DARPA, NSL-KDD, and ISCX2012. Performance results are analysed individually against a set of established evaluation criteria. The PSO algorithm is then applied to search for the best combination of the outputs from the Neuro-Fuzzy and SVM models, yielding better attack detection accuracy with reduced false alarm rates. A further benefit of using PCA in the proposed method is that it considerably reduces the dimensionality of the data by computing the principal components. This offers several advantages, such as reduced model complexity, shorter training and execution times, lower memory usage, and protection against overfitting. By focusing on the major components, PCA reduces noise in the data to a certain extent, leading to increased classification accuracy and robustness, and it improves model interpretability by highlighting the key components. PSO is also applied to tune the parameters of the Neuro-Fuzzy and SVM models. The results achieved confirm that the proposed output combination method for both the Neuro-Fuzzy and SVM categories significantly enhances attack detection accuracy while reducing the false alarm rate.
{"title":"A Novel Hybrid Approach for Intrusion Detection Using Neuro-Fuzzy, SVM, and PSO","authors":"Soodeh Hosseini, Fahime Lotfi, Hossein Seilani","doi":"10.1049/cmu2.70071","DOIUrl":"10.1049/cmu2.70071","url":null,"abstract":"<p>This paper presents a novel method for optimising intrusion detection systems (IDS) by using two powerful techniques, namely ‘Principal component analysis (PCA)’ and ‘Particle swarm optimisation (PSO).’ Furthermore, the proposed approach is implemented on two categories of classifiers, Neuro-Fuzzy and support vector machines (SVM), which function on four widely used intrusion detection system datasets: CAIDA, DARPA, NSLKDD, and ISCX2012. Performance results are analysed individually based on a set of established evaluation criteria. Furthermore, the PSO algorithm is applied in search of the best combination of the outputs from the Neuro-Fuzzy and the SVM models, resulting in better attack detection accuracy with reduced false alarm rates. Another benefit of using PCA in the proposed method is that it considerably reduces the dimensions of the data by computing the principal components. This offers several advantages, such as reduced model complexity, training and execution time, memory usage, and model overfitting prevention. By focusing on the major components, PCA reduces noise in data to a certain extent, leading to increased classification accuracy and robustness. It also improves model interpretability by highlighting the key components. The application of PSO to find the most optimal parameters leads to the optimisation of the Neuro-Fuzzy and SVM models' parameters. The results achieved support that the proposed method for output combination in both Neuro-Fuzzy and SVM categories significantly enhances the accuracy of attack detection while reducing the false alarm rate.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70071","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144782625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}