
Latest Publications in IET Communications

Graph Neural Network Assisted Spectrum Resource Optimisation for UAV Swarm
IF 1.6 | CAS Zone 4, Computer Science | Q3 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-05 | DOI: 10.1049/cmu2.70078
Xiaomin Liao, Yulai Wang, Xuan Zhu, Chushan Lin, Yang Han, You Li

Unmanned aerial vehicles (UAVs) serving as aerial base stations have attracted enormous attention in dense cellular networks, disaster relief, sixth-generation mobile networks, and beyond. However, their efficiency is constrained by scarce spectrum resources, especially in massive UAV swarms. This paper investigates a graph neural network-based spectrum resource optimisation algorithm that determines the channel access and transmit power of UAVs while considering both spectrum efficiency (SE) and energy efficiency (EE). We first construct a domain knowledge graph of the UAV swarm (KG-UAVs) to manage the multi-source heterogeneous information and transform the multi-objective optimisation problem into a knowledge graph completion problem. Then a novel attribute fusion graph attention transformer network (AFGATrN) is proposed to complete the missing parts of KG-UAVs; it consists of an attribute-aware relational graph attention network encoder and a transformer-based channel and power prediction decoder. Extensive simulations on both public and domain datasets demonstrate that the proposed AFGATrN converges rapidly and not only attains a more practical spectrum resource allocation scheme with partial channel distribution information (CDI), but also significantly outperforms five existing algorithms in computation time and in the trade-off between the SE and EE performance of the UAVs.
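As context for the encoder, a single graph-attention layer (the building block behind relational GAT encoders like the one described) can be sketched in a few lines of numpy. This is a generic GAT-style layer, not the paper's AFGATrN, and all shapes and values are illustrative:

```python
import numpy as np

def gat_layer(H, A, W, a):
    """One graph-attention (GAT-style) layer over node features H (N x F),
    adjacency A (N x N), projection W (F x F'), attention vector a (2F',).
    Self-loops are added so every node attends at least to itself."""
    Z = H @ W                                 # linear projection of features
    N = Z.shape[0]
    e = np.full((N, N), -np.inf)              # -inf masks non-edges in softmax
    for i in range(N):
        for j in range(N):
            if A[i, j] or i == j:
                s = float(a @ np.concatenate([Z[i], Z[j]]))
                e[i, j] = s if s > 0 else 0.2 * s      # LeakyReLU(0.2)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))   # row-wise softmax
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ Z                          # attention-weighted neighbour mix
```

The actual AFGATrN additionally fuses edge/node attributes and stacks a transformer decoder on top; this sketch shows only the attention-aggregation primitive.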

Citations: 0
Adaptive Network Slicing and LSTM-Based Resource Allocation for Real-Time Industrial Robot Control in 6G Networks
IF 1.6 | CAS Zone 4, Computer Science | Q3 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-04 | DOI: 10.1049/cmu2.70080
Xiang Chen, Bin Wang, Laifeng Zhang, Yanqing Lai, Tingting Shi, Mengyue Zhu, Yuanzhe Li

The deployment of industrial robots in time-critical applications demands ultra-low latency and high reliability in communication systems. This study presents a novel delay optimisation framework for industrial robot control systems using 6G network slicing technologies. A Gale–Shapley (GS)-based elastic switching model is proposed to dynamically match robot controllers to optimised network slices and base stations under latency-sensitive conditions. To enhance resource adaptability, a long short-term memory (LSTM)-based encoder-decoder structure is developed for predictive resource allocation across slices. The proposed integrated matching mechanism achieves a success rate of 91.16% for slice access and a base station access rate of 90.83%, outperforming conventional integrated and two-stage schemes. The LSTM-based resource allocation achieves a mean absolute error of 0.04 and a violation rate below 10%, with over 92% utilisation of both node and link resources. Experimental simulations demonstrate a consistent end-to-end latency below 7 ms and a throughput of 18.4 Mbit/s, validating the proposed models' effectiveness in ensuring robust, real-time communication for industrial robot operations. This research contributes a scalable solution for dynamic 6G network resource management, providing a foundation for advanced industrial automation and intelligent manufacturing.
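The matching step rests on the classic Gale-Shapley deferred-acceptance algorithm. A minimal generic sketch follows; the controller and slice names are illustrative, and the paper's elastic model extends this with latency-aware preferences and dynamic switching:

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Stable matching via Gale-Shapley deferred acceptance.
    proposer_prefs / acceptor_prefs: dicts mapping a name to its full
    ranked preference list over the other side (complete lists assumed)."""
    # precompute each acceptor's rank of every proposer for O(1) comparisons
    rank = {a: {p: r for r, p in enumerate(plist)}
            for a, plist in acceptor_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                           # acceptor -> current proposer
    while free:
        p = free.pop(0)
        a = proposer_prefs[p][next_choice[p]]   # best option not yet tried
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:  # acceptor prefers newcomer
            free.append(engaged[a])             # old partner becomes free
            engaged[a] = p
        else:
            free.append(p)                      # rejected, try next choice
    return {p: a for a, p in engaged.items()}
```

In the system described, "proposers" would be robot controllers and "acceptors" network slices or base stations, with preferences derived from predicted latency.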

Citations: 0
IRS-Assisted Communication in 3D Stochastic Geometry Utilizing Large Antenna Transmitters With Hardware Impairments
IF 1.6 | CAS Zone 4, Computer Science | Q3 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-03 | DOI: 10.1049/cmu2.70079
Antwi Owusu Agyeman, Affum Emmanuel Ampoma, Tweneboah-Koduah Samuel, Kwasi Adu-Boahen Opare, Kingsford Sarkodie Obeng Kwakye, Willie Ofosu

This paper presents a novel 3D geometry-based stochastic channel model for intelligent reflecting surface (IRS)-assisted wireless communication, where a cylindrical array (CA)-based large antenna transmitter (LAT) is employed. Unlike conventional planar array models, the proposed configuration captures the spatial characteristics of both the azimuth and elevation domains, enabling enhanced beamforming and coverage flexibility. The system model incorporates the physical position of each antenna element and its contribution to the overall channel response, including propagation delays, Doppler shifts, and phase variations. Furthermore, hardware impairments at the LAT and IRS are integrated into the channel formulation to assess their impact on spectral efficiency (SE). A compact channel coefficient expression is derived from the cylindrical geometry and used to evaluate the SE under ideal and non-ideal conditions. Simulation results demonstrate that the proposed CA-based LAT-IRS system achieves significant performance gains over conventional planar configurations, especially in dense environments and under realistic hardware constraints.
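The geometric core of such a model, the per-element phase of a cylindrical array towards a given azimuth/elevation, can be sketched as below. This is only the planar-wave geometry term under illustrative parameters, not the paper's full channel expression (which adds delays, Doppler, and IRS phase shifts):

```python
import numpy as np

def cylindrical_array_response(n_ring, n_z, radius, dz, wavelength, az, el):
    """Per-element planar-wave phase response of a cylindrical array towards
    direction (azimuth az, elevation el), both in radians. n_ring elements
    per ring, n_z stacked rings spaced dz apart. Toy geometry only."""
    k = 2 * np.pi / wavelength                      # wavenumber
    phis = 2 * np.pi * np.arange(n_ring) / n_ring   # angular positions on ring
    zs = dz * np.arange(n_z)                        # vertical ring offsets
    # Cartesian coordinates of each element on the cylinder surface
    pos = np.array([[radius * np.cos(p), radius * np.sin(p), z]
                    for z in zs for p in phis])
    # unit vector pointing towards (az, el)
    u = np.array([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)])
    return np.exp(1j * k * (pos @ u))               # unit-modulus phase terms
```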

Citations: 0
Innovative Channel Estimation Methods for Massive MIMO Using GAN Architectures
IF 1.6 | CAS Zone 4, Computer Science | Q3 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-03 | DOI: 10.1049/cmu2.70066
Sakhshra Monga, Nitin Saluja, Roopali Garg, A. F. M. Shahen Shah, John Ekoru, Milka Madahana

Channel estimation is a critical component of modern wireless communication systems, especially in massive multiple-input multiple-output (MIMO) architectures, where the accuracy of received signal decoding heavily depends on the quality of channel state information. As wireless networks evolve towards fifth-generation (5G) systems and beyond, they face increasingly complex propagation environments with rapid mobility, dense connectivity, and hardware constraints. Accurate and timely channel estimation is therefore essential for maintaining system performance, enabling reliable data transmission, and supporting techniques such as beamforming and interference management. Traditional estimation methods like least squares and minimum mean square error offer baseline performance but are often limited by their computational complexity, sensitivity to noise, and inefficiency in quantised systems, particularly those employing one-bit analogue-to-digital converters. These limitations hinder their applicability in real-time, low-power, and bandwidth-constrained scenarios. To address these challenges, this paper proposes a novel channel estimation framework based on conditional generative adversarial networks. The approach incorporates a U-Net-based generator and a sequential convolutional neural network discriminator to learn complex channel mappings from highly quantised received signals. Unlike existing methods, the proposed architecture dynamically adapts to various noise levels and system configurations, offering improved robustness and generalisation. Comprehensive experiments conducted on realistic indoor massive MIMO datasets demonstrate that the proposed method achieves substantial performance gains. The model improves estimation accuracy from 93% to 95.5% and significantly enhances normalised mean square error, consistently outperforming conventional and deep learning-based techniques across diverse training conditions. These results confirm the effectiveness of the proposed scheme in delivering high-accuracy channel estimation under extreme quantisation conditions, making it suitable for next-generation wireless systems.
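The least-squares baseline that such learned refiners start from is easy to reproduce. A toy real-valued sketch (pilot sizes and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Np = 4, 16                               # antennas, pilot symbols (toy)
X = rng.standard_normal((Np, Nt))            # known pilot matrix
h = rng.standard_normal(Nt)                  # true channel (real for brevity)
y = X @ h + 0.1 * rng.standard_normal(Np)    # received pilots with noise

# least-squares estimate: h_ls = argmin ||y - X h||^2
h_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
nmse = np.sum((h_ls - h) ** 2) / np.sum(h ** 2)
```

A learned refiner in this kind of framework would then take `h_ls` (or the raw quantised observations) as input and output a denoised channel estimate.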

Citations: 0
Variations in Wireless Network Topology Inference: Recent Evolution, Challenges, and Directions
IF 1.6 | CAS Zone 4, Computer Science | Q3 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-29 | DOI: 10.1049/cmu2.70073
Wenbo Du, Jun Cai, Weijun Zeng, Xiang Zheng, Huali Wang, Lei Zhu

Wireless networks, as the foundation of the modern information society, depend critically on network topology, all the more so with the development of sixth-generation (6G) mobile network technologies. The network topology not only shapes the mechanisms and functional dynamics of network evolution, but also reflects the communication relationships and information exchange among nodes. For this reason, wireless network topology inference has become a key research field in network science and the Internet of Things. Wireless network topology inference methods can be roughly divided into cooperative methods and non-cooperative methods. The former must participate directly in the communication process of the target network to obtain detailed internal information, which limits its applicability. In contrast, the latter infers the topology through external observation of packet timing, without needing advance knowledge of the network's internals, and is therefore more broadly practical. This paper first outlines the basic concepts and scope of topology inference and briefly reviews cooperative methods. Three types of non-cooperative methods are then comprehensively summarised: those based on statistical learning, machine learning, and rule analysis. Using a unified dataset and evaluation metrics, the performance of four representative non-cooperative topology inference algorithms is compared. Finally, this paper points out the challenges faced by network topology inference and proposes potential future research directions, aiming to provide theoretical support for the continuous development of this field.
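One of the simplest timing-based non-cooperative ideas, declaring a directed link when one node's transmissions reliably follow another's by one slot, can be sketched as below. The lag-1 correlation rule and the threshold are illustrative stand-ins for the surveyed methods:

```python
import numpy as np

def infer_links(activity, threshold=0.5):
    """Toy non-cooperative topology inference from externally observed
    transmit-time series. activity: (n_nodes, n_slots) 0/1 matrix of who
    transmitted in each slot. Declares a directed link i -> j when node j's
    transmissions correlate strongly with node i's one slot earlier."""
    n, _ = activity.shape
    links = set()
    for i in range(n):
        for j in range(n):
            if i != j:
                # does node j tend to transmit right after node i?
                c = np.corrcoef(activity[i, :-1], activity[j, 1:])[0, 1]
                if c > threshold:
                    links.add((i, j))
    return links
```

Real timing-based methods use richer statistics (mutual information, causality tests) and handle noise and collisions; this only shows the shape of the approach.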

Citations: 0
Research on Predicting Alarm of Signaling Storm by Hybrid LSTM-AM Optimized With Improved PSO
IF 1.6 | CAS Zone 4, Computer Science | Q3 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-22 | DOI: 10.1049/cmu2.70074
Ying Tong, Xiang Jia, Yong Deng, Yang Liu, Jiangang Tong

The prediction of IP multimedia subsystem (IMS) signaling storms is crucial for ensuring the stable operation of voice over new radio (VoNR) services and enhancing operators' core competitiveness. However, the current IMS signaling-storm prediction and alarm function for live network systems lacks robustness, with most attention focused on equipment fault detection and network element health monitoring. To address this limitation, this paper proposes a signaling storm prediction model comprising two modules: prediction and judgment. The prediction module combines the advantages of long short-term memory (LSTM) models and an attention mechanism (AM), improving convergence and accuracy through an enhanced particle swarm optimization (PSO) algorithm based on trigonometric transformation (TrigPSO). The judgment module effectively classifies predicted values into different alarm levels using K-Means. Experimental results based on data from China Telecom's scientific apparatus show that the proposed model accurately predicts key indicator values, with an improved R-squared (R²) value of 0.854 compared to models such as LSTM, LSTM-AM, LSTM-PSO, and LSTM-AM-PSO. Additionally, the K-Means model performs well in experimental data validation, demonstrating its scientific validity and high efficiency.
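The judgment module's bucketing step can be illustrated with a minimal 1-D K-Means. The quantile initialisation is an assumption made here to keep the toy deterministic; the paper's exact configuration may differ:

```python
import numpy as np

def alarm_levels(values, k=3, iters=20):
    """Minimal 1-D K-Means that buckets predicted indicator values into k
    alarm levels. Returns sorted centroids and per-value level labels
    (level 0 = lowest-value cluster)."""
    values = np.asarray(values, dtype=float)
    # spread-out deterministic initialisation via quantiles (an assumption)
    cent = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # assign each value to its nearest centroid, then recentre
        labels = np.argmin(np.abs(values[:, None] - cent[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                cent[c] = values[labels == c].mean()
    order = np.argsort(cent)              # relabel so levels are ordered
    cent = cent[order]
    labels = np.argmin(np.abs(values[:, None] - cent[None, :]), axis=1)
    return cent, labels
```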

Citations: 0
Deep Reinforcement Learning-Based Intelligent Resource Management in Multi-UAVs-Assisted MEC Emergency Communication System
IF 1.6 | CAS Zone 4, Computer Science | Q3 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-22 | DOI: 10.1049/cmu2.70063
Yuanmo Lin, Zhiyong Xu, Jianhua Li, Jingyuan Wang, Cheng Li

This paper investigates a multi-unmanned-aerial-vehicle (multi-UAV)-assisted mobile edge computing (MEC) emergency communication system in which each UAV acts as a mobile MEC server for computing tasks offloaded by ground sensor users. Considering the stochastic dynamic characteristics of multi-UAV-assisted MEC systems and the precision of spectrum resources, deep reinforcement learning (DRL) and non-orthogonal multiple access (NOMA) techniques are introduced. Specifically, we design an offloading algorithm based on a multi-agent deep deterministic policy gradient that jointly optimizes the UAVs' flight trajectories, the sensors' offloading powers, and the dynamic spectrum access to maximize the number of successfully offloaded tasks. The algorithm employs the Gumbel-Softmax method to effectively control both the discrete sensor-access action and the continuous offloading-power action. Extensive simulation results show that the proposed algorithm performs significantly better than other benchmark algorithms.
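The Gumbel-Softmax trick mentioned above relaxes a discrete action draw into a soft one-hot vector that a deterministic-policy actor can emit. A gradient-free numpy sketch (the temperature `tau` and the logits are illustrative):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Gumbel-Softmax relaxation of a categorical sample. Adding Gumbel(0,1)
    noise to the logits and taking a temperature-scaled softmax yields a soft
    one-hot vector; argmax of it recovers a discrete action. Lower tau makes
    the sample closer to one-hot. Numpy toy, no gradient machinery."""
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.uniform(1e-12, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                  # Gumbel(0, 1) noise
    y = (np.asarray(logits) + g) / tau
    y = np.exp(y - y.max())                  # numerically stable softmax
    return y / y.sum()
```

In a framework-based implementation (e.g. PyTorch), the same relaxation keeps the discrete channel-access choice differentiable during training.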

Citations: 0
Intelligent Reflecting Surface-Aided Wireless Networks: Deep Learning-Based Channel Estimation Using ResNet+UNet 智能反射表面辅助无线网络:基于深度学习的信道估计使用ResNet+UNet
IF 1.6 4区 计算机科学 Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-08-18 DOI: 10.1049/cmu2.70075
Sakhshra Monga, Aditya Pathania, Nitin Saluja, Gunjan Gupta, Ashutosh Sharma

Accurate channel estimation is essential for optimising intelligent reflecting surface-assisted multi-user communication systems, particularly in dynamic indoor environments. Conventional techniques such as least squares (LS), linear minimum mean square error (LMMSE), and orthogonal matching pursuit (OMP) suffer from noise sensitivity and fail to capture the spatial dependencies in high-dimensional intelligent reflecting surface (IRS)-assisted channels. To overcome these limitations, this work proposes a deep learning-driven ResNet+UNet framework that refines initial LS estimates using residual learning and multi-scale feature reconstruction. ResNet extracts spatial features, while UNet refines the channel estimate through hierarchical processing, efficiently suppressing noise and improving estimation accuracy. Simulation results show that the proposed method significantly outperforms existing methods across various performance metrics. In NMSE versus signal-to-noise ratio assessments, the proposed approach surpasses the convolutional deep residual network (CDRN) by 59%, OMP by 81%, LMMSE by 114%, and LS by 115%. When the number of IRS elements is varied, it outperforms CDRN by 60%, OMP by 78%, LS by 107%, and LMMSE by 110%. Across various antenna configurations, the proposed structure likewise outperforms CDRN by 39%, OMP by 44%, LS by 122%, and LMMSE by 129%. The proposed approach is particularly beneficial for augmented reality (AR) applications, where real-time, high-precision channel estimation ensures seamless data streaming and ultra-low latency, enhancing immersive experiences in AR-based communication and interactive environments. These results illustrate the proposed method's scalability and resilience, making it a suitable choice for next-generation IRS-assisted wireless communication networks.
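For context on the NMSE figures above, here is a minimal sketch of the LS baseline and the NMSE metric, using a toy scalar per-subcarrier model with unit pilots rather than the authors' IRS-assisted multi-user setup:

```python
# Minimal sketch of the LS baseline and the NMSE metric reported in the abstract.
# Toy per-subcarrier scalar model; all channel and noise values are illustrative.
import random

def ls_estimate(y, x):
    """Least-squares estimate: divide each received pilot by the transmitted pilot."""
    return [yi / xi for yi, xi in zip(y, x)]

def nmse(h_hat, h):
    """Normalised mean square error between an estimate and the true channel."""
    num = sum(abs(a - b) ** 2 for a, b in zip(h_hat, h))
    den = sum(abs(b) ** 2 for b in h)
    return num / den

# Illustrative 4-subcarrier channel observed through unit pilots plus noise.
h = [1.0 + 0.5j, 0.8 - 0.2j, 0.3 + 0.1j, -0.4 + 0.7j]
x = [1.0, 1.0, 1.0, 1.0]
y = [hi * xi + random.gauss(0, 0.05) + 1j * random.gauss(0, 0.05)
     for hi, xi in zip(h, x)]
h_ls = ls_estimate(y, x)
err = nmse(h_ls, h)  # a learned refiner such as ResNet+UNet would push this lower
```

The noise sensitivity of LS is visible directly: `err` scales with the noise variance, which is exactly what the residual-learning refinement stage is meant to suppress.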

{"title":"Intelligent Reflecting Surface-Aided Wireless Networks: Deep Learning-Based Channel Estimation Using ResNet+UNet","authors":"Sakhshra Monga,&nbsp;Aditya Pathania,&nbsp;Nitin Saluja,&nbsp;Gunjan Gupta,&nbsp;Ashutosh Sharma","doi":"10.1049/cmu2.70075","DOIUrl":"10.1049/cmu2.70075","url":null,"abstract":"<p>Accurate channel estimation is essential for optimising intelligent reflecting surface-assisted multi-user communication systems, particularly in dynamic indoor environments. Conventional techniques such as least squares (LS), linear minimum mean square error (LMMSE), and orthogonal matching pursuit (OMP) suffer from noise sensitivity and fail to effectively capture spatial dependencies in high-dimensional intelligent reflecting surface (IRS)-assisted channels. To overcome these limitations, this work proposes a deep learning-driven ResNet+UNet framework that refines initial LS estimates using residual learning and multi-scale feature reconstruction. While UNet enhances channel estimation through hierarchical processing, efficiently decreasing noise and enhancing estimate accuracy, ResNet gathers spatial features. Simulation results show that the proposed method significantly outperforms existing methods across various performance metrics. In NMSE versus signal-to-noise ratio assessments, the proposed approach surpasses convolutional deep residual network (CDRN) by 59%, OMP by 81%, LMMSE by 114%, and LS by 115%. When IRS elements are modified, it overcomes CDRN by 60%, OMP by 78%, LS by 107%, and LMMSE by 110%. Along with this, recommended structure performs more effectively than CDRN by 39%, OMP by 44%, LS by 122%, and LMMSE by 129% across various antenna configurations. The proposed approach is particularly beneficial for augmented reality (AR) applications, where real-time, high-precision channel estimation ensures seamless data streaming and ultra-low latency, enhancing immersive experiences in AR-based communication and interactive environments. 
These results illustrate the proposed method's scalability and resilience, making it a suitable choice for next-generation IRS-assisted wireless communication networks.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70075","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144861878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Adaptive Access Method for Edge Clusters of Distribution Automation Terminals Based on Cloud-Edge Fusion 一种基于云边缘融合的配电自动化终端边缘集群自适应接入方法
IF 1.6 4区 计算机科学 Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-08-13 DOI: 10.1049/cmu2.70057
Ruijiang Zeng, Zhiyong Li

As massive numbers of distribution automation terminals connect and data is acquired at high frequency, the demand for low-latency processing of distribution service data has increased dramatically. Edge clusters, which integrate multiple edge servers, can effectively mitigate transmission delays, and cloud-edge fusion combines the data processing capacity of the cloud with the real-time responsiveness of edge computing to meet the needs of efficient data processing and optimal resource allocation. However, existing access methods for distribution automation terminals in cloud-edge fusion architectures depend exclusively on either cloud or edge computing for data processing. These conventional approaches fail to incorporate critical aspects such as adaptive access mechanisms for edge clusters of distribution automation terminals, flexible strategies including data offloading, knowledge sharing among edge clusters, and load awareness, and they therefore show significant limitations in achieving deep fusion between the cloud and edge computing paradigms. They also neglect global information and queue backlog, making it difficult to meet the low-latency data transmission requirements of distribution automation services in dynamic environments. To address these issues, we propose an adaptive access method for edge clusters of distribution automation terminals based on cloud-edge fusion. First, a data processing architecture for adaptive access of distribution automation terminal edge clusters is designed to coordinate terminal access, data processing distribution, and decision optimization for computing resource allocation, enabling efficient data transmission and processing. Second, an optimization problem for adaptive access in edge clusters of distribution automation terminals is formulated, aiming to minimize the weighted sum of total queuing delay and load balancing degree. Finally, a federated twin delayed deep deterministic policy gradient (federated TD3) based edge cluster adaptive access method for distribution automation terminals is proposed. This approach aggregates model parameters from edge servers at the cloud level and distributes them back to the edge clusters, learning strategies for terminal access, data processing allocation, and computing resource allocation based on queue backlog fluctuations. This enhances load balancing between the distribution terminal layer and the edge layer, achieving collaborative optimization of load balancing and delay under massive distribution terminal access. Simulation results demonstrate that the proposed method significantly reduces system queuing delay, optimizes load balancing, and enhances overall operation efficiency.
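The cloud-level integration of edge-server model parameters can be sketched as a weighted federated average. Weighting by local sample counts is a common federated-learning convention assumed here, not a detail taken from the paper:

```python
# Hedged sketch: cloud-level aggregation of edge-server model parameters,
# in the spirit of the federated TD3 scheme described above.
def federated_average(params_per_edge, sample_counts):
    """Sample-count-weighted average of parameter vectors from edge servers."""
    total = sum(sample_counts)
    dim = len(params_per_edge[0])
    global_params = [0.0] * dim
    for params, n in zip(params_per_edge, sample_counts):
        w = n / total  # each edge server's contribution weight
        for i, p in enumerate(params):
            global_params[i] += w * p
    return global_params

# Three edge clusters report parameters; the cloud aggregates and redistributes.
edges = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
counts = [10, 10, 20]
print(federated_average(edges, counts))  # -> [3.5, 4.5]
```

The cloud would send `global_params` back to every edge cluster, which then resumes local TD3 training from the shared starting point.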

{"title":"An Adaptive Access Method for Edge Clusters of Distribution Automation Terminals Based on Cloud-Edge Fusion","authors":"Ruijiang Zeng,&nbsp;Zhiyong Li","doi":"10.1049/cmu2.70057","DOIUrl":"10.1049/cmu2.70057","url":null,"abstract":"<p>As massive distribution automation terminals connect and data is acquired at high frequencies, the demand for low-latency processing of distribution service data has increased dramatically. Edge clusters, integrating multiple edge servers, can effectively mitigate transmission delays. Cloud-edge fusion leverages its data processing capabilities and the real-time responsiveness of edge computing to meet the needs of efficient data processing and optimal resource allocation. However, existing access methods for distribution automation terminals in cloud-edge fusion architectures exclusively depend on either cloud or edge computing for data processing. These conventional approaches fail to incorporate critical aspects such as: adaptive access mechanisms for edge clusters of distribution automation terminals, flexible strategies including data offloading, knowledge sharing among edge clusters, and load awareness capabilities. Consequently, they demonstrate significant limitations in achieving deep fusion between cloud and edge computing paradigms. Additionally, they lack consideration for the perception of global information and queue backlog, making it difficult to meet the low-latency data transmission requirements of distribution automation services in dynamic environments. To address these issues, we propose an adaptive access method for edge clusters of distribution automation terminals based on cloud-edge fusion. Firstly, a data processing architecture for adaptive access of distribution automation terminal edge clusters are designed to coordinate terminal access, data processing distribution, and decision optimization for computing resource allocation, enabling efficient data transmission and processing. 
Secondly, an optimization problem for adaptive access in edge clusters of distribution automation terminals is formulated, aiming to minimize the weighted sum of total queuing delay and load balancing degree. Finally, a federated twin delayed deep deterministic policy gradient (federated TD3)-based edge cluster adaptive access method for distribution automation terminal is proposed. This approach integrates model parameters from edge servers at the cloud level and distributes them to the edge cluster level, learning strategies for terminal access, data processing allocation, and computing resource allocation based on queue backlog fluctuations. This enhances load balancing between the distribution terminal layer and edge layer, achieving collaborative optimization of load balancing and delay under massive distribution terminal access. Simulation results demonstrate that the proposed method significantly reduces system queuing delay, optimizes load balancing, and enhances overall operation efficiency.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70057","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144832643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Hybrid Approach for Intrusion Detection Using Neuro-Fuzzy, SVM, and PSO 基于神经模糊、支持向量机和粒子群的入侵检测混合方法
IF 1.6 4区 计算机科学 Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-08-06 DOI: 10.1049/cmu2.70071
Soodeh Hosseini, Fahime Lotfi, Hossein Seilani

This paper presents a novel method for optimising intrusion detection systems (IDS) using two powerful techniques: principal component analysis (PCA) and particle swarm optimisation (PSO). The proposed approach is implemented on two categories of classifiers, Neuro-Fuzzy and support vector machines (SVM), evaluated on four widely used intrusion detection datasets: CAIDA, DARPA, NSLKDD, and ISCX2012. Performance results are analysed individually against a set of established evaluation criteria. The PSO algorithm is then applied to search for the best combination of the outputs of the Neuro-Fuzzy and SVM models, yielding higher attack detection accuracy with reduced false alarm rates. A further benefit of PCA in the proposed method is that it considerably reduces the dimensionality of the data by computing the principal components, which brings several advantages: lower model complexity, shorter training and execution times, reduced memory usage, and less overfitting. By concentrating on the major components, PCA also suppresses noise in the data to some extent, improving classification accuracy and robustness, and it enhances model interpretability by highlighting the key components. PSO is additionally used to tune the parameters of the Neuro-Fuzzy and SVM models. The results support that the proposed output combination method, in both the Neuro-Fuzzy and SVM categories, significantly enhances attack detection accuracy while reducing the false alarm rate.
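The PSO-based output combination described above can be sketched minimally as a one-dimensional search for a mixing weight. All data, PSO constants, and the plain-accuracy fitness below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch: a tiny PSO searching for the weight alpha that combines
# two classifiers' scores as  score = alpha * s_svm + (1 - alpha) * s_nf.
import random

def accuracy(alpha, s_svm, s_nf, labels):
    """Fraction of samples the combined score classifies correctly (threshold 0.5)."""
    correct = 0
    for a, b, y in zip(s_svm, s_nf, labels):
        pred = 1 if alpha * a + (1 - alpha) * b >= 0.5 else 0
        correct += int(pred == y)
    return correct / len(labels)

def pso_weight(s_svm, s_nf, labels, n_particles=10, iters=30):
    # Standard PSO with inertia 0.7 and cognitive/social constants 1.5,
    # searching the one-dimensional interval [0, 1].
    pos = [random.random() for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best_pos = pos[:]  # per-particle best positions
    best_fit = [accuracy(p, s_svm, s_nf, labels) for p in pos]
    g_fit = max(best_fit)
    g_pos = best_pos[best_fit.index(g_fit)]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (best_pos[i] - pos[i])
                      + 1.5 * r2 * (g_pos - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
            fit = accuracy(pos[i], s_svm, s_nf, labels)
            if fit > best_fit[i]:
                best_pos[i], best_fit[i] = pos[i], fit
                if fit > g_fit:
                    g_pos, g_fit = pos[i], fit
    return g_pos
```

On real traffic data the fitness would also penalise false alarms, as the abstract emphasises; plain accuracy keeps the sketch short.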

{"title":"A Novel Hybrid Approach for Intrusion Detection Using Neuro-Fuzzy, SVM, and PSO","authors":"Soodeh Hosseini,&nbsp;Fahime Lotfi,&nbsp;Hossein Seilani","doi":"10.1049/cmu2.70071","DOIUrl":"10.1049/cmu2.70071","url":null,"abstract":"<p>This paper presents a novel method for optimising intrusion detection systems (IDS) by using two powerful techniques, namely ‘Principal component analysis (PCA)’ and ‘Particle swarm optimisation (PSO).’ Furthermore, the proposed approach is implemented on two categories of classifiers, Neuro-Fuzzy and support vector machines (SVM), which function on four widely used intrusion detection system datasets: CAIDA, DARPA, NSLKDD, and ISCX2012. Performance results are analysed individually based on a set of established evaluation criteria. Furthermore, the PSO algorithm is applied in search of the best combination of the outputs from the Neuro-Fuzzy and the SVM models, resulting in better attack detection accuracy with reduced false alarm rates. Another benefit of using PCA in the proposed method is that it considerably reduces the dimensions of the data by computing the principal components. This offers several advantages, such as reduced model complexity, training and execution time, memory usage, and model overfitting prevention. By focusing on the major components, PCA reduces noise in data to a certain extent, leading to increased classification accuracy and robustness. It also improves model interpretability by highlighting the key components. The application of PSO to find the most optimal parameters leads to the optimisation of the Neuro-Fuzzy and SVM models' parameters. 
The results achieved support that the proposed method for output combination in both Neuro-Fuzzy and SVM categories significantly enhances the accuracy of attack detection while reducing the false alarm rate.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70071","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144782625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0