
Latest publications from IEEE Transactions on Machine Learning in Communications and Networking

Radio Map-Based Delivery Sequence Design and Trajectory Optimization in UAV Cargo Delivery Systems
Pub Date: 2025-12-02 DOI: 10.1109/TMLCN.2025.3639348
Fahui Wu;Zhijie Wang;Jiangling Cao;Shi Peng;Yu Xu;Yunfei Gao;Qinghua Wu;Dingcheng Yang
In this paper, we consider a UAV-assisted cargo delivery system with limited payload capacity. Because the cargo UAV can carry only a limited load, it must make multiple trips back to the warehouse to pick up parcels. Meanwhile, since cellular signal strength is unevenly distributed in the air, the cellular-connected UAV must detour around weak-signal regions to deliver logistics information to ground users (GUs) in time. Both factors increase the total cargo delivery time. To reduce the total delivery time while ensuring the UAV's communication quality, we formulate an objective function defined as the weighted sum of the cargo UAV's delivery time and its communication outage time. To solve this problem, we propose a limited-payload UAV delivery (LP-UAV-D) framework that combines the particle swarm optimization (PSO) algorithm with the dueling double deep Q-network (D3QN) algorithm. We use two classic algorithms as baselines. Numerical results show that, regardless of the UAV's maximum payload or flight speed, the LP-UAV-D framework, aided by radio maps, always attains the smallest objective value. Specifically, it improves the delivery-time versus communication-quality trade-off by about 10%-20% over the two baselines.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 17-32. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11272178
Citations: 0
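The trade-off described in the abstract reduces to a scalar objective. A minimal Python sketch follows; the function name, weight value, and sample times are illustrative and not taken from the paper's parameter settings:

```python
def delivery_objective(delivery_time, outage_time, weight=0.5):
    """Weighted sum of total delivery time and communication outage time.

    `weight` trades off delivery speed against connectivity; the actual
    weighting used in the paper is not reproduced here.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return weight * delivery_time + (1.0 - weight) * outage_time

# Comparing two hypothetical trajectories for the cargo UAV:
fast_but_noisy = delivery_objective(delivery_time=100.0, outage_time=40.0)
slow_but_connected = delivery_objective(delivery_time=120.0, outage_time=5.0)
```

With these toy numbers, the slower but better-connected trajectory scores lower, which is exactly the kind of trade-off the weighted sum lets PSO and D3QN rank.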
Robust Generalization of Graph Neural Networks for Scheduling Backscatter Communications at Scale
Pub Date: 2025-11-28 DOI: 10.1109/TMLCN.2025.3638711
Daniel F. Pérez-Ramírez;Nicolas Tsiftes;Carlos Pérez-Penichet;Dejan Kostić;Thiemo Voigt;Magnus Boman
Novel backscatter communication techniques allow battery-free sensor tags to operate with standard IoT devices, thereby augmenting a network’s sensing capabilities. For communicating, sensor tags rely on an unmodulated carrier provided by neighboring IoT devices, with a schedule coordinating this provisioning across the network. Computing schedules to interrogate all sensor tags while minimizing energy, spectrum utilization, and latency—i.e., carrier scheduling—is an NP-hard problem. While recent work introduces learning-based systems for carrier scheduling, we find that their advantage over traditional heuristics progressively decreases for networks with hundreds of IoT nodes. Moreover, we find that their generalization is not consistent: it greatly varies across identically trained models even when the dataset, hyperparameters, and random seeds are fixed. We present RobustGANTT, a Graph Neural Network scheduler for backscatter networks that learns from optimal schedules of small networks (up to 10 nodes). Our scheduler generalizes, without retraining, to networks of up to hundreds of nodes (100× the training topology size), and exhibits consistent generalization across independent training rounds. We evaluate our system on both simulated topologies of up to 1000 nodes and real-life IoT network topologies of up to 300 IoT devices. RobustGANTT not only exhibits better generalization than existing systems, it also computes schedules achieving up to 2× lower energy and spectrum utilization. Additionally, its polynomial runtime complexity allows it to react quickly to changing network conditions. Our work facilitates the operation of large-scale IoT networks, and our machine learning findings further advance the capabilities of learning-based network scheduling. We release our code, datasets and pre-trained models.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 76-97. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11271344
Citations: 0
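For readers unfamiliar with GNN schedulers, the core operation is neighborhood aggregation. Below is a generic mean-aggregation message-passing layer in NumPy; it is not RobustGANTT's architecture, and the graph, features, and weights are toy values:

```python
import numpy as np

def gnn_layer(node_features, adjacency, weight):
    """One mean-aggregation message-passing step: each node averages its
    neighbors' features, concatenates them with its own, and applies a
    shared linear map followed by ReLU."""
    deg = adjacency.sum(axis=1, keepdims=True)
    neigh = adjacency @ node_features / np.maximum(deg, 1)  # neighbor mean
    h = np.concatenate([node_features, neigh], axis=1) @ weight
    return np.maximum(h, 0.0)  # ReLU

# 3-node line graph: 0 - 1 - 2
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)            # one-hot node features
W = np.ones((6, 2))      # shared weights (toy values)
H = gnn_layer(X, A, W)
```

Because the same `W` is applied at every node, the layer works for any topology size, which is what lets a scheduler trained on 10-node graphs be applied to graphs with hundreds of nodes.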
Clustered Federated Learning to Support Context-Dependent CSI Decoding
Pub Date: 2025-11-28 DOI: 10.1109/TMLCN.2025.3638983
Heasung Kim;Hyeji Kim;Gustavo De Veciana
Neural network-based encoders and decoders have demonstrated significant performance gains over traditional methods for Channel State Information (CSI) feedback in MIMO communications. However, key challenges in deploying these models in real-world scenarios remain underexplored, including: a) the need to efficiently accommodate diverse channel conditions across varying contexts, e.g., environments, and whether to use multiple encoders and decoders; b) the cost of gathering sufficient data to train neural network models across various contexts; and c) the need to protect sensitive data regarding competing providers’ coverages. To address the first challenge, we propose a novel system using context-dependent decoders and a universal encoder. We limit the number of decoders by clustering similar contexts and allowing those within a cluster to share the same decoder. To address the second and third challenges, we introduce a clustered federated learning-based approach that jointly clusters contexts and learns the desired encoder and context cluster-dependent decoders, leveraging distributed data. The clustering is performed efficiently based on the similarity of time-averaged gradients across contexts. To evaluate our approach, a new dataset reflecting the heterogeneous nature of the wireless systems was curated and made publicly available. Extensive experimental results demonstrate that our proposed CSI compression framework is highly effective and able to efficiently determine a correct context clustering and associated encoder and decoders.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 211-227. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11271400
Citations: 0
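The clustering criterion described above (similarity of time-averaged gradients) can be sketched as a greedy cosine-similarity grouping. The anchor-based greedy grouping and the threshold value are simplifications for illustration, not the paper's exact procedure:

```python
import numpy as np

def cluster_by_gradient_similarity(avg_grads, threshold=0.9):
    """Group clients whose time-averaged gradients point in similar
    directions. A client joins the first cluster whose anchor (first
    member) has cosine similarity >= threshold; otherwise it starts
    a new cluster."""
    g = np.asarray(avg_grads, dtype=float)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)  # unit-normalize
    clusters = []
    for i in range(len(g)):
        for c in clusters:
            if g[i] @ g[c[0]] >= threshold:  # compare to cluster anchor
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Two clients with near-horizontal gradients, two near-vertical:
grads = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.05, 0.98]]
clusters = cluster_by_gradient_similarity(grads)
```

Each resulting cluster would then share one context-dependent decoder, while the encoder stays universal.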
PFL-GAN: Client Heterogeneity Meets Generative Models in Personalized Federated Learning
Pub Date: 2025-11-27 DOI: 10.1109/TMLCN.2025.3637784
Achintha Wijesinghe;Songyang Zhang;Zhi Ding
Recent advances in generative artificial intelligence (AI) have led to rising interest in federated learning (FL) based on generative adversarial network (GAN) models. GAN-based FL shows promise in many communication and network applications, such as edge computing and the Internet of Things. In the context of FL, GANs can capture the underlying client data structure and regenerate samples resembling the original data distribution without compromising data privacy. Although most existing GAN-based FL works focus on training a global model, personalized FL (PFL) can be more desirable in scenarios where client data are heterogeneous in their distributions, feature spaces, and labels. To cope with client heterogeneity in GAN-based FL, we propose a novel GAN sharing and aggregation strategy for PFL that can efficiently characterize client heterogeneity in different settings. More specifically, our proposed PFL-GAN first learns the similarities among clients before implementing a weighted collaborative data aggregation. Rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 33-44. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11270937
Citations: 0
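The weighted collaborative aggregation step can be illustrated as a convex combination of client parameters weighted by similarity scores. The scores and parameter values below are placeholders; how PFL-GAN actually measures similarity is defined in the paper, not here:

```python
import numpy as np

def weighted_aggregate(client_params, similarities):
    """Similarity-weighted parameter aggregation for one target client:
    each client's (flattened) parameters contribute in proportion to
    its similarity score to the target."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()  # normalize to a convex combination
    params = np.asarray(client_params, dtype=float)
    return (w[:, None] * params).sum(axis=0)

# Three clients' flattened parameters; the target is most similar to client 0.
params = [[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]]
personalized = weighted_aggregate(params, similarities=[1.0, 0.5, 0.5])
```

Unlike plain FedAvg, the weights differ per target client, so each client receives a personalized aggregate rather than one global blend.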
Personalized Federated Learning With Adaptive Transformer Pruning and Hypernetwork-Driven Personalization in Wireless Networks
Pub Date: 2025-11-25 DOI: 10.1109/TMLCN.2025.3637083
Moqbel Hamood;Abdullatif Albaseer;Hassan El-Sallabi;Mohamed Abdallah;Ala Al-Fuqaha;Bechir Hamdaoui
Deploying transformer models in Personalized Federated Learning (PFL) at the wireless edge faces critical challenges, including high communication overhead, latency, and energy consumption. Existing compression methods, such as pruning and sparsification, typically degrade performance due to the sensitivity of self-attention layers (SALs) to parameter reduction. Also, standard federated averaging (FedAvg) often diminishes personalization by blending crucial client-specific parameters. To overcome these issues, we propose PFL-TPP (Personalized Federated Learning with Transformer Pruning and Personalization). This dual-strategy framework effectively reduces computational and communication burdens while maintaining high model accuracy and personalization. Our approach employs dynamic, learnable threshold pruning on feed-forward layers (FFLs) to eliminate redundant computations. For SALs, we introduce a novel server-side hypernetwork that generates personalized attention parameters from client-specific embeddings, significantly cutting communication overhead without sacrificing personalization. Extensive experiments demonstrate that PFL-TPP achieves up to 82.73% energy savings, 86% reduction in training time, and improved model accuracy compared to standard baselines. These results demonstrate the effectiveness of our proposed approach in enabling scalable, communication-efficient deployment of transformers in real-world PFL scenarios.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 1-16. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11268477
Citations: 0
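Threshold pruning of feed-forward weights, as described above, amounts to masking small-magnitude entries. In PFL-TPP the threshold is learnable per layer; this sketch fixes it to a constant for illustration:

```python
import numpy as np

def threshold_prune(weights, threshold):
    """Zero out weights whose magnitude falls below `threshold`.
    Returns the pruned matrix and the resulting sparsity ratio."""
    mask = np.abs(weights) >= threshold
    return weights * mask, 1.0 - mask.mean()

# Toy 2x2 feed-forward weight matrix:
W = np.array([[0.05, -0.8], [0.3, -0.02]])
pruned, sparsity = threshold_prune(W, threshold=0.1)
```

Only the zeroed entries need not be transmitted, which is where the communication savings come from; the self-attention layers, being more sensitive, are handled by the hypernetwork instead.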
Resource Optimization in Multi-Hop IAB Networks: Balancing Data Freshness and Spectral Efficiency
Pub Date: 2025-11-20 DOI: 10.1109/TMLCN.2025.3635578
Sarder Fakhrul Abedin;Aamir Mahmood;Zhu Han;Mikael Gidlund
This work proposes a multi-objective resource optimization framework for integrated access and backhaul (IAB) networks, tackling the dual challenges of timely data updates and spectral efficiency under dynamic wireless conditions. Conventional single-objective optimization is often impractical for IAB networks, where objective preferences are unknown or difficult to predefine. Therefore, we formulate a multi-objective problem that minimizes the age of information (AoI) and maximizes spectral efficiency, subject to a risk-aware AoI constraint, access-backhaul throughput fairness, and other contextual requirements. A lightweight proportional fair (PF) scheduling algorithm first handles user association and access resource allocation. Subsequently, a Pareto Q-learning-based reinforcement learning (RL) scheme allocates backhaul resources, with the PF scheduler’s outcomes integrated into the state and constrained action spaces of a Markov decision process (MDP). The reward function balances AoI and spectral efficiency objectives while explicitly capturing fairness, thereby resulting in robust long-term performance without imposing fixed weights. Furthermore, an adaptive value-difference-based exploration technique adjusts exploration rates based on Q-value estimate variances, promoting strategic exploration for optimal trade-offs. Simulations show that the proposed method outperforms baselines, reducing the convexity gap between approximated and optimal Pareto fronts by 68.6% and improving fairness by 16.9%.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1287-1310. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11262194
Citations: 0
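The access-side scheduling step follows the textbook proportional-fair rule: serve the user with the highest ratio of instantaneous rate to long-term average throughput. A minimal sketch (the paper's scheduler additionally handles IAB access-backhaul constraints not modeled here):

```python
def pf_schedule(instant_rates, avg_throughputs):
    """Proportional-fair user selection: return the index of the user
    maximizing instantaneous_rate / average_throughput. Starved users
    (low average) get boosted even with a weaker current channel."""
    metrics = [r / max(t, 1e-9) for r, t in zip(instant_rates, avg_throughputs)]
    return max(range(len(metrics)), key=metrics.__getitem__)

# User 1 has a worse channel right now but has been starved, so PF picks it.
chosen = pf_schedule(instant_rates=[10.0, 4.0], avg_throughputs=[8.0, 1.0])
```

Feeding the PF outcome into the RL state, as the paper does, lets the Pareto Q-learning agent allocate backhaul resources consistently with the access-side fairness already enforced.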
Task-Specific Sharpness-Aware O-RAN Resource Management Using Multi-Agent Reinforcement Learning
Pub Date: 2025-11-19 DOI: 10.1109/TMLCN.2025.3634994
Fatemeh Lotfi;Hossein Rajoli;Fatemeh Afghah
Next-generation networks utilize the Open Radio Access Network (O-RAN) architecture to enable dynamic resource management, facilitated by the RAN Intelligent Controller (RIC). While deep reinforcement learning (DRL) models show promise in optimizing network resources, they often struggle with robustness and generalizability in dynamic environments. This paper introduces a novel resource management approach that enhances the Soft Actor Critic (SAC) algorithm with Sharpness-Aware Minimization (SAM) in a distributed Multi-Agent RL (MARL) framework. Our method introduces an adaptive and selective SAM mechanism, where regularization is explicitly driven by temporal-difference (TD)-error variance, ensuring that only agents facing high environmental complexity are regularized. This targeted strategy reduces unnecessary overhead, improves training stability, and enhances generalization without sacrificing learning efficiency. We further incorporate a dynamic ρ scheduling scheme to refine the exploration-exploitation trade-off across agents. Experimental results show our method significantly outperforms conventional DRL approaches, yielding up to a 22% improvement in resource allocation efficiency and ensuring superior QoS satisfaction across diverse O-RAN slices.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 98-114. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11260483
Citations: 0
Communication Efficient Federated Learning With Quantization-Aware Training Design
Pub Date: 2025-11-19. DOI: 10.1109/TMLCN.2025.3635050
Xiang Fang;Li Chen;Huarui Yin;Xiaohui Chen;Weidong Wang
Model quantization is an effective method for improving communication efficiency in federated learning (FL). Existing FL quantization protocols mostly remain at the level of post-training quantization (PTQ), which comes at the cost of large quantization loss, especially in low-bit settings. In this work, we propose an FL quantization training strategy to reduce the impact of quantization on model quality. Specifically, we first apply quantization-aware training (QAT) to FL (QAT-FL), which reduces quantization distortion by adding a fake-quantization module to the model so that the model can perceive future quantization during training. The convergence guarantee of the QAT-FL algorithm is established under certain assumptions. Building on the QAT-FL algorithm, we extend the discussion to non-uniform quantization and an adaptive algorithm, so that the model can adaptively adjust the parameter distribution and the number of quantization bits to reduce the amount of traffic in training. Experimental results on the MNIST, CIFAR-10, and FEMNIST datasets show that QAT-FL has advantages in terms of training loss and model inference accuracy, and the adaptive-bit quantization of QAT-FL also greatly improves communication efficiency.
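The fake-quantization module at the heart of QAT can be illustrated in a few lines: the forward pass rounds values onto a low-bit uniform grid and immediately dequantizes them, so training sees the quantization error while parameters stay in floating point (in practice the backward pass uses a straight-through estimator). A minimal sketch, with an illustrative bit-width and clipping range that are assumptions, not the paper's configuration:

```python
import numpy as np

def fake_quantize(x, num_bits=4, x_min=-1.0, x_max=1.0):
    """Simulate low-bit uniform quantization in the forward pass.

    The tensor is snapped to 2**num_bits - 1 steps over [x_min, x_max]
    and immediately dequantized, so the model 'perceives' the future
    quantization error during training while values remain floats.
    """
    levels = 2 ** num_bits - 1
    scale = (x_max - x_min) / levels
    q = np.round((np.clip(x, x_min, x_max) - x_min) / scale)
    return q * scale + x_min

x = np.array([-0.8, -0.13, 0.0, 0.49, 0.9])
xq = fake_quantize(x, num_bits=4)
# Per-element error is bounded by half a quantization step.
assert np.max(np.abs(xq - x)) <= (2.0 / 15) / 2 + 1e-6
```

The adaptive-bit variant described in the abstract would amount to choosing `num_bits` (and the clipping range) per round based on the parameter distribution, rather than fixing them as here.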
{"title":"Communication Efficient Federated Learning With Quantization-Aware Training Design","authors":"Xiang Fang;Li Chen;Huarui Yin;Xiaohui Chen;Weidong Wang","doi":"10.1109/TMLCN.2025.3635050","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3635050","url":null,"abstract":"Model quantization is an effective method that can improve communication efficiency in federated learning (FL). The existing FL quantization protocols almost stay at the level of post-training quantization (PTQ), which comes at the cost of large quantization loss, especially in the setting of low-bits quantization. In this work, we propose a FL quantization training strategy to reduce the impact of quantization on model quality. Specifically, we first apply quantization-aware training (QAT) to FL (QAT-FL), which reduces quantization distortion by adding a fake-quantization module to the model so that the model could perceive future quantization during training. The convergence guarantee of the QAT-FL algorithm is established under certain assumptions. On the basis of the QAT-FL algorithm, we extend the discussion of non-uniform quantization and the adaptive algorithm, so that the model can adaptively adjust the parametric distribution and the number of quantization bits to reduce the amount of traffic in training. 
Experimental results based on MNIST, CIFAR-10 and FEMNIST datasets show that QAT-FL has advantages in terms of training loss and model inference accuracy, and adaptive-bits quantization of QAT-FL also greatly improves communication efficiency.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"4 ","pages":"45-59"},"PeriodicalIF":0.0,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11260453","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Reinforcement Learning Framework for Resource Allocation in Uplink Carrier Aggregation in the Presence of Self Interference
Pub Date: 2025-11-14. DOI: 10.1109/TMLCN.2025.3633248
Jaswanth Bodempudi;Batta Siva Sairam;Madepalli Haritha;Sandesh Rao Mattu;Ananthanarayanan Chockalingam
To meet the ever-increasing demand for higher data rates in mobile networks across generations, many novel schemes have been proposed in the standards. One such scheme is carrier aggregation (CA). Simply put, CA is a technique that allows mobile networks to combine multiple carriers to increase data rates and improve network efficiency. On the uplink, for power-constrained users, this translates to the need for an efficient resource allocation scheme in which each user distributes its available power among its assigned uplink carriers. Choosing a good set of carriers and allocating appropriate power on them is of paramount importance for good performance. Another critical factor is how well the degradation caused by the harmonic/intermodulation terms generated by the user's transmitter non-linearities is handled. For example, if the carrier allocation is such that a harmonic of a user's uplink carrier falls on that user's downlink frequency, it leads to a self-coupling-induced sensitivity degradation of that user's downlink receiver. Considering these factors, in this paper we model uplink carrier aggregation as an optimal resource allocation problem with the associated constraints of non-linearity-induced self-interference (SI). This involves optimization over a discrete variable (which carriers to turn on) and a continuous variable (how much power to allocate on the selected carriers) in dynamic environments, a problem that is hard to solve with traditional methods owing to the mixed nature of the optimization variables and the additional need to handle the SI constraint. Therefore, in this paper we adopt a reinforcement learning (RL) framework built around a compound-action actor-critic (CA2C) algorithm for the uplink carrier aggregation problem. We propose a novel reward function that is critical for enabling the proposed CA2C algorithm to handle SI efficiently. The CA2C algorithm, together with the proposed reward function, learns to assign and activate suitable carriers in an online fashion. Numerical results demonstrate that the proposed RL-based scheme achieves higher sum throughputs than naive schemes, and that the proposed reward function allows the CA2C algorithm to adapt the optimization both in the presence and absence of SI.
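The compound action the abstract describes pairs a discrete decision (which carriers to activate) with a continuous one (how to split power), and the reward must trade throughput against self-interference. The toy sketch below enumerates the discrete part with an equal power split and subtracts a fixed penalty for any active carrier whose harmonic hits the downlink; the gains, penalty, and log-rate model are illustrative assumptions, not the paper's CA2C setup or reward function:

```python
import numpy as np

def sum_rate(active, powers, gains, si_carriers, si_penalty=5.0):
    """Reward for a compound action: 'active' is a binary carrier mask
    (discrete part) and 'powers' the per-carrier power split (continuous
    part). Rate follows a log2(1 + SNR) model; a penalty is subtracted
    for each active carrier in si_carriers (those whose harmonic falls
    on the user's downlink frequency)."""
    p = active * powers
    rate = np.sum(np.log2(1.0 + gains * p))
    penalty = sum(si_penalty for c in si_carriers if active[c])
    return rate - penalty

gains = np.array([4.0, 2.0, 1.0])  # illustrative channel gains per carrier
total_power = 2.0

best = None
for mask in range(1, 2 ** 3):                      # enumerate the discrete part
    active = np.array([(mask >> i) & 1 for i in range(3)], dtype=float)
    powers = active * total_power / active.sum()   # equal split (continuous part)
    r = sum_rate(active, powers, gains, si_carriers=[2])
    if best is None or r > best[0]:
        best = (r, mask)
print(best[1])  # prints 3: carriers 0 and 1 on; carrier 2's SI penalty outweighs its rate
```

An RL agent replaces this brute-force enumeration: the actor outputs the carrier subset and power split jointly, and the critic learns the long-run value of such compound actions under a reward of this shape.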
{"title":"A Reinforcement Learning Framework for Resource Allocation in Uplink Carrier Aggregation in the Presence of Self Interference","authors":"Jaswanth Bodempudi;Batta Siva Sairam;Madepalli Haritha;Sandesh Rao Mattu;Ananthanarayanan Chockalingam","doi":"10.1109/TMLCN.2025.3633248","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3633248","url":null,"abstract":"To meet the ever-increasing demand for higher data rates in mobile networks across generations, many novel schemes have been proposed in the standards. One such scheme is carrier aggregation (CA). Simply put, CA is a technique that allows mobile networks to combine multiple carriers to increase data rate and improve network efficiency. On the uplink, for power constrained users, this translates to the need for an efficient resource allocation scheme, where each user distributes its available power among its assigned uplink carriers. Choosing a good set of carriers and allocating appropriate power on the carriers is of paramount importance for good performance. Another factor that is critical to obtaining good performance is how well the degradation caused by the harmonic/intermodulation terms generated by the user’s transmitter non-linearities is handled. Specifically, for example, if the carrier allocation is such that a harmonic of a user’s uplink carrier falls on the downlink frequency of that user, it leads to a self coupling-induced sensitivity degradation of that user’s downlink receiver. Considering these factors, in this paper, we model the uplink carrier aggregation problem as an optimal resource allocation problem with the associated constraints of non-linearities induced self interference (SI). 
This involves optimization over a discrete variable (which carriers need to be turned on) and a continuous variable (what power needs to be allocated on the selected carriers) in dynamic environments, a problem which is hard to solve using traditional methods owing to the mixed nature of the optimization variables and the additional need to consider the SI constraint in the problem. Therefore, in this paper, we adopt a reinforcement learning (RL) framework involving a compound-action actor-critic (CA2C) algorithm for the uplink carrier aggregation problem. We propose a novel reward function that is critical for enabling the proposed CA2C algorithm to efficiently handle SI. The CA2C algorithm along with the proposed reward function learns to assign and activate suitable carriers in an online fashion. Numerical results demonstrate that the proposed RL based scheme is able to achieve higher sum throughputs compared to naive schemes. The results also demonstrate that the proposed reward function allows the CA2C algorithm to adapt the optimization both in the presence and absence of SI.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"1265-1286"},"PeriodicalIF":0.0,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11248959","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Autonomous and Efficient Cybersecurity: A Multi-Objective AutoML-Based Intrusion Detection System
Pub Date: 2025-11-11. DOI: 10.1109/TMLCN.2025.3631379
Li Yang;Abdallah Shami
With increasingly sophisticated cybersecurity threats and rising demand for network automation, autonomous cybersecurity mechanisms are becoming critical for securing modern networks. The rapid expansion of Internet of Things (IoT) systems amplifies these challenges, as resource-constrained IoT devices demand scalable and efficient security solutions. In this work, an innovative Intrusion Detection System (IDS) utilizing Automated Machine Learning (AutoML) and Multi-Objective Optimization (MOO) is proposed for autonomous and optimized cyber-attack detection in modern networking environments. The proposed IDS framework integrates two primary innovative techniques: Optimized Importance and Percentage-based Automated Feature Selection (OIP-AutoFS) and Optimized Performance, Confidence, and Efficiency-based Combined Algorithm Selection and Hyperparameter Optimization (OPCE-CASH). These components optimize feature selection and model learning processes to strike a balance between intrusion detection effectiveness and computational efficiency. This work presents the first IDS framework that integrates all four AutoML stages and employs multi-objective optimization to jointly optimize detection effectiveness, efficiency, and confidence for deployment in resource-constrained systems. Experimental evaluations over two benchmark cybersecurity datasets demonstrate that the proposed MOO-AutoML IDS outperforms state-of-the-art IDSs, establishing a new benchmark for autonomous, efficient, and optimized security for networks. Designed to support IoT and edge environments with resource constraints, the proposed framework is applicable to a variety of autonomous cybersecurity applications across diverse networked environments.
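One common way to make "effectiveness, efficiency, and confidence" jointly optimizable, as in the combined algorithm selection step described above, is to scalarize the objectives into a single score and rank candidates by it. The snippet below is a deliberately simplified stand-in: the candidate models, metric names, weights, and normalization are invented for illustration and do not come from the paper:

```python
def moo_score(candidates, weights=(0.5, 0.3, 0.2)):
    """Pick the best model by a weighted sum of three objectives:
    detection effectiveness (f1, maximize), prediction confidence
    (maximize), and inference time (minimize, hence subtracted).
    A simple scalarization sketch of multi-objective model selection."""
    w_f1, w_conf, w_time = weights

    def score(c):
        # time_ms is scaled to roughly [0, 1] before weighting (assumed range)
        return w_f1 * c["f1"] + w_conf * c["confidence"] - w_time * c["time_ms"] / 100.0

    return max(candidates, key=score)

models = [
    {"name": "tree", "f1": 0.92, "confidence": 0.88, "time_ms": 5.0},
    {"name": "mlp",  "f1": 0.95, "confidence": 0.90, "time_ms": 40.0},
    {"name": "knn",  "f1": 0.93, "confidence": 0.85, "time_ms": 120.0},
]
print(moo_score(models)["name"])  # prints "tree": slightly lower F1, but far cheaper to run
```

On a resource-constrained IoT device the efficiency weight dominates, which is why the cheapest adequate model wins here; shifting the weights toward F1 would instead select the heavier model.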
{"title":"Toward Autonomous and Efficient Cybersecurity: A Multi-Objective AutoML-Based Intrusion Detection System","authors":"Li Yang;Abdallah Shami","doi":"10.1109/TMLCN.2025.3631379","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3631379","url":null,"abstract":"With increasingly sophisticated cybersecurity threats and rising demand for network automation, autonomous cybersecurity mechanisms are becoming critical for securing modern networks. The rapid expansion of Internet of Things (IoT) systems amplifies these challenges, as resource-constrained IoT devices demand scalable and efficient security solutions. In this work, an innovative Intrusion Detection System (IDS) utilizing Automated Machine Learning (AutoML) and Multi-Objective Optimization (MOO) is proposed for autonomous and optimized cyber-attack detection in modern networking environments. The proposed IDS framework integrates two primary innovative techniques: Optimized Importance and Percentage-based Automated Feature Selection (OIP-AutoFS) and Optimized Performance, Confidence, and Efficiency-based Combined Algorithm Selection and Hyperparameter Optimization (OPCE-CASH). These components optimize feature selection and model learning processes to strike a balance between intrusion detection effectiveness and computational efficiency. This work presents the first IDS framework that integrates all four AutoML stages and employs multi-objective optimization to jointly optimize detection effectiveness, efficiency, and confidence for deployment in resource-constrained systems. Experimental evaluations over two benchmark cybersecurity datasets demonstrate that the proposed MOO-AutoML IDS outperforms state-of-the-art IDSs, establishing a new benchmark for autonomous, efficient, and optimized security for networks. 
Designed to support IoT and edge environments with resource constraints, the proposed framework is applicable to a variety of autonomous cybersecurity applications across diverse networked environments.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"1244-1264"},"PeriodicalIF":0.0,"publicationDate":"2025-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11240569","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
IEEE Transactions on Machine Learning in Communications and Networking