Pub Date: 2025-11-19 | DOI: 10.1109/TMLCN.2025.3634994
Fatemeh Lotfi;Hossein Rajoli;Fatemeh Afghah
Next-generation networks utilize the Open Radio Access Network (O-RAN) architecture to enable dynamic resource management, facilitated by the RAN Intelligent Controller (RIC). While deep reinforcement learning (DRL) models show promise in optimizing network resources, they often struggle with robustness and generalizability in dynamic environments. This paper introduces a novel resource management approach that enhances the Soft Actor-Critic (SAC) algorithm with Sharpness-Aware Minimization (SAM) in a distributed Multi-Agent RL (MARL) framework. Our method introduces an adaptive and selective SAM mechanism, where regularization is explicitly driven by temporal-difference (TD)-error variance, ensuring that only agents facing high environmental complexity are regularized. This targeted strategy reduces unnecessary overhead, improves training stability, and enhances generalization without sacrificing learning efficiency. We further incorporate a dynamic $\rho$ scheduling scheme to refine the exploration-exploitation trade-off across agents. Experimental results show our method significantly outperforms conventional DRL approaches, yielding up to a 22% improvement in resource allocation efficiency and ensuring superior QoS satisfaction across diverse O-RAN slices.
Title: "Task-Specific Sharpness-Aware O-RAN Resource Management Using Multi-Agent Reinforcement Learning"
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 98-114.
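The selective, TD-error-variance-gated SAM update described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the variance threshold, the variance-to-rho schedule, and the quadratic test loss are all assumptions.

```python
import numpy as np

def sam_step(params, grad_fn, td_errors, var_threshold=0.5, rho_max=0.05, lr=0.01):
    """One selective SAM-style update (toy sketch).

    Regularization is gated by TD-error variance: an agent in a calm
    environment (low variance) takes a plain gradient step; an agent with
    high variance takes a sharpness-aware step at a perturbed point.
    Returns (new_params, used_sam).
    """
    td_var = np.var(td_errors)
    grad = grad_fn(params)
    if td_var < var_threshold:
        return params - lr * grad, False            # plain gradient step
    # Dynamic rho: scale the perturbation radius with the observed variance.
    rho = rho_max * min(1.0, td_var / (td_var + 1.0))
    eps = rho * grad / (np.linalg.norm(grad) + 1e-12)
    grad_adv = grad_fn(params + eps)                # gradient at the perturbed point
    return params - lr * grad_adv, True

# Toy quadratic loss L(w) = ||w||^2, so grad = 2w.
grad_fn = lambda w: 2.0 * w
w = np.array([1.0, -1.0])
w_calm, used_sam_calm = sam_step(w, grad_fn, td_errors=[0.1, 0.1, 0.1])
w_hard, used_sam_hard = sam_step(w, grad_fn, td_errors=[-3.0, 2.0, 4.0])
```

Only the second call triggers the SAM branch, since its TD errors have high variance; the first agent pays no SAM overhead, mirroring the paper's motivation for selectivity.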
To meet the ever-increasing demand for higher data rates in mobile networks across generations, many novel schemes have been proposed in the standards. One such scheme is carrier aggregation (CA). Simply put, CA is a technique that allows mobile networks to combine multiple carriers to increase data rate and improve network efficiency. On the uplink, for power-constrained users, this translates to the need for an efficient resource allocation scheme, where each user distributes its available power among its assigned uplink carriers. Choosing a good set of carriers and allocating appropriate power on the carriers is of paramount importance for good performance. Another factor that is critical to obtaining good performance is how well the degradation caused by the harmonic/intermodulation terms generated by the user’s transmitter non-linearities is handled. For example, if the carrier allocation is such that a harmonic of a user’s uplink carrier falls on the downlink frequency of that user, it leads to a self-coupling-induced sensitivity degradation of that user’s downlink receiver. Considering these factors, in this paper, we model the uplink carrier aggregation problem as an optimal resource allocation problem with the associated constraints of non-linearity-induced self-interference (SI). This involves optimization over a discrete variable (which carriers need to be turned on) and a continuous variable (what power needs to be allocated on the selected carriers) in dynamic environments, a problem which is hard to solve using traditional methods owing to the mixed nature of the optimization variables and the additional need to consider the SI constraint in the problem. Therefore, in this paper, we adopt a reinforcement learning (RL) framework involving a compound-action actor-critic (CA2C) algorithm for the uplink carrier aggregation problem. 
We propose a novel reward function that is critical for enabling the proposed CA2C algorithm to efficiently handle SI. The CA2C algorithm, together with the proposed reward function, learns to assign and activate suitable carriers in an online fashion. Numerical results demonstrate that the proposed RL-based scheme achieves higher sum throughputs than naive schemes. The results also demonstrate that the proposed reward function allows the CA2C algorithm to adapt the optimization both in the presence and absence of SI.
Title: "A Reinforcement Learning Framework for Resource Allocation in Uplink Carrier Aggregation in the Presence of Self Interference"
Jaswanth Bodempudi;Batta Siva Sairam;Madepalli Haritha;Sandesh Rao Mattu;Ananthanarayanan Chockalingam
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1265-1286. Pub Date: 2025-11-14 | DOI: 10.1109/TMLCN.2025.3633248
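The compound action pairs a discrete carrier mask with a continuous power split, and the reward must punish allocations whose transmitter harmonics land on the user's downlink band. A toy reward with that structure might look as follows; the sum-rate form, the 2nd-harmonic check, and the penalty weight are illustrative assumptions, not the paper's exact shaping.

```python
import numpy as np

def ca_reward(carrier_mask, powers, gains, ul_freqs, dl_freq, bw=0.1, si_penalty=5.0):
    """Toy reward for uplink CA with a self-interference (SI) penalty.

    `carrier_mask` is the discrete part of the compound action (which
    carriers are on), `powers` the continuous part (power per carrier).
    Reward = Shannon-style sum rate over active carriers, minus a penalty
    if the 2nd harmonic of any active uplink carrier falls in the downlink band.
    """
    active = np.asarray(carrier_mask, dtype=bool)
    powers = np.asarray(powers, dtype=float)
    gains = np.asarray(gains, dtype=float)
    ul_freqs = np.asarray(ul_freqs, dtype=float)
    rate = np.sum(np.log2(1.0 + powers[active] * gains[active]))
    harmonics = 2.0 * ul_freqs[active]               # 2nd-harmonic frequencies
    si = np.any(np.abs(harmonics - dl_freq) < bw / 2.0)
    return rate - (si_penalty if si else 0.0)

gains = np.array([1.0, 1.0])
ul = np.array([0.9, 1.2])     # illustrative carrier frequencies (GHz)
# Carrier 1's 2nd harmonic (2.4) hits the downlink at 2.4; carrier 0's (1.8) does not.
r_clean = ca_reward([1, 0], [1.0, 0.0], gains, ul, dl_freq=2.4)
r_si    = ca_reward([0, 1], [0.0, 1.0], gains, ul, dl_freq=2.4)
```

With this shaping the agent is steered toward the SI-free carrier even though both give the same raw rate, which is the behavior the proposed reward function is designed to induce.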
Pub Date: 2025-11-11 | DOI: 10.1109/TMLCN.2025.3631379
Li Yang;Abdallah Shami
With increasingly sophisticated cybersecurity threats and rising demand for network automation, autonomous cybersecurity mechanisms are becoming critical for securing modern networks. The rapid expansion of Internet of Things (IoT) systems amplifies these challenges, as resource-constrained IoT devices demand scalable and efficient security solutions. In this work, an innovative Intrusion Detection System (IDS) utilizing Automated Machine Learning (AutoML) and Multi-Objective Optimization (MOO) is proposed for autonomous and optimized cyber-attack detection in modern networking environments. The proposed IDS framework integrates two primary innovative techniques: Optimized Importance and Percentage-based Automated Feature Selection (OIP-AutoFS) and Optimized Performance, Confidence, and Efficiency-based Combined Algorithm Selection and Hyperparameter Optimization (OPCE-CASH). These components optimize feature selection and model learning processes to strike a balance between intrusion detection effectiveness and computational efficiency. This work presents the first IDS framework that integrates all four AutoML stages and employs multi-objective optimization to jointly optimize detection effectiveness, efficiency, and confidence for deployment in resource-constrained systems. Experimental evaluations over two benchmark cybersecurity datasets demonstrate that the proposed MOO-AutoML IDS outperforms state-of-the-art IDSs, establishing a new benchmark for autonomous, efficient, and optimized security for networks. Designed to support IoT and edge environments with resource constraints, the proposed framework is applicable to a variety of autonomous cybersecurity applications across diverse networked environments.
Title: "Toward Autonomous and Efficient Cybersecurity: A Multi-Objective AutoML-Based Intrusion Detection System"
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1244-1264.
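The combined algorithm selection and hyperparameter optimization (CASH) step with a multi-objective score can be sketched as a search over a candidate grid. The weighted-sum scalarization, the candidate space, and the mock evaluator below are illustrative assumptions; the paper's OPCE-CASH defines its own performance/confidence/efficiency objective.

```python
import itertools

def moo_cash(search_space, evaluate, weights=(0.6, 0.2, 0.2)):
    """Toy CASH loop with a weighted multi-objective score.

    `search_space` maps algorithm name -> hyperparameter grid; `evaluate`
    returns (detection_f1, confidence, efficiency) for a candidate.
    Picks the candidate maximizing the weighted sum of the three objectives.
    """
    best, best_score = None, float("-inf")
    for algo, grid in search_space.items():
        keys = list(grid)
        for values in itertools.product(*(grid[k] for k in keys)):
            config = dict(zip(keys, values))
            f1, confidence, efficiency = evaluate(algo, config)
            score = weights[0]*f1 + weights[1]*confidence + weights[2]*efficiency
            if score > best_score:
                best, best_score = (algo, config), score
    return best, best_score

# Mock evaluator: pretend deeper trees detect slightly better but cost more.
def evaluate(algo, cfg):
    if algo == "tree":
        depth = cfg["depth"]
        return 0.80 + 0.02*depth, 0.9, 1.0 - 0.1*depth   # (f1, conf, eff)
    return 0.85, 0.7, 0.8                                 # "knn" baseline

space = {"tree": {"depth": [1, 2, 3]}, "knn": {}}
(best_algo, best_cfg), score = moo_cash(space, evaluate)
```

Here the shallow tree wins: its small accuracy deficit is outweighed by efficiency and confidence, which is exactly the trade-off a multi-objective IDS for resource-constrained IoT devices is meant to capture.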
Pub Date: 2025-11-07 | DOI: 10.1109/TMLCN.2025.3630589
Xin Wang;Xudong Wang
Pulse-based systems provide a promising approach for terahertz (THz) communications, especially for joint communication and sensing applications. In such a system, THz Gaussian pulses are the fundamental and commonly used waveforms, but they are susceptible to distortion due to their large bandwidth occupation. A critical yet unresolved issue is generating THz pulses with tunable center frequencies and bandwidths. In this paper, a THz-pulse generator is designed based on diffractive surfaces cascaded in multiple layers. Given a THz Gaussian pulse as input, each surface modifies the pulse's amplitude and phase and diffracts it to the next surface, so as to generate pulses with the expected frequencies and bandwidths. To determine parameters for millions of elements on all surfaces and to handle the case of generating multiple THz pulses corresponding to the same THz Gaussian input signal, a diffractive autoencoder neural network (DANN) is developed. Subsequently, using the generated pulses for data transmission over a THz channel, the symbol error rate (SER) performance is analyzed. Extensive simulations are conducted to validate and evaluate the DANN-based THz pulse generator. Additionally, using just 5 diffractive surfaces, the generator can support at least 10 pairs of orthogonal THz pulses with a correlation error ratio of less than $10^{-1}$.
Title: "Generation of Orthogonal THz Pulses for Wireless Communications Based on Diffractive Autoencoder Neural Networks"
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1210-1226.
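The cascade of modulating surfaces amounts to alternating elementwise complex modulation with a fixed linear diffraction operator. The dense toy forward pass below shows only that cascade structure; the real design trains millions of elements via the DANN, and the unitary DFT stand-in for free-space propagation is an illustrative assumption.

```python
import numpy as np

def diffractive_forward(field, layers, prop):
    """Toy forward pass of a multi-layer diffractive pulse shaper.

    `field` is the complex input field, each entry of `layers` a complex
    transmission map (amplitude * exp(i*phase)) for one surface, and
    `prop` a fixed linear operator modeling diffraction between surfaces.
    """
    for t in layers:
        field = prop @ (t * field)   # modulate at the surface, then diffract onward
    return field

rng = np.random.default_rng(0)
n, n_layers = 16, 5
# Unitary propagation operator (normalized DFT matrix) conserves energy.
prop = np.fft.fft(np.eye(n), norm="ortho")
# Phase-only surfaces, as in many diffractive designs (random here, trained in practice).
layers = [np.exp(1j * rng.uniform(0, 2*np.pi, n)) for _ in range(n_layers)]
pulse_in = np.exp(-np.linspace(-3, 3, n)**2)         # Gaussian input pulse
pulse_out = diffractive_forward(pulse_in, layers, prop)
```

Because the surfaces are phase-only and the propagation operator is unitary, the cascade redistributes but never loses energy, which is one reason phase-only diffractive layers are attractive for passive pulse shaping.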
Pub Date: 2025-11-03 | DOI: 10.1109/TMLCN.2025.3628535
Mingyu Yang;Bowen Liu;Boyang Wang;Hun-Seok Kim
Deep learning-based joint source-channel coding (deep JSCC) has been demonstrated to be an effective approach for wireless image transmission. However, many current approaches utilize an autoencoder framework to optimize conventional metrics such as Mean Squared Error (MSE) and Structural Similarity Index (SSIM), which are inadequate for preserving the perceptual quality of reconstructed images. This issue is more prominent under stringent bandwidth constraints or low signal-to-noise ratio (SNR) conditions. To tackle this challenge, we propose DiffJSCC, a novel framework that leverages the prior knowledge of the pre-trained Stable Diffusion model to produce high-realism images via the conditional diffusion denoising process. First, our DiffJSCC employs an autoencoder structure similar to prior deep JSCC works to generate an initial image reconstruction from the noisy channel symbols. This preliminary reconstruction serves as an intermediate step from which robust multimodal spatial and textual features are extracted. In the following diffusion step, DiffJSCC uses the derived multimodal features, together with channel state information such as the SNR and channel gain, to guide the diffusion denoising process through a novel control module. To maintain the balance between realism and fidelity, an optional intermediate guidance approach using the initial image reconstruction is implemented. Extensive experiments on diverse datasets reveal that our method significantly surpasses prior deep JSCC approaches on both perceptual metrics and downstream task performance, showcasing its ability to preserve the semantics of the original transmitted images. Notably, DiffJSCC can achieve highly realistic reconstructions for $768\times 512$ pixel Kodak images with only 3072 symbols (<0.008 symbols per pixel). Code is available at https://github.com/mingyuyng/DiffJSCC
Title: "Diffusion-Aided Joint Source Channel Coding for High Realism Wireless Image Transmission"
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1227-1243.
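The headline operating point is easy to check: 3072 symbols for a 768x512 image is 1/128, or about 0.0078 symbols per pixel. A minimal sketch of the channel stage such latents pass through is below; the unit-power normalization and complex AWGN model are the standard textbook ones, and the fading gain the paper also conditions on is omitted here.

```python
import numpy as np

def awgn_channel(symbols, snr_db, rng):
    """Unit-power complex symbols through an AWGN channel at the given SNR.

    Deep-JSCC pipelines transmit encoder latents as channel symbols under
    an average power constraint; noise variance follows from the SNR.
    """
    symbols = symbols / np.sqrt(np.mean(np.abs(symbols)**2))   # power constraint
    noise_var = 10 ** (-snr_db / 10)
    noise = rng.normal(0, np.sqrt(noise_var / 2), symbols.shape) \
          + 1j * rng.normal(0, np.sqrt(noise_var / 2), symbols.shape)
    return symbols + noise

rng = np.random.default_rng(0)
k, h, w = 3072, 512, 768                 # symbol budget and Kodak image size
ratio = k / (h * w)                      # channel uses per pixel: 1/128 < 0.008
tx = rng.normal(size=k) + 1j * rng.normal(size=k)   # stand-in for encoder latents
rx = awgn_channel(tx, snr_db=10, rng=rng)
```

The extreme compression (under 1% of a symbol per pixel) is what makes a generative prior necessary: the received symbols carry far too little information for pixel-faithful recovery, so realism must come from the diffusion model.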
Pub Date: 2025-10-29 | DOI: 10.1109/TMLCN.2025.3618993
Mazene Ameur;Bouziane Brik;Adlen Ksentini
Presents corrections to the paper “Dual Self-Attention is What You Need for Model Drift Detection in 6G Networks”.
Title: "Erratum to “Dual Self-Attention is What You Need for Model Drift Detection in 6G Networks”"
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, p. 1160.
This paper studies a decentralized online constrained optimization problem characterized by a shared constraint set. Nodes in the communication and learning network conduct local computations and communications to collaboratively solve the problem. Each node can access its own local cost function, whose value depends on its decision at each time step. However, because nodes continuously exchange privacy-sensitive information, most existing algorithms for this problem are susceptible to privacy leakage. To address this challenge, we propose an effective state-decomposition-based privacy-preserving decentralized dual averaging (SD-PPDDA) algorithm. The SD-PPDDA algorithm employs a state-decomposition scheme to preserve privacy without introducing additional hidden signals (which may cause additional optimization errors) or incurring significant computational overhead. Theoretical analysis shows that the SD-PPDDA algorithm achieves the desired sublinear regret, specifically converging at a rate of $\mathcal{O}(\sqrt{K})$ (where $K$ denotes the number of iterations), while preserving the privacy of each node’s cost function. In addition, numerical simulations further validate the convergence and practicality of the algorithm.
Title: "SD-PPDDA: A Privacy Efficient Decentralized Dual Averaging Algorithm Over Networks"
Qingguo Lü;Chenglong He;Keke Zhang;Huaqing Li;Tingwen Huang
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1197-1209. Pub Date: 2025-10-24 | DOI: 10.1109/TMLCN.2025.3625519
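The dual-averaging backbone behind the $\mathcal{O}(\sqrt{K})$ regret can be sketched for a tiny network: each node mixes dual variables with neighbors through a doubly stochastic matrix and maps back to the constraint set with a $1/\sqrt{k}$ step. This sketch deliberately omits the paper's state-decomposition privacy layer; the quadratic losses, complete-graph weights, and ball constraint are illustrative assumptions.

```python
import numpy as np

def dd_averaging(subgrads, W, K, radius=1.0):
    """Toy decentralized dual averaging over a network of n nodes (scalar decisions).

    Each node accumulates its local subgradient into a dual variable z,
    mixes z with neighbors via the doubly stochastic matrix W, and maps to
    the interval [-radius, radius] with a 1/sqrt(k) step size -- the
    schedule that yields O(sqrt(K)) regret in standard analyses.
    """
    n = W.shape[0]
    z = np.zeros(n)
    x = np.zeros(n)
    for k in range(1, K + 1):
        g = np.array([sg(x[i]) for i, sg in enumerate(subgrads)])
        z = W @ z + g                                   # consensus on dual variables
        x = np.clip(-z / np.sqrt(k), -radius, radius)   # projected primal map
    return x

# Three nodes with local losses f_i(x) = (x - a_i)^2, a = [-1, 0, 1];
# the minimizer of the network-wide sum is x* = 0.
a = np.array([-1.0, 0.0, 1.0])
subgrads = [lambda x, ai=ai: 2 * (x - ai) for ai in a]
W = np.full((3, 3), 1/3)            # complete graph, uniform mixing weights
x_final = dd_averaging(subgrads, W, K=2000)
```

Despite each node only seeing its own cost (whose individual minimizers are -1, 0, and 1), the mixed dual variables pull every node's decision toward the shared optimum at 0.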
5G millimeter-wave (mmWave) communications are essential for enabling ultra-high-speed, low-latency wireless connectivity to support data-intensive applications. However, the highly directional nature and sensitivity of mmWave signals make them particularly susceptible to jamming attacks. Therefore, securing 5G mmWave communication systems against jamming attacks is critical for ensuring reliable wireless connectivity in mission-critical applications. In this paper, we propose an online Bayesian Optimization (BayOpt) framework for joint analog and digital beamforming optimization at a mmWave communication device, aimed at maximizing its packet decoding rate under a constant jamming attack. By modeling the optimization objective as a black-box function and leveraging online learning to guide beam search, the BayOpt framework efficiently identifies near-optimal beam configurations in both the analog and digital domains while not requiring any knowledge of the jamming strategy or channel conditions. We have implemented the proposed anti-jamming solution on a 28 GHz mmWave testbed and conducted extensive evaluations across four distinct jamming scenarios. Over-the-air experiments demonstrate the effectiveness of the BayOpt framework in suppressing jamming interference. Notably, in a scenario where the jamming signal is 10 dB stronger than the desired signal, the BayOpt-enabled mmWave receiver achieves 73% of the throughput observed in a jamming-free environment.
Title: "Anti-Jamming 5G Millimeter-Wave Communication via Joint Analog and Digital Beamforming: A Bayesian Optimization Approach"
Peihao Yan;Bowei Zhang;Shichen Zhang;Kai Zeng;Huacheng Zeng
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1161-1177. Pub Date: 2025-10-17 | DOI: 10.1109/TMLCN.2025.3622593
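Treating the packet decoding rate as a black box over a beam codebook, a Bayesian-optimization loop can be sketched with a Gaussian-process surrogate and a UCB acquisition. The 1-D index space, RBF kernel, UCB constant, and mock decoding-rate profile below are all illustrative assumptions; the paper optimizes joint analog/digital configurations on real 28 GHz hardware.

```python
import numpy as np

def gp_ucb_beam_search(objective, n_beams, n_rounds=25, beta=2.0,
                       length=2.0, noise=1e-3, seed=0):
    """Toy Bayesian optimization over a discrete beam codebook.

    A GP with an RBF kernel over beam indices models the decoding rate as
    a black box; an upper-confidence-bound (UCB) rule picks the next beam.
    Returns the best beam observed so far.
    """
    rng = np.random.default_rng(seed)
    cand = np.arange(n_beams, dtype=float)
    kern = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / (2 * length**2))
    X = [rng.integers(n_beams) * 1.0]                 # random initial probe
    y = [objective(int(X[0]))]
    for _ in range(n_rounds - 1):
        Xa = np.array(X)
        K = kern(Xa, Xa) + noise * np.eye(len(Xa))    # jitter for stability
        Ks = kern(cand, Xa)
        mu = Ks @ np.linalg.solve(K, np.array(y))     # GP posterior mean
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))
        nxt = float(np.argmax(ucb))                   # explore/exploit trade-off
        X.append(nxt)
        y.append(objective(int(nxt)))
    return int(X[int(np.argmax(y))])

# Mock decoding-rate profile peaked at beam 17 of a 32-beam codebook.
profile = lambda i: float(np.exp(-((i - 17) / 4.0) ** 2))
best_beam = gp_ucb_beam_search(profile, n_beams=32)
```

The surrogate lets the search concentrate probes near promising beams instead of sweeping the whole codebook, which is what makes such a loop practical when each probe costs over-the-air measurement time.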
Pub Date : 2025-10-08 DOI: 10.1109/TMLCN.2025.3619447
Dieter Verbruggen;Hazem Sallouha;Sofie Pollin
The development of trustworthy and efficient Deep Learning (DL) models is vital for wireless communications, supporting tasks such as automatic modulation classification (AMC), spectrum use, and network optimization. Yet, deploying DL on resource-constrained edge devices remains challenging due to energy and reliability concerns. We propose a width-wise early exiting architecture, a variation of conventional early exiting that enables classification after processing only part of a signal frame. To further improve reliability, we introduce an early rejection mechanism, applying confidence-based abstention both at intermediate exits and the final output. In AMC experiments, our model achieves on average 40% less computation (up to 60% in some cases), while improving classification accuracy by 3% in low-SNR conditions. These results highlight the potential of our approach for robust, efficient, and trustworthy ML deployment in wireless environments.
{"title":"Deep Learning With Width-Wise Early Exiting and Rejection for Computational Efficient and Trustworthy Modulation Classification","authors":"Dieter Verbruggen;Hazem Sallouha;Sofie Pollin","doi":"10.1109/TMLCN.2025.3619447","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3619447","url":null,"abstract":"The development of trustworthy and efficient Deep Learning (DL) models is vital for wireless communications, supporting tasks such as automatic modulation classification (AMC), spectrum use, and network optimization. Yet, deploying DL on resource-constrained edge devices remains challenging due to energy and reliability concerns. We propose a width-wise early exiting architecture, a variation of conventional early exiting that enables classification after processing only part of a signal frame. To further improve reliability, we introduce an early rejection mechanism, applying confidence-based abstention both at intermediate exits and the final output. In AMC experiments, our model achieves on average 40% less computation (up to 60% in some cases), while improving classification accuracy by 3% in low-SNR conditions. These results highlight the potential of our approach for robust, efficient, and trustworthy ML deployment in wireless environments.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"1143-1159"},"PeriodicalIF":0.0,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11197046","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145352235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task-oriented semantic communications (ToSC) has received significant attention as a promising paradigm for realizing more efficient and intelligent data services. However, ToSC systems often suffer from limited generalization capabilities, requiring retraining to meet performance demands under varying channel conditions. In recent years, artificial intelligence-generated content (AIGC) has achieved remarkable success in computer vision (CV) and natural language processing (NLP), and its potential in wireless communications is also emerging. Motivated by these advances, in this paper we propose SC-Diffusion, which generates high-performance parameters for ToSC systems to address their inherent generalization challenges. Specifically, SC-Diffusion begins by using an autoencoder to extract latent representations from trained system parameters. A diffusion model is then trained to generate these latent representations from random noise. In particular, to ensure that the generated parameters are adapted to the real-time communication environment, channel information is incorporated into the diffusion model as conditioning information. Finally, the latent representations are decoded by the autoencoder's decoder to yield the final system parameters. In experiments across various ToSC architectures and real-world datasets, SC-Diffusion consistently generates models that perform comparably to or better than the original trained models, with minimal additional computational overhead.
{"title":"SC-Diffusion: Parameter Generation for Task-Oriented Semantic Communication Systems via Conditional Diffusion Model","authors":"Yanhu Wang;Shuang Zhang;Anbang Zhang;Shuping Dang;Han Zhang;Shuaishuai Guo","doi":"10.1109/TMLCN.2025.3618802","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3618802","url":null,"abstract":"Task-oriented semantic communications (ToSC) has received significant attention as a promising paradigm for realizing more efficient and intelligent data services. However, ToSC systems often suffer from limited generalization capabilities, requiring retraining to meet performance demands under varying channel conditions. In recent years, artificial intelligence generated content (AIGC) has shone in computer vision (CV) and natural language processing (NLP), and its potential in wireless communications is also emerging. Motivated by these advances, we propose semantic communications (SC)-diffusion in this paper, which generates high-performance parameters for ToSC systems to address the inherent challenges of semantic communications. Specifically, SC-diffusion begins by using an autoencoder to extract latent representations from trained system parameters. A diffusion model is then trained to generate these latent representations from random noise. In particular, to ensure that the generated parameters are adapted to the real-time communication environment, we incorporate channel information as conditional information into the diffusion model. Finally, the latent representations are decoded by the autoencoder’s decoder to yield the final system parameters. 
In experiments across various ToSC architectures and real-world datasets, SC-Diffusion consistently generates models that perform comparably to or better than the original trained models, with minimal additional computational overhead.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"1108-1120"},"PeriodicalIF":0.0,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11195863","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
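The generation pipeline the abstract outlines (noise → conditional reverse diffusion in latent space → autoencoder decoder → system parameters) can be sketched as a toy DDPM-style sampling loop. Everything here is a stated assumption: the noise predictor `eps_theta` and the decoder are untrained linear stubs standing in for the paper's learned networks, the channel condition is reduced to a scalar SNR, and the dimensions and schedule are invented; only the conditional reverse-sampling structure is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, PARAM_DIM, T = 16, 128, 50           # illustrative sizes / steps

betas = np.linspace(1e-4, 0.02, T)               # standard linear schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

W_eps = rng.standard_normal((LATENT_DIM, LATENT_DIM + 1)) * 0.05  # stub denoiser
W_dec = rng.standard_normal((PARAM_DIM, LATENT_DIM)) * 0.1        # stub AE decoder

def eps_theta(z, t, channel_snr_db):
    """Stub noise predictor conditioned on channel state (scalar SNR here);
    in the paper this is a trained network and the condition is richer."""
    cond = np.append(z, channel_snr_db / 30.0)   # concatenate the condition
    return W_eps @ cond

def sample_parameters(channel_snr_db):
    z = rng.standard_normal(LATENT_DIM)          # start from pure noise
    for t in reversed(range(T)):                 # reverse diffusion
        eps = eps_theta(z, t, channel_snr_db)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        z = (z - coef * eps) / np.sqrt(alphas[t])
        if t > 0:
            z += np.sqrt(betas[t]) * rng.standard_normal(LATENT_DIM)
    return W_dec @ z                             # decode latent -> parameters

params = sample_parameters(channel_snr_db=10.0)
```

Feeding the channel state into the denoiser at every reverse step is what ties the generated parameters to the current environment: changing `channel_snr_db` changes the sampling trajectory, so no retraining of the ToSC system itself is needed.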